Browse all available sessions and instructor-led trainings below. Click “Add to Schedule” to build your personal agenda and reserve your place at instructor-led trainings.
Prerequisites: Basic technical background
Explore the fundamentals of deep learning by training neural networks and using results to improve performance and capabilities.
In this hands-on course, you’ll learn the basics of deep learning by training and deploying neural networks. You’ll learn how to:
Upon completion of this workshop, you’ll be able to start solving problems on your own with deep learning.
You will need to purchase a special pass to attend this full-day workshop. See GTC Pricing for more information.
DLI Instructor-Led Workshop: Adam Thompson - Senior Solutions Architect, NVIDIA
Prerequisites: Basic C/C++ competency
The CUDA computing platform enables the acceleration of CPU-only applications to run on the world's fastest massively parallel GPUs. Experience C/C++ application acceleration by:
Upon completion of this workshop, you'll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA tools and techniques. You’ll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast.
You will need to purchase a special pass to attend this full-day workshop. See GTC Pricing for more information.
DLI Instructor-Led Workshop: Robert Crovella - OEM Technical Enablement Manager, NVIDIA
Prerequisites: Basic familiarity with concepts of deep learning and convolutional neural networks
This hands-on course explores how to apply Convolutional Neural Networks (CNNs) to MRI scans to perform a variety of medical tasks and calculations. You’ll learn how to:
Upon completion of this workshop, you’ll be able to apply CNNs to MRI scans to conduct a variety of medical tasks.
You will need to purchase a special pass to attend this full-day workshop. See GTC Pricing for more information.
DLI Instructor-Led Workshop: David Nola - Deep Learning Solutions Architect, Healthcare, NVIDIA
Prerequisites: Basic experience with neural networks
Explore the latest techniques for understanding textual input using natural language processing (NLP). You’ll learn how to:
Upon completion of this workshop, you'll be proficient in applying NLP with embeddings to similar applications.
Attend this half-day session on Monday, October 22 from 1:00-5:00pm to hear how AI is transforming operations now and how deep learning will influence the future.
Session topics include: "Bootstrap AI Projects within the Government," "Building Blocks of AI, Improving ROI with Computer Vision and Natural Language Processing," "How to effectively recruit and train employees while building your AI capability," and "The Future of AI."
Sponsor Session: James Chung, Booz Allen Hamilton
Examples of AI enhancing traditional products and services are reported daily, but the benefits have only begun to scratch the surface. Entire industries will be transformed, and massive benefits will be realized in the next wave of AI deployment. Learn about the next generation of AI and how it will add over 60M jobs and $13 trillion to the global economy if policymakers, businesses, and the AI community adopt the right strategies, initiatives, and platforms to harness this incredible new technology.
Keynote: Ian Buck - VP, Accelerated Computing, NVIDIA
Numerous Fortune 500 customers experience latency and performance issues in their machine learning and data pipelines. Big data platforms and solutions have tried to address these challenges with massive infrastructure scale-out, but the cost of scaling relative to the volume and velocity of today's data is prohibitive. NVIDIA is addressing these challenges with RAPIDS, an end-to-end GPU-accelerated data science software stack that lets enterprises explore and integrate AI into their core data-driven decision-making processes. Learn how to get started with GPU-accelerated data science and quickly identify opportunities to accelerate machine learning workflows in your organization.
Talk: Jeffrey Tseng - Head of Product, AI Infrastructure, NVIDIA
Learn how to keep your GPUs fed with data as you train the next generation of deep learning architectures. As GPU technology continues to advance, the demand for faster data continues to grow. In deep learning, input pipelines are responsible for a complex chain of actions that ultimately feed data into GPU memory: defining how files are read from storage, deserializing them into data structures, pre-processing on a CPU, and copying to the GPU. These pipelines bring together complex hardware systems--including cluster networks, peripheral interconnects, modern CPUs, and storage devices--along with sophisticated software systems to drive the data movement and transformation. In this talk, we present a new benchmark suite for evaluating and tuning input pipelines. We will examine results with TensorFlow's DataSets API on a DGX-1 with V100 and provide guidance on key tuning parameters and diagnostic techniques for improving performance.
Sponsor Session: Brian Gold - Founding Member, FlashBlade, Pure Storage
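The core idea behind such input pipelines is overlapping the read/decode/pre-process stages with consumption, so the accelerator never waits on data. As a minimal, framework-agnostic sketch (not the TensorFlow DataSets API discussed in the talk), a background-thread prefetcher illustrates the pattern; `load_batches` is a hypothetical stage standing in for disk reads and decoding:

```python
import queue
import threading
import time

def prefetch(generator, buffer_size=4):
    """Run `generator` on a background thread, keeping up to
    `buffer_size` items ready so the consumer (e.g. a GPU training
    step) never waits on I/O or pre-processing."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def producer():
        for item in generator:
            q.put(item)      # blocks when the buffer is full
        q.put(sentinel)      # signal end of stream

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

# Hypothetical pipeline stage: "read + decode" simulated with sleep.
def load_batches(n):
    for i in range(n):
        time.sleep(0.01)     # pretend this is storage + decode time
        yield [i] * 8        # a "batch" of samples

batches = list(prefetch(load_batches(5)))
```

Real frameworks add parallel decoding and pinned-memory copies to the GPU on top of this basic producer/consumer structure, which is where the tuning parameters the talk covers come in.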
Attendees will learn how to address the challenges of building AI systems, based on the design principles and technologies proven successful in Penguin Computing AI deployments for customers in the Top 500. Lessons learned will focus on end-to-end aspects of designing and deploying large-scale GPU clusters, including datacenter and environmental challenges, network performance and optimization, and data pipeline and storage challenges, as well as workload orchestration and optimization. Attendees will hear about real-life deployments for private organizations and government labs, including those using OCP technology.
Sponsor Session: Sid Mair - Sr. Vice President Federal Systems, Penguin Computing
Launching today: IBM introduces new software for managing large-scale data sets to facilitate both HPC and AI data lifecycles.
New simulation and AI models are dramatically shifting the traditional data lifecycle, from intelligent archiving to continuous review, revision, and extraction of growing data sets. The result is a greater need to tag, track, and report on metadata in order to extract training sets, understand usage patterns, and manage storage.
In this presentation, I will cover new IBM Spectrum software that delivers better metadata management, and review case studies of HPC and AI clients who have been testing it.
Sponsor Session: Doug O'Flaherty - Portfolio Marketing, IBM
Prerequisites: Advanced experience with neural networks and knowledge of financial industry
Learn the fundamentals of natural language processing (NLP) as it applies to the generation of trade signals from real-time news data. In this session, you'll leverage a dataset of news article headlines to:
Upon completion, you'll be able to apply NLP to generate trade signals from real-time news data.
Instructor-Led Training: Yuval Mazor - Senior Solutions Architect, Deep Learning, NVIDIA
Prerequisites: Experience with C++
Understanding video requires multi-stream decoding/encoding, scaling, color space conversion, tracking, and multi-stage inference. The DeepStream SDK lets developers focus on core deep learning development while providing the best system-level software optimization and performance. You'll learn how to:
Upon completion, you'll know how to create AI-based video analytics applications using DeepStream to transform video into valuable insights.
Instructor-Led Training: Nicholas Becker - Deep Learning Solutions Architect, NVIDIA
Get started with an introduction to object detection and image segmentation. You'll explore the shift from traditional computer vision techniques to innovative methods based on deep learning and convolutional neural networks (CNNs) for object detection. You'll learn how to:
Upon completion, you'll understand how to implement object detection networks with the API in TensorFlow.
Instructor-Led Training: Jonathan Howe, NVIDIA
Learn how to effectively schedule and manage your system workload using Slurm, the free, open-source, and highly scalable cluster management and job scheduling system for Linux clusters. Slurm is in use today on roughly half of the world's largest systems, servicing a broad spectrum of applications. Slurm developers have been working closely with NVIDIA to provide capabilities specifically focused on the needs of GPU management. These include a multitude of new options to specify GPU requirements for a job in various ways (GPU count per job, node, socket, and/or task), additional resource requirements for allocated GPUs (CPUs and/or memory per GPU), how spawned tasks should be bound to allocated GPUs, and control over GPU frequency and voltage. An introduction to Slurm's design and capabilities will be presented, with a focus on managing workloads for GPUs.
Sponsor Session: Morris Jette - Developer, SchedMD LLC
Analytics and AI present a serious challenge to businesses: developing new expertise and transforming data architectures from enterprise-class to AI-ready. AI workloads demand a different approach to managing the data lifecycle. The new AI datacenter must be optimized for ingesting, storing, transforming, and optimizing data; feeding that data through hyper-intensive analytics workflows; and, ultimately, extracting value. Failing fast during experimentation and quickly scaling successful models to production are vital. Learn how to architect and deploy data platforms with robust, balanced performance for all I/O patterns.
Sponsor Session: Kurt Kuckein - Sr. Director, Marketing, DDN Storage
Facebook's strength in AI innovation comes from its ability to quickly bring cutting-edge research into large scale production using a multi-faceted toolset. Learn how ONNX and PyTorch 1.0 are helping to accelerate the path from research to production by making AI development more seamless and interoperable. We'll share the latest on PyTorch 1.0 and discuss Facebook's initiatives around ethical and responsible AI development.
Sponsor Session: Sarah Bird - Technical Program Manager, Facebook
AI's potential cuts across all industries, from agriculture to healthcare to oil and gas and more. But it also can help government agencies be more efficient, better at identifying waste and fraud, and more responsive and convenient for all Americans. This panel will discuss the various AI applications that can make government smarter, and how we get there.
Panel: Sunmin Kim - Technology Policy Advisor, Office of U.S. Senator Brian Schatz
Learn how RAPIDS and the open source ecosystem are advancing data science. In this session, we will explore RAPIDS, the new open-source data science platform from NVIDIA. Dive deep into the RAPIDS platform and learn how to get started leveraging its open-source libraries for easier development and higher-performance data science on GPUs. See the latest engineering work, including benchmarks and demos. Finally, see how customers are benefiting from early primitives and outperforming CPU equivalents.
Talk: Joshua Patterson - Director, AI Infrastructure, NVIDIA
With the changing demands of application workloads and the huge growth of data, new approaches to solving mission-critical problems are needed. Conventional scale-out systems and methods are often not sufficient. This session will cover some of the challenges we face, along with solutions that combine innovative hardware architectures with a robust deep learning software ecosystem to address these challenges.
Sponsor Session: Aaron Potler - Distinguished Engineer - U.S. Federal IBM Global Markets - Systems HW Sales, IBM
This three part presentation will explore how Radiant Solutions is making it possible to see and understand our changing world by applying computer vision to satellite imagery, enabling interactive terrain analytics, and powering immersive analytics in virtual reality.
Kevin McGee will speak first about machine learning and computer vision, Ryan Smith will follow with terrain analytics, and Nick Deliman will close with virtual reality.
Talk: Nicholas Deliman - Senior Scientist, Radiant Solutions
Sponsor Session: Michael Shrader - Vice President, Innovative and Intelligence Solutions, Carahsoft
The rise of GPU-accelerated data science and AI has come about through a combination of open source innovation and better tooling to support reproducible workflows. However, as the diverse array of deep learning libraries continues to mature, attention is moving to other parts of the AI pipeline, including simulation, ETL, and deployment. In this talk, I'll review open source projects that address these other areas, such as Numba, for implementing custom simulations and data transformations on the GPU, and PyGDF, for GPU-accelerated dataframes. I'll discuss how the Anaconda Distribution and its conda packaging system help data scientists create reproducible environments and deploy models. Finally, I'll talk about how Anaconda Enterprise allows data science teams to collaborate efficiently on GPU-accelerated projects and supports AI workflows from data exploration all the way to deployment.
Talk: Stanley Seibert - Director of Community Innovation, Anaconda
Artificial intelligence has opened a new class of efficient technologies to help the American farmer, from AI-assisted thinning, weeding, and spraying for row crops to automated soft fruit picking. This panel will discuss the latest AI innovations for the farm and the policies that will continue to advance U.S. agriculture through the 21st century.
Panel: Trevor White - Professional Staff, U.S. House Agriculture Committee
The convergence of artificial intelligence and virtual reality has accelerated within the past few years. With techniques like deep learning elevating the efficiency of AI, the government can now approach training simulations for specific missions based on the information provided by AI algorithms. This session will dive into the different initiatives needed to empower these emerging technologies and address the benefits that will make the world a safer place.
Sponsor Session: Cameron Kruse - Lead Technologist, Booz Allen Hamilton
This panel will explore the challenges and opportunities of operationalizing and sustaining artificial intelligence (AI) enabled systems in complex, high-consequence disaster response and recovery scenarios. How can the Department of Defense (DoD), Federal Aviation Administration (FAA), National Oceanic and Atmospheric Administration (NOAA), and other agencies leverage AI across the disaster preparation, response, and recovery spectrum? How might AI impact training and procedures? What are the challenges and opportunities?
Sponsor Session: Jana Eggers - CEO, Nara Logics
What will the sport of the future look like? Join The Drone Racing League (DRL), the global, professional circuit for drone racing, for an engaging discussion on how AI will transform sports. The Star Wars-inspired league recently announced a multi-year partnership with Lockheed Martin to accelerate AI innovation, and will soon launch the premier global autonomous drone racing platform. Learn how to get involved in DRL's new Artificial Intelligence Robotic Racing (AIRR) Circuit, which will challenge teams of the world's best engineers to design an AI/ML framework, powered by the NVIDIA Jetson platform, capable of flying a drone autonomously through complex 3D tracks -- all for the chance to win more than $2 million in prize money.
Talk: Ryan Gury - Director of Product, Drone Racing League
Prerequisites: Advanced experience with neural networks and knowledge of financial industry
The "unsupervised" and "end-to-end" detection of anomalies in transactional data is one of the long-standing challenges in financial statement audits and fraud investigations. In this session, you'll explore how autoencoder neural networks can be trained to detect anomalies by learning a compressed but "lossy" model of regular transactions. You'll learn:
Upon completion, you'll understand how to train deep autoencoder neural networks to detect anomalies in financial transactions.
Instructor-Led Training: Yuval Mazor - Senior Solutions Architect, Deep Learning, NVIDIA
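The anomaly-detection idea can be illustrated in a few lines: an autoencoder trained only on regular data reconstructs regular records well but reconstructs anomalies poorly, so reconstruction error becomes the anomaly score. Below is a minimal NumPy sketch under simplifying assumptions (a linear autoencoder on synthetic 2-D "transactions", not the deep networks or real transactional data the session uses):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Regular transactions": 2-D points lying near the line y = 2x,
# so they are well described by a 1-D latent code.
x = rng.normal(size=(500, 1))
data = np.hstack([x, 2 * x]) + 0.05 * rng.normal(size=(500, 2))

# Linear autoencoder: encode 2-D -> 1-D -> decode back to 2-D.
W_enc = rng.normal(size=(2, 1)) * 0.1
W_dec = rng.normal(size=(1, 2)) * 0.1

lr = 0.1
for _ in range(2000):
    z = data @ W_enc                 # compressed ("lossy") code
    recon = z @ W_dec                # reconstruction
    err = recon - data
    # Gradient descent on mean squared reconstruction error.
    W_dec -= lr * z.T @ err / len(data)
    W_enc -= lr * data.T @ (err @ W_dec.T) / len(data)

def anomaly_score(points):
    """Reconstruction error: high for records that violate the
    pattern the autoencoder learned from regular data."""
    recon = points @ W_enc @ W_dec
    return np.sum((recon - points) ** 2, axis=1)

score_regular = float(anomaly_score(np.array([[1.0, 2.0]]))[0])
score_anomaly = float(anomaly_score(np.array([[1.0, -2.0]]))[0])
```

In practice the same scoring logic applies, but with deep nonlinear encoder/decoder stacks and a threshold on the score chosen to balance false positives against missed anomalies.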
Get a hands-on practical introduction to deep learning for radiology and medical imaging. You'll learn how to:
Upon completion, you'll be able to apply CNNs to classify images in a medical imaging dataset.
Instructor-Led Training: Cristiana Dinea - Master Instructor, DLI, NVIDIA
Prerequisites: Basic Python and NumPy competency
Explore an introduction to Numba, a just-in-time function compiler that allows developers to utilize the CUDA platform in their Python applications. You'll learn how to:
Upon completion, you'll be able to use Numba to GPU-accelerate NumPy ufuncs in your Python code, and will be ready to learn how to write custom CUDA kernels in Python.
Robert Crovella - OEM Technical Enablement Manager, NVIDIA
Autonomous vehicles can drastically increase the safety of U.S. roads, where federal officials estimate 94% of all traffic accidents are caused by human error. Beyond road safety, AVs can also improve quality of life by reducing congestion and freeing up valuable time spent in the car. To realize these significant benefits, manufacturers must develop this technology safely and comprehensively. This session will explore how breakthrough technologies like virtual reality simulation and deep learning are helping to accelerate safe development and deployment of AVs. It will also address how manufacturers and regulators can work to ensure the rules of the road evolve with autonomous driving technology, and the further benefits that driverless vehicles bring to our everyday lives.
Panel: Heidi King - Deputy Administrator, National Highway Traffic Safety Administration (NHTSA)
In this session, we'll explore some of the common challenges with scaling out deep learning training and inference deployment on data centers and public cloud using Kubernetes on NVIDIA GPUs. Through examples, we'll review a typical workflow for AI deployments on Kubernetes. We'll discuss advanced deployment options such as deploying to heterogeneous GPU clusters, specifying GPU memory requirements, and analyzing and monitoring GPU utilization using NVIDIA DCGM, Prometheus, and Grafana.
Talk: Shashank Prasanna - Sr. Solutions Architect for Autonomous Driving, NVIDIA
For the first time, cyber defenders have access to technical solutions that can proactively detect and combat future zero-day attacks at the pace of the cyber mission. This is only made possible by augmenting established cyber defenses with NVIDIA's RAPIDS platform, accelerated GPU hardware, and artificial intelligence techniques deployed at the edge and in the data center. In this session, we'll cover the shortcomings of traditional cyber methods and tools, and how AI is the force multiplier needed to scale an end-to-end cyber capability without having to change your infrastructure.
Sponsor Session: Josh Sullivan - Senior Vice President, Booz Allen Hamilton
BlazingDB, the distributed SQL engine on GPUs, will show how we contribute to the Apache GPU Data Frame (GDF) project and how we have begun to leverage it inside BlazingDB. Through the integration of the GDF, we have been able to dramatically accelerate our data engine, achieving over 10x performance improvements. More importantly, we have built a robust framework to help users bring data from their data lake into GPU-accelerated workloads without having to ETL in CPU memory or on separate CPU clusters. Everything stays on the GPU: BlazingDB handles the SQL ETL, and pyGDF and DaskGDF can then take the results to continue machine learning workloads. With the GDF, customer workloads can keep data on the GPU, reduce network and PCIe I/O, dramatically improve ETL-heavy GPU workloads, and enable data scientists to run end-to-end data pipelines from the comfort of one GPU server or cluster.
Talk: Felipe Aramburu - CTO, Blazing DB