Deploying Deep Learning Networks to Embedded GPUs and CPUs using GPU Coder
Overview
Designing and deploying deep learning and computer vision applications to embedded CPU and GPU platforms is challenging because of the resource constraints inherent in embedded devices. A MATLAB®-based workflow facilitates the design of these applications, and automatically generated C or CUDA® code can be deployed on boards like the NVIDIA Jetson TX2 and DRIVE PX to achieve high-performance inference.
The presentation illustrates how MATLAB supports all major phases of this workflow. In the algorithm design phase, deep learning networks can be combined with traditional computer vision techniques, and the resulting algorithm can be tested and verified within MATLAB. Next, the networks are trained using GPU and parallel computing support for MATLAB, whether on the desktop, on a cluster, or in the cloud. Finally, GPU Coder™ generates portable and optimized C/C++ and/or CUDA® code from the MATLAB algorithm, which is then cross-compiled and deployed to CPUs and/or Tegra® boards. Benchmarks show that the auto-generated CUDA code runs inference ~5x faster than TensorFlow® and ~2x faster than MXNet.
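As a rough sketch of the code-generation step described above (the function name myPredict, the file name trainedNet.mat, and the 224x224x3 input size are illustrative assumptions, not details from the talk), a trained network is typically wrapped in an entry-point function:

```matlab
function out = myPredict(in) %#codegen
% myPredict.m -- illustrative entry-point function for code generation
persistent net;
if isempty(net)
    % Load the trained network once and reuse it across calls;
    % 'trainedNet.mat' is an assumed file name
    net = coder.loadDeepLearningNetwork('trainedNet.mat');
end
out = predict(net, in);
end
```

GPU Coder can then generate CUDA code for this function, here targeting the cuDNN library as one possible configuration:

```matlab
% Configure GPU Coder to generate a CUDA static library using cuDNN
cfg = coder.gpuConfig('lib');
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% Generate CUDA code for a single 224x224x3 image (input size is an assumption)
codegen -config cfg myPredict -args {ones(224,224,3,'single')}
```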
Highlights
Watch this talk to learn how to (a short MATLAB sketch illustrating these steps appears after the list):
1. Access and manage large image sets
2. Visualize networks and gain insight into the training process
3. Import reference networks such as AlexNet and GoogLeNet
4. Automatically generate portable and optimized CUDA code from the MATLAB algorithm for NVIDIA GPUs
5. Automatically generate portable and optimized code from deep learning networks implemented in MATLAB for Intel Xeon CPUs and ARM Cortex-A processors
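A minimal MATLAB sketch of these steps follows; the image folder, the choice of AlexNet, and the MKL-DNN target are assumptions for illustration, and myPredict refers to the entry-point sketch above:

```matlab
% Manage a large image set; the folder path and labels-from-folder-names
% convention are assumptions
imds = imageDatastore('pathToImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% Import a reference network (requires the free AlexNet support package)
net = alexnet;

% Visualize the network and inspect its architecture before training
analyzeNetwork(net)

% Retarget code generation at an Intel Xeon CPU via the MKL-DNN library;
% AlexNet expects 227x227x3 inputs. Recent releases also offer
% coder.DeepLearningConfig('arm-compute') for ARM Cortex-A targets.
cfg = coder.config('lib');
cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');
codegen -config cfg myPredict -args {ones(227,227,3,'single')}
```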
About the Presenter
Bill Chou is the Product Marketing Manager for GPU Coder and has been working with MathWorks code generation technologies for the past 12 years. Bill holds an M.S. degree in Electrical Engineering from the University of Southern California and a B.A.Sc. degree in Electrical Engineering from the University of British Columbia.
Recorded: 20 Jun 2018