GPU Coder™ generates optimized CUDA® code from MATLAB® code and Simulink® models. The generated code includes CUDA kernels for parallelizable parts of your deep learning, embedded vision, and signal processing algorithms. For high performance, the generated code calls optimized NVIDIA® CUDA libraries, including TensorRT, cuDNN, cuFFT, cuSolver, and cuBLAS. The code can be integrated into your project as source code, static libraries, or dynamic libraries, and it can be compiled for desktops, servers, and GPUs embedded on NVIDIA Jetson, NVIDIA DRIVE, and other platforms. You can use the generated CUDA code within MATLAB to accelerate deep learning networks and other computationally intensive portions of your algorithm. GPU Coder lets you incorporate handwritten CUDA code into your algorithms and into the generated code.
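As a minimal sketch of the MATLAB workflow described above, the `coder.gpuConfig` and `codegen` commands generate a CUDA MEX function from a MATLAB function (the function name `myAlgorithm` and its input size are placeholders for your own code):

```matlab
% Create a code generation configuration for a CUDA MEX target.
cfg = coder.gpuConfig('mex');

% Generate CUDA code for myAlgorithm, assuming a single-precision
% 256-by-256 input; adjust -args to match your function's signature.
codegen -config cfg myAlgorithm -args {ones(256,256,'single')}

% Call the generated MEX function from MATLAB to accelerate execution.
out = myAlgorithm_mex(ones(256,256,'single'));
```

The generated `myAlgorithm_mex` runs the parallelizable portions of the algorithm as CUDA kernels on the GPU while remaining callable like any MATLAB function.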
When used with Embedded Coder®, GPU Coder lets you verify the numerical behavior of the generated code via software-in-the-loop (SIL) and processor-in-the-loop (PIL) testing.
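A hedged sketch of enabling SIL verification from the command line, assuming an Embedded Coder license (the function name `myAlgorithm` is a placeholder):

```matlab
% Library target with Embedded Coder features enabled.
cfg = coder.gpuConfig('lib', 'ecoder', true);

% Run software-in-the-loop verification so the generated code's
% numerical results can be compared against the MATLAB code.
cfg.VerificationMode = 'SIL';

codegen -config cfg myAlgorithm -args {ones(256,256,'single')}
```

PIL testing follows the same pattern with `cfg.VerificationMode = 'PIL'` plus hardware settings for the target board.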
Generate CUDA code from MATLAB code by using the GPU Coder app.
Generate CUDA code from MATLAB code by using the command-line interface.
Behavioral verification of generated code, traceability, and code generation reports.
Generate code for pretrained convolutional neural networks by using the cuDNN library.
Generate code for pretrained convolutional neural networks by using the TensorRT library.
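A minimal sketch of selecting the deep learning library target, assuming a pretrained network wrapped in an entry-point function named `myPredict` (a placeholder name):

```matlab
cfg = coder.gpuConfig('mex');

% Target the cuDNN library for the generated deep learning code ...
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% ... or target TensorRT instead for additional inference optimizations.
% cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');

codegen -config cfg myPredict -args {ones(224,224,3,'single')}
```

The same entry-point function can be regenerated against either library by switching the `coder.DeepLearningConfig` argument.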
Improve simulation speed by using NVIDIA GPUs.
Generate CUDA code from Simulink models by using GPU Coder.
Simulate and generate code for deep learning models in Simulink using library blocks.
Introduction to GPU accelerated computing.
Workflow for CUDA MEX and standalone CUDA C++ code generation.
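For the standalone C++ path, the configuration object selects a library or executable build instead of a MEX target; a sketch under the same placeholder function name:

```matlab
% Standalone executable target; 'lib' and 'dll' are also available.
cfg = coder.gpuConfig('exe');

% Generate and compile an example main so the executable can be built
% without writing a main function by hand.
cfg.GenerateExampleMain = 'GenerateCodeAndCompile';

codegen -config cfg myAlgorithm -args {ones(256,256,'single')}
```

The resulting CUDA C++ sources and build artifacts can then be integrated into a larger project or cross-compiled for embedded NVIDIA platforms.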