# Using MATLAB and Simulink for Embedded AI Deployment

With MATLAB and Simulink, you can design, simulate, test, verify, and deploy AI algorithms that enhance the performance and functionality of complex embedded systems. Discover how to prepare AI models and automatically generate code to deploy embedded AI applications to CPUs, GPUs, FPGAs, and more. Explore tutorials, examples, and videos for practical advice on embedded AI with MATLAB and Simulink.
![Deep Network Quantizer App Screenshot of a layered graph, calibration statistics, and validation results to optimize AI models for embedded deployment.](https://uk.mathworks.com/solutions/deep-learning/embedded-ai/_jcr_content/mainParsys/band_398554659_copy_/mainParsys/columns_copy_copy/507537ca-afa8-41c5-806f-bbdd06667040/image_copy_copy.adapt.full.medium.jpg/1736355373970.jpg)
## Deploy to CPUs and Microcontrollers
Generate portable, optimized C/C++ code from trained machine learning and deep learning models with MATLAB Coder and Simulink Coder.
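As a minimal sketch of this workflow: assuming a trained network saved in a file `myNet.mat` and a hypothetical entry-point function `predictEntry.m` that you write yourself, C/C++ code generation with MATLAB Coder might look like this.

```matlab
% Sketch: generate portable C++ from a trained deep learning network
% with MATLAB Coder. Assumes predictEntry.m is an entry-point function
% of roughly this form:
%
%   function out = predictEntry(in)
%       persistent net
%       if isempty(net)
%           net = coder.loadDeepLearningNetwork('myNet.mat'); % assumed model file
%       end
%       out = predict(net, in);
%   end

cfg = coder.config('lib');                                 % target: static library
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('none'); % plain C++, no vendor libraries

% Generate code for a single-precision 224x224x3 input and open the report
codegen -config cfg predictEntry -args {ones(224,224,3,'single')} -report
```

Choosing `'none'` as the deep learning target library keeps the generated code free of third-party dependencies, which suits bare-metal microcontrollers; library-based targets (for example ARM Compute) can give faster inference on supported processors.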
![GPU Coder App Screenshot of C/C++ code in Simulink being deployed to images of a NVIDIA desktop and embedded GPU.](https://uk.mathworks.com/solutions/deep-learning/embedded-ai/_jcr_content/mainParsys/band_398554659_copy_/mainParsys/columns_copy_copy/b827a46e-7d00-424f-81d8-b611fc9edab9/image_copy.adapt.full.medium.jpg/1736355374032.jpg)
## Deploy to GPUs

Use GPU Coder to generate optimized CUDA® code from trained deep learning networks for deployment to desktops, servers, and embedded GPUs.
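A sketch of the corresponding GPU workflow, reusing the same hypothetical `predictEntry.m` entry-point function as above:

```matlab
% Sketch: generate CUDA code targeting the cuDNN library with GPU Coder.
cfg = coder.gpuConfig('lib');                               % target: CUDA static library
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn'); % or 'tensorrt' for TensorRT

% Same entry point and input size as the CPU example (assumed)
codegen -config cfg predictEntry -args {ones(224,224,3,'single')} -report
```

For embedded boards such as NVIDIA Jetson, the generated code can also be built and run on the target through GPU Coder's hardware support packages.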
![Prototype and Deploy Deep Learning HDL IP Cores for FPGAs and SoCs Running FPGA-based deep learning inference on prototype hardware from MATLAB, then generating a deep learning HDL IP core for deployment on any FPGA or ASIC.](https://uk.mathworks.com/solutions/deep-learning/embedded-ai/_jcr_content/mainParsys/band_398554659_copy_/mainParsys/columns_copy_copy_co/507537ca-afa8-41c5-806f-bbdd06667040/image_copy_copy.adapt.full.medium.jpg/1736355374128.jpg)
## Deploy to FPGAs and SoCs
Prototype and implement deep learning networks on FPGAs and SoCs with Deep Learning HDL Toolbox. Generate custom deep learning processor IP cores and bitstreams with HDL Coder.
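A sketch of FPGA prototyping with Deep Learning HDL Toolbox, assuming a trained network `net` and a Xilinx ZCU102 board connected over Ethernet (board, interface, and input image are illustrative assumptions):

```matlab
% Sketch: run FPGA-based inference from MATLAB with Deep Learning HDL Toolbox.
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');   % assumed board connection

hW = dlhdl.Workflow('Network', net, ...                      % net: trained network (assumed)
    'Bitstream', 'zcu102_single', ...                        % prebuilt single-precision bitstream
    'Target', hTarget);

hW.compile;                                                  % compile network for the DL processor
hW.deploy;                                                   % program the board with the bitstream

% Run inference on the board with profiling enabled
[prediction, speed] = hW.predict(inputImg, 'Profile', 'on'); % inputImg: assumed test image
```

Once the prototype meets requirements, HDL Coder can generate a custom deep learning processor IP core for integration into a production FPGA or SoC design.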
![Deep Network Quantizer App Screenshot of a layered graph, calibration statistics, and validation results to optimize AI models for embedded deployment.](https://uk.mathworks.com/solutions/deep-learning/embedded-ai/_jcr_content/mainParsys/band_398554659_copy_/mainParsys/columns_copy_copy_co/b827a46e-7d00-424f-81d8-b611fc9edab9/image_copy.adapt.full.medium.jpg/1736355374188.jpg)
## AI Model Compression

Compress deep neural networks with quantization, projection, or pruning to reduce memory footprint and increase inference speed.
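The Deep Network Quantizer app has a command-line equivalent; a sketch of int8 quantization, assuming a trained network `net` and calibration/validation datastores you provide:

```matlab
% Sketch: int8 quantization with dlquantizer (command-line counterpart of
% the Deep Network Quantizer app).
quantObj = dlquantizer(net, 'ExecutionEnvironment', 'GPU'); % net: trained network (assumed)

calResults = calibrate(quantObj, calibrationData);          % collect dynamic ranges (assumed datastore)
valResults = validate(quantObj, validationData);            % compare accuracy vs. original (assumed datastore)

qNet = quantize(quantObj);                                  % quantized network for deployment
```

The calibration statistics and validation results shown in the screenshots above correspond to the `calibrate` and `validate` steps; the quantized network can then feed the CPU, GPU, or FPGA code generation workflows described earlier.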