Target library
Deep learning library to use during code generation for target environment
Model Configuration Pane: Code Generation / Interface
Description
The Target library parameter specifies the deep learning library the code generator uses to produce code for the target environment.
Dependency
To enable this parameter, set the System target file parameter to grt.tlc or ert.tlc and the Language parameter to C++.
Settings
None (default) | MKL-DNN | ARM Compute | cuDNN | TensorRT

MKL-DNN
Generates code that uses the Intel® Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).
To enable this setting (or ARM Compute), clear the GPU acceleration parameter. When you select this setting, the Language standard parameter is set to C++11 (ISO).

ARM Compute
Generates code that uses the ARM® Compute Library.
To enable this setting (or MKL-DNN), clear the GPU acceleration parameter.

cuDNN
Generates code that uses the CUDA® Deep Neural Network library (cuDNN).
This setting requires a GPU Coder™ license.
To enable this setting (or TensorRT), select the GPU acceleration parameter.

TensorRT
Generates code that takes advantage of the NVIDIA® TensorRT high-performance deep learning inference optimizer and runtime library.
This setting requires a GPU Coder license.
To enable this setting (or cuDNN), select the GPU acceleration parameter.
Recommended Settings
| Application | Setting |
|---|---|
| Debugging | |
| Traceability | |
| Efficiency | |
| Safety precaution | |
Programmatic Use
Parameter: DLTargetLibrary
Type: character vector
Value: 'None' | 'MKL-DNN' | 'arm-compute' | 'cuDNN' | 'TensorRT'
Default: 'None'
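As a sketch of programmatic use, the following MATLAB commands set the parameter with `set_param` and read it back with `get_param`. The model name `myModel` is a placeholder, and the example assumes the standard Simulink configuration parameter names `SystemTargetFile` and `TargetLang` for the dependencies described above.

```matlab
% Hypothetical model name; substitute your own model.
model = 'myModel';
load_system(model);

% Target library requires a GRT or ERT system target file
% and C++ as the code generation language.
set_param(model, 'SystemTargetFile', 'ert.tlc');
set_param(model, 'TargetLang', 'C++');

% Select the deep learning library used by the generated code.
set_param(model, 'DLTargetLibrary', 'MKL-DNN');

% Query the current value.
get_param(model, 'DLTargetLibrary')
```

The cuDNN and TensorRT values additionally require the GPU acceleration parameter to be selected and a GPU Coder license, as noted in the Settings section.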
Version History
Introduced in R2020b