Use code generation options and optimizations to improve the execution speed
of the generated code. You can modify or disable dynamic memory allocation,
which can affect execution speed.
You can generate parallelized code by using parfor-loops or by enabling automatic parallelization of for-loops.
When available, take advantage of preexisting optimized C code and specialized
libraries to speed up execution.
For more information about how to optimize your code for specific conditions, see Optimization Strategies.
| Function or Class | Description |
| --- | --- |
| coder.varsize | Declare variable-size data |
| coder.const | Fold expressions into constants in generated code |
| coder.inline | Control inlining of a specific function in generated code |
| coder.loop.parallelize | Disable automatic parallelization of a for-loop |
| coder.ceval | Call external C/C++ function |
| coder.LAPACKCallback | Abstract class for specifying the LAPACK library and LAPACKE header file for LAPACK calls in generated code |
| coder.BLASCallback | Abstract class for specifying the BLAS library and CBLAS header and data type information for BLAS calls in generated code |
| coder.fftw.StandaloneFFTW3Interface | Abstract class for specifying an FFTW library for FFTW calls in generated code |
Optimize the execution speed or memory usage of generated code.
To improve the performance of generated code, the code generator applies optimizations such as constant folding.
Implicit expansion in the generated code is enabled by default.
Dynamic memory allocation can slow down execution speeds.
Improve execution time by minimizing dynamic memory allocation.
Use techniques to help the code generator determine the upper bound for a variable-size array.
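One such technique is the coder.varsize directive. The following is a minimal sketch; the function name and the bound of 1000 are illustrative.

```matlab
function y = collectSquares(n) %#codegen
% Sketch: declare an explicit upper bound so the code generator can emit a
% fixed-size buffer instead of allocating the array dynamically.
coder.varsize('y', [1 1000], [0 1]);   % at most 1-by-1000, second dimension varies
y = zeros(1, 0);
for k = 1:n
    y = [y, k^2];                      % grows at run time, within the declared bound
end
end
```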
Disable dynamic memory allocation in the MATLAB Coder app or at the command line.
Disable dynamic memory allocation for arrays smaller than a specified size threshold.
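At the command line, these settings live on the code configuration object. A sketch follows; the property names and values reflect one release's interface and may differ in yours.

```matlab
% Sketch: turn dynamic memory allocation off, or keep it only for large arrays.
cfg = coder.config('lib');
cfg.DynamicMemoryAllocation = 'Off';             % never use malloc/free
% ...or allow it only for arrays above a size threshold (in bytes):
% cfg.DynamicMemoryAllocation = 'Threshold';
% cfg.DynamicMemoryAllocationThreshold = 65536;
codegen myFunction -config cfg -args {zeros(1, 100)}   % myFunction is illustrative
```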
Improve execution time of dynamic arrays in generated C code.
Generate C/C++ code with row elements stored contiguously in memory.
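A short sketch of selecting row-major layout from the command line; the entry-point function is illustrative.

```matlab
% Sketch: generate code that stores row elements contiguously in memory.
cfg = coder.config('lib');
cfg.RowMajor = true;
codegen myImageFilter -config cfg -args {zeros(480, 640)}
```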
Generate MEX functions for parfor-loops.
Variables in parfor-loops are classified as loop, sliced, broadcast, reduction, or temporary variables.
Generate a loop that runs in parallel on shared-memory multicore platforms.
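A minimal sketch of a loop that qualifies for parallel code generation; the function name is illustrative.

```matlab
function y = sumColumns(A) %#codegen
% Sketch: each iteration is independent, so the code generator can produce a
% parallel loop for shared-memory multicore hardware.
y = zeros(1, size(A, 2));
parfor i = 1:size(A, 2)
    y(i) = sum(A(:, i));
end
end
```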
Generate a MEX function that executes loop iterations in parallel on a specified number of the available cores.
Run parallel for-loops on a specified number of the available cores in the generated code.
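The sketch below caps the thread count by using the optional second parfor argument; the function name and the limit of four are illustrative.

```matlab
function y = scaleRows(A) %#codegen
% Sketch: the second parfor argument limits the generated loop to four threads.
y = zeros(size(A));
parfor (i = 1:size(A, 1), 4)
    y(i, :) = 2 * A(i, :);
end
end
```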
A reduction variable accumulates a value that depends on all the loop iterations together.
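For example, in the sketch below (illustrative function name), s is a reduction variable: every iteration updates it with the same operation, so partial results can be combined across threads.

```matlab
function s = totalEnergy(x) %#codegen
% Sketch: s accumulates a sum over all iterations, a classic reduction.
s = 0;
parfor i = 1:numel(x)
    s = s + x(i)^2;
end
end
```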
Identify conditions under which the code generator produces parfor-loops that run on a single thread.
Install the OpenMP library to generate parallel code.
Move operations outside of a loop when possible.
Control loop unrolling.
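One way to control unrolling is the coder.unroll directive, sketched here with an illustrative function and a small, compile-time-known trip count.

```matlab
function y = dotFour(a, b) %#codegen
% Sketch: coder.unroll copies the loop body for each iteration, removing
% loop overhead when the trip count is small and known at compile time.
y = 0;
for k = coder.unroll(1:4)
    y = y + a(k) * b(k);
end
end
```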
Iterations of parallel for-loops can run simultaneously on multiple cores on the target hardware.
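A sketch of enabling automatic parallelization from the command line; the EnableAutoParallelization property name reflects one release's interface, and the entry-point function is illustrative.

```matlab
% Sketch: let the code generator parallelize eligible for-loops automatically.
cfg = coder.config('lib');
cfg.EnableAutoParallelization = true;
codegen myFilterBank -config cfg -args {zeros(1, 4096)}
```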
Improve the execution speed of the generated code using Intel SSE and Intel AVX technology.
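A sketch of requesting SIMD instruction sets on the configuration object; the InstructionSetExtensions property name and the 'AVX2' value are assumptions, so verify them against your release and target hardware.

```matlab
% Sketch: allow the code generator to use AVX2 SIMD instructions
% (property name and value are assumptions; check your release).
cfg = coder.config('mex');
cfg.InstructionSetExtensions = 'AVX2';
codegen myVectorKernel -config cfg -args {zeros(1, 1024, 'single')}
```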
Generate code that passes input arguments by reference.
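One documented pattern for this is reusing the same variable as both input and output; the sketch below uses an illustrative function name.

```matlab
function x = normalizeInPlace(x) %#codegen
% Sketch: using the same variable as input and output lets the generated
% code operate on the argument in place instead of copying it.
x = x / max(abs(x(:)));
end
```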
Inlining eliminates the overhead of function calls but can produce larger C/C++ code and reduce code readability.
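The coder.inline directive controls this trade-off per function, as in this minimal sketch (illustrative function name).

```matlab
function y = blendPixels(a, b) %#codegen
% Sketch: force this function to be inlined at its call sites; use
% coder.inline('never') instead to keep it as a separate C function.
coder.inline('always');
y = 0.5 * a + 0.5 * b;
end
```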
Reduce execution time by replacing expressions with constants in the generated code.
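For example, coder.const forces an expression to be evaluated at compile time; the function name and gain value below are illustrative.

```matlab
function y = applyGain(x) %#codegen
% Sketch: the expression inside coder.const is folded into a constant,
% so the generated code contains the computed value rather than the call.
g = coder.const(10^(3/20));   % gain in linear units
y = g * x;
end
```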
Improve performance by suppressing generation of supporting code that handles integer overflow or nonfinite values.
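A sketch of disabling that supporting code on the configuration object; the property names may vary by release, so verify them before relying on these settings.

```matlab
% Sketch: suppress saturation and nonfinite support when the algorithm is
% known not to need them (property names may vary by release).
cfg = coder.config('lib');
cfg.SaturateOnIntegerOverflow = false;
cfg.SupportNonFinite = false;
codegen myFixedPointKernel -config cfg -args {int16(zeros(1, 256))}
```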
LAPACK function calls improve the execution speed of code generated for certain linear algebra functions.
BLAS function calls improve the execution speed of code generated for certain low-level vector and matrix operations.
Choose the correct fast Fourier transform implementation for your workflow and target hardware.
Improve performance by integrating your own optimized code.
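A minimal sketch of calling your own C code with coder.ceval; the my_scale function, its header, and its signature are hypothetical placeholders for your optimized code.

```matlab
function y = fastScale(x) %#codegen
% Sketch: call a hand-optimized C function (hypothetical my_scale) from the
% generated code, with a plain MATLAB fallback for simulation.
coder.cinclude('my_scale.h');        % ignored when running in MATLAB
if coder.target('MATLAB')
    y = 2 * x;                       % MATLAB fallback
else
    y = zeros(size(x));              % define output size and type for codegen
    coder.ceval('my_scale', coder.rref(x), coder.wref(y), int32(numel(x)));
end
end
```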
Generate LAPACK calls for certain linear algebra functions. Specify LAPACK library to use.
Generate BLAS calls for certain low-level matrix operations. Specify BLAS library to use.
Generate FFTW library calls for fast Fourier transforms. Specify the FFTW library.
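A sketch of pointing code generation at specific LAPACK, BLAS, and FFTW callback classes; the class names are placeholders for classes you implement yourself, and the property names reflect one release's interface.

```matlab
% Sketch: register callback classes that specify which libraries to link.
cfg = coder.config('lib');
cfg.CustomLAPACKCallback = 'myLAPACKCallback';   % derives from coder.LAPACKCallback
cfg.CustomBLASCallback   = 'myBLASCallback';     % derives from coder.BLASCallback
cfg.CustomFFTCallback    = 'myFFTWCallback';     % derives from coder.fftw.StandaloneFFTW3Interface
codegen mySolver -config cfg -args {zeros(100)}
```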
Implement FFT library callback class methods and provide supporting C code to prevent concurrent access to FFTW planning.
Diagnose errors for code generation of parfor-loops.
Troubleshoot issues that occur when the source MATLAB® code contains global or persistent variables that are reachable from the body of a parfor-loop.
Troubleshoot automatic parallelization failures caused by memory access patterns.