Data type (ARM Compute)

Inference computation precision

Since R2021a

Description

App Configuration Pane: Deep Learning

Configuration Objects: coder.ARMNEONConfig

Specify the precision of the inference computations in supported layers.

Dependencies

To enable this parameter, you must set Deep learning library to ARM Compute.

Settings

fp32

This is the default setting.

Inference computation is performed in 32-bit (single-precision) floating point.

int8

Inference computation is performed in 8-bit integer arithmetic.

Programmatic Use

Property: DataType
Values: 'fp32' | 'int8'
Default: 'fp32'
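
For example, you can set the DataType property on a coder.ARMNEONConfig object created with coder.DeepLearningConfig and attach it to a code generation configuration. This is a minimal sketch; the ArmComputeVersion value shown ('20.02.1') is an assumption for illustration and must match the ARM Compute Library version installed on your target.

```matlab
% Create a deep learning configuration object for the ARM Compute library.
dlcfg = coder.DeepLearningConfig('arm-compute');

% Assumed library version for illustration; use the version on your target.
dlcfg.ArmComputeVersion = '20.02.1';

% Perform inference computations in 8-bit integers.
dlcfg.DataType = 'int8';

% Attach the deep learning configuration to a library build configuration.
cfg = coder.config('lib');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = dlcfg;
```

Leaving DataType unset keeps the default 'fp32' (32-bit floating-point) inference.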

Version History

Introduced in R2021a