
Supported Networks, Layers, Boards, and Tools

Supported Pretrained Networks

Deep Learning HDL Toolbox™ supports code generation for series convolutional neural networks (CNNs or ConvNets). You can generate code for any trained CNN whose computational layers are supported for code generation. For a full list, see Supported Layers. You can use one of the pretrained networks listed in the table to generate code for your target Intel® or Xilinx® FPGA boards.

For each network, the entry lists the network type, whether the shipping single-data-type and INT8 bitstreams support the network on each board (ZCU102, ZC706, and Arria10 SoC), and the application area.

AlexNet
  Description: AlexNet convolutional neural network.
  Type: Series Network
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

LogoNet
  Description: Logo recognition network (LogoNet) is a MATLAB® developed logo identification network. For more information, see Logo Recognition Network.
  Type: Series Network
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

DigitsNet
  Description: Digit classification network. For more information, see Create Simple Deep Learning Network for Classification.
  Type: Series Network
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

Lane detection
  Description: LaneNet convolutional neural network. For more information, see Deploy Transfer Learning Network for Lane Detection.
  Type: Series Network
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

VGG-16
  Description: VGG-16 convolutional neural network. For the pretrained VGG-16 model, see vgg16.
  Type: Series Network
  Single data type (shipping bitstreams): No on ZCU102 (network exceeds PL DDR memory size); No on ZC706 (network exceeds FC module memory size); Yes on Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102; No on ZC706 (network exceeds FC module memory size); Yes on Arria10 SoC
  Application area: Classification

VGG-19
  Description: VGG-19 convolutional neural network. For the pretrained VGG-19 model, see vgg19.
  Type: Series Network
  Single data type (shipping bitstreams): No on ZCU102 (network exceeds PL DDR memory size); No on ZC706 (network exceeds FC module memory size); Yes on Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102; No on ZC706 (network exceeds FC module memory size); Yes on Arria10 SoC
  Application area: Classification

Darknet-19
  Description: Darknet-19 convolutional neural network. For the pretrained Darknet-19 model, see darknet19.
  Type: Series Network
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

Radar Classification
  Description: Convolutional neural network that uses micro-Doppler signatures to identify and classify objects. For more information, see Bicyclist and Pedestrian Classification by Using FPGA.
  Type: Series Network
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification and Software Defined Radio (SDR)

Defect Detection snet_defnet
  Description: snet_defnet is a custom AlexNet network used to identify and classify defects. For more information, see Defect Detection.
  Type: Series Network
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

Defect Detection snet_blemdetnet
  Description: snet_blemdetnet is a custom convolutional neural network used to identify and classify defects. For more information, see Defect Detection.
  Type: Series Network
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

YOLO v2 Vehicle Detection
  Description: You only look once (YOLO) is an object detector that decodes the predictions from a convolutional neural network and generates bounding boxes around the objects. For more information, see Vehicle Detection Using YOLO v2 Deployed to FPGA.
  Type: Series Network based
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Object detection

DarkNet-53
  Description: DarkNet-53 convolutional neural network. For the pretrained DarkNet-53 model, see darknet53.
  Type: Directed acyclic graph (DAG) network based
  Single data type (shipping bitstreams): No on ZCU102 (network exceeds PL DDR memory size); No on ZC706 (fully connected layer exceeds memory size); Yes on Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102; No on ZC706 (fully connected layer exceeds memory size); Yes on Arria10 SoC
  Application area: Classification

ResNet-18
  Description: ResNet-18 convolutional neural network. For the pretrained ResNet-18 model, see resnet18.
  Type: Directed acyclic graph (DAG) network based
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

ResNet-50
  Description: ResNet-50 convolutional neural network. For the pretrained ResNet-50 model, see resnet50.
  Type: Directed acyclic graph (DAG) network based
  Single data type (shipping bitstreams): No on ZCU102 (network exceeds PL DDR memory size); No on ZC706 (network exceeds PL DDR memory size); Yes on Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Classification

ResNet-based YOLO v2
  Description: You only look once (YOLO) is an object detector that decodes the predictions from a convolutional neural network and generates bounding boxes around the objects. For more information, see Vehicle Detection Using DAG Network Based YOLO v2 Deployed to FPGA.
  Type: Directed acyclic graph (DAG) network based
  Single data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  INT8 data type (shipping bitstreams): Yes on ZCU102, ZC706, and Arria10 SoC
  Application area: Object detection

MobileNetV2
  Description: MobileNet-v2 convolutional neural network. For the pretrained MobileNet-v2 model, see mobilenetv2.
  Type: Directed acyclic graph (DAG) network based
  Single data type (shipping bitstreams): Yes on ZCU102; No on ZC706 (fully connected layer exceeds PL DDR memory size); Yes on Arria10 SoC
  INT8 data type (shipping bitstreams): No on ZCU102; No on ZC706 (fully connected layer exceeds PL DDR memory size); No on Arria10 SoC
  Application area: Classification

GoogLeNet
  Description: GoogLeNet convolutional neural network. For the pretrained GoogLeNet model, see googlenet.
  Type: Directed acyclic graph (DAG) network based
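
A minimal deployment sketch for a supported pretrained network (assuming the required support packages are installed; the board, interface, and bitstream name are example choices, not the only options):

    % Sketch: deploy a supported pretrained series network to a ZCU102
    % board using a shipping single-data-type bitstream.
    net = alexnet;                                   % pretrained network
    hTarget = dlhdl.Target('Xilinx','Interface','Ethernet');
    hW = dlhdl.Workflow('Network',net, ...
        'Bitstream','zcu102_single','Target',hTarget);
    hW.compile;                                      % compile for the deep learning processor
    hW.deploy;                                       % program the FPGA and load weights
    % prediction = hW.predict(img);                  % run inference on an image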

Supported Layers

Deep Learning HDL Toolbox supports the layers listed in these tables. For each layer, the entry lists whether the layer runs in hardware (HW) or software (SW), the layer output format (where applicable), a description with any code generation limitations, and whether the layer is INT8 compatible.

Input Layers

imageInputLayer
  Hardware (HW) or software (SW): SW
  Description and limitations: An image input layer inputs 2-D images to a network and applies data normalization.
  INT8 compatible: Yes. Runs as single data type in SW.

Convolution and Fully Connected Layers

convolution2dLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Convolution (Conv)
  Description and limitations: A 2-D convolutional layer applies sliding convolutional filters to the input. When generating code for a network using this layer, these limitations apply:
  • Filter size must be 1-15 and square. For example, [1 1] or [15 15].
  • Stride size must be 1-15 and square.
  • Padding size must be in the range 0-8.
  • Dilation factor must be [1 1].
  • Padding value is not supported.
  INT8 compatible: Yes
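
For instance, a convolution layer that satisfies these constraints could be defined as follows (the filter count and sizes are illustrative):

    % Illustrative convolution layer that meets the constraints above:
    % square 3-by-3 filter, square stride, padding in the range 0-8,
    % and the default dilation factor of [1 1].
    convLayer = convolution2dLayer([3 3],16, ...
        'Stride',[1 1],'Padding',[1 1 1 1],'DilationFactor',[1 1]);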

groupedConvolution2dLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Convolution (Conv)
  Description and limitations: A 2-D grouped convolutional layer separates the input channels into groups and applies sliding convolutional filters. Use grouped convolutional layers for channel-wise separable (also known as depth-wise separable) convolution. Code generation is supported for a 2-D grouped convolution layer that has the NumGroups property set to 'channel-wise'. When generating code for a network using this layer, these limitations apply:
  • Filter size must be 1-15 and square. For example, [1 1] or [14 14]. When NumGroups is set to 'channel-wise', filter size must be 3-14.
  • Stride size must be 1-15 and square.
  • Padding size must be in the range 0-8.
  • Dilation factor must be [1 1].
  • Number of groups must be 1 or 2.
  • The input feature number must be greater than a single multiple of the square root of the ConvThreadNumber.
  • When NumGroups is not set to 'channel-wise', the number of filters per group must be a multiple of the square root of the ConvThreadNumber.
  INT8 compatible: Yes
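
For instance, grouped convolution layers within these constraints could look like the following (filter counts and sizes are illustrative):

    % Illustrative grouped convolution layers that meet the constraints
    % above: a two-group layer, and a channel-wise (depth-wise) layer
    % with a filter size in the 3-14 range.
    groupedLayer   = groupedConvolution2dLayer([3 3],8,2,'Stride',[1 1]);
    depthwiseLayer = groupedConvolution2dLayer([5 5],1,'channel-wise', ...
        'Padding',[2 2 2 2]);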

fullyConnectedLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Fully Connected (FC)
  Description and limitations: A fully connected layer multiplies the input by a weight matrix, and then adds a bias vector.
  INT8 compatible: Yes

Activation Layers

reluLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Layer is fused.
  Description and limitations: A ReLU layer performs a threshold operation on each element of the input, where any value less than zero is set to zero. A ReLU layer is supported only when it is preceded by any of these layers:
  • Convolution
  • Fully Connected
  • Adder
  INT8 compatible: Yes

leakyReluLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Layer is fused.
  Description and limitations: A leaky ReLU layer performs a threshold operation where any input value less than zero is multiplied by a fixed scalar. A leaky ReLU layer is supported only when it is preceded by any of these layers:
  • Convolution
  • Fully Connected
  • Adder
  INT8 compatible: Yes

clippedReluLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Layer is fused.
  Description and limitations: A clipped ReLU layer performs a threshold operation where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling value. A clipped ReLU layer is supported only when it is preceded by any of these layers:
  • Convolution
  • Fully Connected
  • Adder
  INT8 compatible: Yes
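
Because these activation layers are fused, each must directly follow one of the supported layers. A layer ordering that respects this rule might look like this sketch (layer sizes are illustrative):

    % Illustrative layer ordering: the ReLU layer directly follows a
    % convolution layer so that it can be fused.
    layers = [ ...
        imageInputLayer([28 28 1])
        convolution2dLayer([3 3],16,'Padding',[1 1 1 1])
        reluLayer
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];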

Normalization, Dropout, and Cropping Layers

batchNormalizationLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Layer is fused.
  Description and limitations: A batch normalization layer normalizes each input channel across a mini-batch. A batch normalization layer is supported only when it is preceded by a convolution layer.
  INT8 compatible: Yes

crossChannelNormalizationLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Convolution (Conv)
  Description and limitations: A channel-wise local response (cross-channel) normalization layer carries out channel-wise normalization. The WindowChannelSize must be in the range 3-9 for code generation.
  INT8 compatible: Yes. Runs as single data type in HW.

dropoutLayer
  Hardware (HW) or software (SW): NoOP on inference
  Layer output format: NoOP on inference
  Description and limitations: A dropout layer randomly sets input elements to zero with a given probability.
  INT8 compatible: Yes

Pooling and Unpooling Layers

maxPooling2dLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Convolution (Conv)
  Description and limitations: A max pooling layer performs downsampling by dividing the layer input into rectangular pooling regions and computing the maximum of each region. When generating code for a network using this layer, these limitations apply:
  • Pool size must be 1-15 and square. For example, [1 1] or [12 12].
  • Stride size must be 1-15 and square.
  • Padding size must be in the range 0-2.
  INT8 compatible: Yes
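
For instance, a max pooling layer within these constraints (the pool size and stride are illustrative):

    % Illustrative max pooling layer that meets the constraints above:
    % square 2-by-2 pool, square stride, padding in the range 0-2.
    poolLayer = maxPooling2dLayer([2 2],'Stride',[2 2],'Padding',0);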

averagePooling2dLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Convolution (Conv)
  Description and limitations: An average pooling layer performs downsampling by dividing the layer input into rectangular pooling regions and computing the average values of each region. When generating code for a network using this layer, these limitations apply:
  • Pool size must be 1-15 and square. For example, [3 3].
  • Stride size must be 1-15 and square.
  • Padding size must be in the range 0-2.
  INT8 compatible: Yes

globalAveragePooling2dLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Convolution (Conv) or Fully Connected (FC). When the input activation size is smaller than the memory threshold, the layer output format is FC.
  Description and limitations: A global average pooling layer performs downsampling by computing the mean of the height and width dimensions of the input. When generating code for a network using this layer, these limitations apply:
  • Can accept inputs of sizes up to 15-by-15-by-N.
  • Total activation pixel size must be smaller than the deep learning processor convolution module input memory size. For more information, see InputMemorySize.
  INT8 compatible: Yes

Combination Layers

additionLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Inherit from input.
  Description and limitations: An addition layer adds inputs from multiple neural network layers element-wise. You can generate code for this layer with the int8 data type when the layer is combined with a leaky ReLU or clipped ReLU layer. When generating code for a network using this layer, this limitation applies:
  • Both input layers must have the same output format. For example, both layers must have the Conv output format or both must have the FC output format.
  INT8 compatible: Yes

depthConcatenationLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Inherit from input.
  Description and limitations: A depth concatenation layer takes inputs that have the same height and width and concatenates them along the third dimension (the channel dimension). When generating code for a network using this layer, these limitations apply:
  • The input activation feature number must be a multiple of the square root of the ConvThreadNumber.
  • Inputs to the depth concatenation layer must be exclusive to the depth concatenation layer.
  • Layers that have a Conv output format and layers that have an FC output format cannot be concatenated together.
  INT8 compatible: Yes
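
As a sketch, a residual-style connection in which both inputs to the addition layer have the Conv output format (layer names and sizes are illustrative):

    % Illustrative residual connection: both inputs to the addition
    % layer come from layers with the Conv output format.
    lgraph = layerGraph([ ...
        imageInputLayer([32 32 3])
        convolution2dLayer([3 3],16,'Padding',[1 1 1 1],'Name','conv1')
        reluLayer('Name','relu1')
        convolution2dLayer([3 3],16,'Padding',[1 1 1 1],'Name','conv2')
        additionLayer(2,'Name','add')]);
    lgraph = connectLayers(lgraph,'relu1','add/in2');  % second Conv-format input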

Output Layer

softmaxLayer
  Hardware (HW) or software (SW): SW and HW
  Description and limitations: A softmax layer applies a softmax function to the input. If the softmax layer is implemented in hardware, these limitations apply:
  • The inputs must be in the range exp(-87) to exp(88).
  • A softmax layer followed by an adder layer or a depth concatenation layer is not supported.
  INT8 compatible: Yes. Runs as single data type in SW.

classificationLayer
  Hardware (HW) or software (SW): SW
  Description and limitations: A classification layer computes the cross-entropy loss for multiclass classification problems that have mutually exclusive classes.
  INT8 compatible: Yes

regressionLayer
  Hardware (HW) or software (SW): SW
  Description and limitations: A regression layer computes the half mean squared error loss for regression problems.
  INT8 compatible: Yes

Keras and ONNX Layers

nnet.keras.layer.FlattenCStyleLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Layer is fused.
  Description and limitations: Flattens activations into 1-D, assuming C-style (row-major) order. A nnet.keras.layer.FlattenCStyleLayer is supported only when it is followed by a fully connected layer.
  INT8 compatible: Yes

nnet.keras.layer.ZeroPadding2dLayer
  Hardware (HW) or software (SW): HW
  Layer output format: Layer is fused.
  Description and limitations: Zero padding layer for 2-D input. A nnet.keras.layer.ZeroPadding2dLayer is supported only when it is followed by a convolution layer or a max pooling layer.
  INT8 compatible: Yes
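
These layer classes typically appear when you import a Keras model, for example (a sketch assuming the Deep Learning Toolbox Converter support package is installed; the file name is a placeholder):

    % Illustrative import: layers such as
    % nnet.keras.layer.FlattenCStyleLayer appear in networks imported
    % from Keras. The file name below is a placeholder.
    net = importKerasNetwork('model.h5');
    analyzeNetwork(net)   % inspect the imported layers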

Supported Boards

These boards are supported by Deep Learning HDL Toolbox:

  • Xilinx Zynq®-7000 ZC706

  • Intel Arria® 10 SoC

  • Xilinx Zynq UltraScale+™ MPSoC ZCU102

Third-Party Synthesis Tools and Version Support

Deep Learning HDL Toolbox has been tested with:

  • Xilinx Vivado Design Suite 2020.1

  • Intel Quartus Prime 18.1
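
Before synthesis, make the tool visible to MATLAB, for example with hdlsetuptoolpath (the installation path shown is a placeholder; substitute your local installation):

    % Illustrative tool setup; replace the path with your local
    % Vivado installation.
    hdlsetuptoolpath('ToolName','Xilinx Vivado', ...
        'ToolPath','C:\Xilinx\Vivado\2020.1\bin\vivado.bat');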
