
dlnetwork

Deep learning neural network

Description

A dlnetwork object specifies a deep learning neural network architecture.

Tip

For most deep learning tasks, you can use a pretrained neural network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Retrain Neural Network to Classify New Images. Alternatively, you can create and train neural networks from scratch using the trainnet and trainingOptions functions.

If the trainingOptions function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Train Network Using Custom Training Loop.

If the trainnet function does not provide the loss function that you need for your task, then you can specify a custom loss function to the trainnet function as a function handle. For loss functions that require more inputs than the predictions and targets (for example, loss functions that require access to the neural network or additional inputs), train the model using a custom training loop. To learn more, see Train Network Using Custom Training Loop.
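As a sketch, a custom loss can be passed to trainnet as a function handle. Here, lossFcn computes the mean absolute error between the predictions Y and the targets T; data, net, and options are hypothetical variables defined elsewhere.

```matlab
% Hypothetical custom L1 loss passed to trainnet as a function handle.
lossFcn = @(Y,T) mean(abs(Y - T),"all");
net = trainnet(data,net,lossFcn,options);
```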

If Deep Learning Toolbox™ does not provide the layers you need for your task, then you can create a custom layer. To learn more, see Define Custom Deep Learning Layers. For models that cannot be specified as networks of layers, you can define the model as a function. To learn more, see Train Network Using Model Function.

For more information about which training method to use for which task, see Train Deep Learning Model in MATLAB.

Creation

Description

Empty Network

net = dlnetwork creates a dlnetwork object with no layers. Use this syntax to create a neural network from scratch. (since R2024a)


Network with Input Layers

net = dlnetwork(layers) creates a neural network using the specified layers and initializes any unset learnable and state parameters. This syntax uses the input layer in layers to determine the size and format of the learnable and state parameters of the neural network.

Use this syntax when layers defines a complete single-input neural network, has layers arranged in series, and has an input layer.


net = dlnetwork(layers,OutputNames=names) also sets the OutputNames property. The OutputNames property specifies the layers or layer outputs that correspond to network outputs.

Use this syntax when layers defines a complete single-input multi-output neural network, has layers arranged in series, and has an input layer.

net = dlnetwork(layers,Initialize=tf) specifies whether to initialize the learnable and state parameters of the neural network. When tf is 1 (true), this syntax is equivalent to net = dlnetwork(layers). When tf is 0 (false), this syntax is equivalent to creating an empty network and then adding layers using the addLayers function.
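As a minimal sketch, you can create an uninitialized network, edit it, and then initialize it manually. The layer names and sizes here are chosen for illustration.

```matlab
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,8,Name="conv")
    reluLayer(Name="relu")];

% Create the network without initializing its parameters.
net = dlnetwork(layers,Initialize=false);

% Edit the network, then initialize it.
net = addLayers(net,fullyConnectedLayer(10,Name="fc"));
net = connectLayers(net,"relu","fc");
net = initialize(net);
```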

Network With Unconnected Inputs

net = dlnetwork(layers,X1,...,XN) creates a neural network using the specified layers and initializes any unset learnable and state parameters. This syntax uses the network data layout objects or example inputs X1,...,XN to determine the size and format of the learnable parameters and state values of the neural network, where N is the number of network inputs.

Use this syntax when layers defines a complete neural network, has layers arranged in series, and has inputs that are not connected to input layers.

net = dlnetwork(layers,X1,...,XN,OutputNames=names) also sets the OutputNames property.

Use this syntax when layers defines a complete neural network, has multiple outputs, has layers arranged in series, and has inputs that are not connected to input layers.
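For example, a sketch that initializes a network with no input layer by using a networkDataLayout object; the size and format here are illustrative.

```matlab
layers = [
    convolution2dLayer(3,16,Name="conv")
    reluLayer];

% Layout for 28-by-28 RGB image batches:
% spatial, spatial, channel, batch ("SSCB").
X = networkDataLayout([28 28 3 NaN],"SSCB");
net = dlnetwork(layers,X);
```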

Conversion

net = dlnetwork(prunableNet) converts a TaylorPrunableNetwork object to a dlnetwork object by removing the filters selected for pruning from the convolution layers of prunableNet. The returned dlnetwork object is compressed: it has fewer learnable parameters and is smaller in size.

net = dlnetwork(mdl) converts a Statistics and Machine Learning Toolbox™ machine learning model to a dlnetwork object.
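As a sketch of the conversion, assuming Statistics and Machine Learning Toolbox and the fisheriris sample data set:

```matlab
load fisheriris
mdl = fitcnet(meas,species);   % classification neural network model
net = dlnetwork(mdl);          % convert to dlnetwork (since R2024b)
```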

Input Arguments


Network layers, specified as a Layer array.

The software connects the layers in series.

For a list of supported layers, see List of Deep Learning Layers.

Example data or data layouts to use to determine the size and formats of learnable and state parameters, specified as formatted dlarray objects or formatted networkDataLayout objects. The software propagates X1,...,XN through the network to determine the appropriate sizes and formats of the learnable and state parameters of the dlnetwork object and initializes any unset learnable or state parameters.

The order of X1,...,XN must match the order of the layers that require inputs in layers.

Note

Automatic initialization uses only the size and format information of the input data. For initialization that depends on the values of the input data, you must initialize the learnable parameters manually.

Flag to initialize learnable and state parameters, specified as one of these values:

  • 1 (true) — Initialize the learnable and state parameters. The software uses the input layer in layers to determine the sizes of the learnable and state parameters.

  • 0 (false) — Do not initialize the learnable and state parameters. Use this option when:

    • You expect to make further edits to the neural network. For example, when you expect to add or remove layers and connections.

    • You use the network in a custom layer and you want to use a custom initialize function.

Neural network prediction and custom training loops require an initialized network. To initialize an uninitialized network, use the initialize function.

Network for pruning by using first-order Taylor approximation, specified as a TaylorPrunableNetwork object.

Pruning a deep neural network requires the Deep Learning Toolbox Model Quantization Library support package. This support package is a free add-on that you can download using the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Quantization Library.

Since R2024b

Classification or regression neural network, specified as a ClassificationNeuralNetwork (Statistics and Machine Learning Toolbox), RegressionNeuralNetwork (Statistics and Machine Learning Toolbox), CompactClassificationNeuralNetwork (Statistics and Machine Learning Toolbox), or CompactRegressionNeuralNetwork (Statistics and Machine Learning Toolbox) object.

The fitcnet (Statistics and Machine Learning Toolbox) and fitrnet (Statistics and Machine Learning Toolbox) functions return ClassificationNeuralNetwork and RegressionNeuralNetwork objects, respectively. The compact (Statistics and Machine Learning Toolbox) function returns CompactClassificationNeuralNetwork and CompactRegressionNeuralNetwork objects.

Properties


Network layers, specified as a Layer array.

Layer connections, specified as a table with two columns.

Each table row represents a connection in the neural network. The first column, Source, specifies the source of each connection. The second column, Destination, specifies the destination of each connection. The connection sources and destinations are either layer names or have the form "layerName/IOName", where "IOName" is the name of the layer input or output.

Data Types: table

Network learnable parameters, specified as a table with three columns:

  • Layer — Layer name, specified as a string scalar.

  • Parameter — Parameter name, specified as a string scalar.

  • Value — Value of parameter, specified as a dlarray object.

The network learnable parameters contain the features learned by the network. For example, the weights of convolution and fully connected layers.

The learnable parameter values can be complex-valued (since R2024a).

Data Types: table

Network state, specified as a table.

The network state is a table with three columns:

  • Layer – Layer name, specified as a string scalar.

  • Parameter – State parameter name, specified as a string scalar.

  • Value – Value of state parameter, specified as a dlarray object.

Layer states contain information calculated during the layer operation to be retained for use in subsequent forward passes of the layer. For example, the cell state and hidden state of LSTM layers, or running statistics in batch normalization layers.

For recurrent layers, such as LSTM layers, with the HasStateInputs property set to 1 (true), the state table does not contain entries for the states of that layer.

During training or inference, you can update the network state using the output of the forward and predict functions.
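For example, in a custom training loop, a minimal sketch of a state update, where X is hypothetical input data:

```matlab
% forward returns the network output and the updated state,
% which you assign back to the network.
[Y,state] = forward(net,X);
net.State = state;
```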

The state values can be complex-valued (since R2024a).

Data Types: table

Names of the network inputs, specified as a cell array of character vectors.

Network inputs are the input layers and the unconnected inputs of layers.

For input layers and layers with a single input, the input name is the name of the layer. For layers with multiple inputs, the input name is "layerName/inputName", where layerName is the name of the layer and inputName is the name of the layer input.

For networks with multiple inputs, training and prediction functions use this property to determine the order of the inputs. For example, for in-memory inputs X1,...,XM to the predict function, the order of the inputs must match the order of the corresponding inputs in the InputNames property of the network.

To customize the order, set InputNames to the desired order. (since R2024b)

Before R2024b: This property is read-only. Adjust your code so that calls to the predict function and similar functions have input arguments that match the order specified by InputNames.
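As a sketch (since R2024b), you can reorder the inputs of a hypothetical two-input network whose inputs are named "in1" and "in2" so that "in2" comes first:

```matlab
% After this assignment, prediction and training functions expect
% the data for "in2" as the first input argument.
net.InputNames = {'in2','in1'};
```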

Note

If you customize the InputNames property and then make edits to the neural network, for example, by adding or removing layers, then the InputNames property does not change. This behavior means that if you add or remove layers that correspond to network inputs, then you must also update the InputNames property manually.

Data Types: cell

Names of the network outputs, specified as a cell array of character vectors.

For layers with a single output, the output name is the name of the layer. For layers with multiple outputs, the output name is "layerName/outputName", where layerName is the name of the layer and outputName is the name of the layer output.

If you do not specify the output names, then when you create the network, the software sets the OutputNames property to the layers with unconnected outputs.

For networks with multiple outputs, training and prediction functions use this property to determine the order of the outputs. For example, the outputs Y1,...,YN of the predict function correspond to the outputs specified by the OutputNames property of the network.
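For example, for a network whose OutputNames property is the hypothetical value {'softmax','fc_2'}, the outputs of predict follow that order:

```matlab
% Y1 corresponds to "softmax" and Y2 to "fc_2"; X is hypothetical input data.
[Y1,Y2] = predict(net,X);
```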

Note

If you customize the OutputNames property and then make edits to the neural network, for example, by adding or removing layers, then the OutputNames property does not change. This behavior means that if you add or remove layers that correspond to network outputs, then you must also update the OutputNames property manually.

Data Types: cell

This property is read-only.

Flag for initialized network, specified as one of these values:

  • 1 (true) — Network is initialized and is ready for prediction and custom training loops. If you change the values of the learnable or state parameters, then the network remains initialized.

  • 0 (false) — Network is not initialized and is not ready for prediction or custom training loops. To initialize an uninitialized network, use the initialize function.

Data Types: logical

Object Functions

addInputLayer — Add input layer to network
addLayers — Add layers to neural network
removeLayers — Remove layers from neural network
connectLayers — Connect layers in neural network
disconnectLayers — Disconnect layers in neural network
replaceLayer — Replace layer in neural network
getLayer — Look up a layer by name or path
expandLayers — Expand network layers
groupLayers — Group layers into network layers
summary — Print network summary
plot — Plot neural network architecture
initialize — Initialize learnable and state parameters of a dlnetwork
predict — Compute deep learning network output for inference
forward — Compute deep learning network output for training
resetState — Reset state parameters of neural network
setL2Factor — Set L2 regularization factor of layer learnable parameter
setLearnRateFactor — Set learn rate factor of layer learnable parameter
getLearnRateFactor — Get learn rate factor of layer learnable parameter
getL2Factor — Get L2 regularization factor of layer learnable parameter

Examples


Define a two-output neural network that predicts both categorical labels and numeric values given 2-D images as input.

Specify the number of classes and responses.

numClasses = 10;
numResponses = 1;

Create an empty neural network.

net = dlnetwork;

Define the layers of the main branch of the network and the softmax output.

layers = [
    imageInputLayer([28 28 1],Normalization="none")

    convolution2dLayer(5,16,Padding="same")
    batchNormalizationLayer
    reluLayer(Name="relu_1")

    convolution2dLayer(3,32,Padding="same",Stride=2)
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(3,32,Padding="same")
    batchNormalizationLayer
    reluLayer

    additionLayer(2,Name="add")

    fullyConnectedLayer(numClasses)
    softmaxLayer(Name="softmax")];

net = addLayers(net,layers);

Add the skip connection.

layers = [
    convolution2dLayer(1,32,Stride=2,Name="conv_skip")
    batchNormalizationLayer
    reluLayer(Name="relu_skip")];

net = addLayers(net,layers);
net = connectLayers(net,"relu_1","conv_skip");
net = connectLayers(net,"relu_skip","add/in2");

Add the fully connected layer for the regression output.

layers = fullyConnectedLayer(numResponses,Name="fc_2");
net = addLayers(net,layers);
net = connectLayers(net,"add","fc_2");

View the neural network in a plot.

figure
plot(net)

Figure contains an axes object. The axes object contains an object of type graphplot.

If you have a layer array that defines a complete single-input neural network, has layers arranged in series, and has an input layer, then you can convert the layer array to a dlnetwork object directly.

Specify an LSTM network as a layer array.

layers = [
    sequenceInputLayer(12)
    lstmLayer(100)
    fullyConnectedLayer(9)
    softmaxLayer];

Convert the layer array to a dlnetwork object. Because the layer array has an input layer and no other inputs, the software initializes the neural network.

net = dlnetwork(layers)
net = 
  dlnetwork with properties:

         Layers: [4x1 nnet.cnn.layer.Layer]
    Connections: [3x2 table]
     Learnables: [5x3 table]
          State: [2x3 table]
     InputNames: {'sequenceinput'}
    OutputNames: {'softmax'}
    Initialized: 1

  View summary with summary.

Load a pretrained network.

net = imagePretrainedNetwork;

The Learnables property of the dlnetwork object is a table that contains the learnable parameters of the network. The table includes parameters of nested layers in separate rows. View the first few rows of the learnables table.

learnables = net.Learnables;
head(learnables)
          Layer           Parameter           Value       
    __________________    _________    ___________________

    "conv1"               "Weights"    {3x3x3x64  dlarray}
    "conv1"               "Bias"       {1x1x64    dlarray}
    "fire2-squeeze1x1"    "Weights"    {1x1x64x16 dlarray}
    "fire2-squeeze1x1"    "Bias"       {1x1x16    dlarray}
    "fire2-expand1x1"     "Weights"    {1x1x16x64 dlarray}
    "fire2-expand1x1"     "Bias"       {1x1x64    dlarray}
    "fire2-expand3x3"     "Weights"    {3x3x16x64 dlarray}
    "fire2-expand3x3"     "Bias"       {1x1x64    dlarray}

To freeze the learnable parameters of the network, loop over the learnable parameters and set the learn rate to 0 using the setLearnRateFactor function.

factor = 0;

numLearnables = size(learnables,1);
for i = 1:numLearnables
    layerName = learnables.Layer(i);
    parameterName = learnables.Parameter(i);
    
    net = setLearnRateFactor(net,layerName,parameterName,factor);
end

To use the updated learn rate factors when training, you must pass the dlnetwork object to the update function in the custom training loop. For example, use the command

[net,velocity] = sgdmupdate(net,gradients,velocity);

Extended Capabilities

Version History

Introduced in R2019b
