Bidirectional long short-term memory (BiLSTM) layer
A bidirectional LSTM (BiLSTM) layer learns bidirectional long-term dependencies between time steps of time series or sequence data. These dependencies can be useful when you want the network to learn from the complete time series at each time step.
layer = bilstmLayer(numHiddenUnits) creates a bidirectional LSTM layer and sets the NumHiddenUnits property.

layer = bilstmLayer(numHiddenUnits,Name,Value) sets additional OutputMode, Activations, State, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.
NumHiddenUnits — Number of hidden units
positive integer

Number of hidden units (also known as the hidden size), specified as a positive integer.
The number of hidden units corresponds to the amount of information remembered between time steps (the hidden state). The hidden state can contain information from all previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data. This value can vary from a few dozen to a few thousand.
The hidden state does not limit the number of time steps that are processed in an iteration. To split your sequences into smaller sequences for training, use the 'SequenceLength' option in trainingOptions.
Example: 200
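For example, a minimal sketch that creates a BiLSTM layer with 200 hidden units per direction; because the layer concatenates the forward and backward passes, it outputs 2*NumHiddenUnits channels per time step when OutputMode is 'sequence':

% A BiLSTM layer with 200 hidden units per direction.
% The forward and backward outputs are concatenated, so the layer
% outputs 2*200 = 400 channels per time step in 'sequence' mode.
layer = bilstmLayer(200);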
OutputMode — Output mode
'sequence' (default) | 'last'

Output mode, specified as one of the following:

'sequence' – Output the complete sequence.

'last' – Output the last time step of the sequence.
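For instance, a short sketch contrasting the two modes:

% 'sequence' (default) suits sequence-to-sequence tasks;
% 'last' suits sequence-to-label classification or regression.
layerSeq  = bilstmLayer(100,'OutputMode','sequence');
layerLast = bilstmLayer(100,'OutputMode','last');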
HasStateInputs — Flag for state inputs to layer
0 (false) (default) | 1 (true)

Flag for state inputs to the layer, specified as 0 (false) or 1 (true).

If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has three inputs with names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.
HasStateOutputs — Flag for state outputs from layer
0 (false) (default) | 1 (true)

Flag for state outputs from the layer, specified as 0 (false) or 1 (true).

If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has three outputs with names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.
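For example, a minimal sketch (assuming a release in which HasStateOutputs can be set when you create the layer) showing how the flag changes the output names:

% Expose the hidden and cell states as additional layer outputs.
layer = bilstmLayer(100,'HasStateOutputs',true);
layer.OutputNames    % expected: {'out'}  {'hidden'}  {'cell'}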
InputSize — Input size
'auto' (default) | positive integer

Input size, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically assigns the input size at training time.

Example: 100
StateActivationFunction — Activation function to update the cell and hidden state
'tanh' (default) | 'softsign'

Activation function to update the cell and hidden state, specified as one of the following:

'tanh' – Use the hyperbolic tangent function (tanh).

'softsign' – Use the softsign function $$\text{softsign}(x)=\frac{x}{1+|x|}$$.

The layer uses this option as the function $${\sigma}_{c}$$ in the calculations to update the cell and hidden state. For more information on how activation functions are used in an LSTM layer, see Long Short-Term Memory Layer.
GateActivationFunction — Activation function to apply to the gates
'sigmoid' (default) | 'hardsigmoid'

Activation function to apply to the gates, specified as one of the following:

'sigmoid' – Use the sigmoid function $$\sigma(x)=(1+e^{-x})^{-1}$$.

'hardsigmoid' – Use the hard sigmoid function

$$\sigma(x)=\begin{cases}0 & \text{if } x<-2.5\\ 0.2x+0.5 & \text{if } -2.5\le x\le 2.5\\ 1 & \text{if } x>2.5\end{cases}$$

The layer uses this option as the function $${\sigma}_{g}$$ in the calculations for the layer gates.
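As a sketch, the following selects the alternative activation functions and, for reference, expresses the hard sigmoid above as an anonymous function:

% Select the alternative state and gate activations.
layer = bilstmLayer(100, ...
    'StateActivationFunction','softsign', ...
    'GateActivationFunction','hardsigmoid');

% For reference: the hard sigmoid is the clamped line 0.2*x + 0.5.
hardSigmoid = @(x) max(0, min(1, 0.2*x + 0.5));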
CellState — Cell state
numeric vector

Cell state to use in the layer operation, specified as a 2*NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial cell state when data is passed to the layer.

After setting this property manually, calls to the resetState function set the cell state to this value.

If HasStateInputs is 1 (true), then the CellState property must be empty.
HiddenState — Hidden state
numeric vector

Hidden state to use in the layer operation, specified as a 2*NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial hidden state when data is passed to the layer.

After setting this property manually, calls to the resetState function set the hidden state to this value.

If HasStateInputs is 1 (true), then the HiddenState property must be empty.
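For example, a minimal sketch that seeds both states with zeros of the required 2*NumHiddenUnits-by-1 size:

numHiddenUnits = 100;
layer = bilstmLayer(numHiddenUnits);

% States stack the forward and backward parts, so they are
% 2*NumHiddenUnits-by-1.
layer.HiddenState = zeros(2*numHiddenUnits,1);
layer.CellState   = zeros(2*numHiddenUnits,1);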
InputWeightsInitializer — Function to initialize input weights
'glorot' (default) | 'he' | 'orthogonal' | 'narrow-normal' | 'zeros' | 'ones' | function handle

Function to initialize the input weights, specified as one of the following:
'glorot' – Initialize the input weights with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + numOut), where numOut = 8*NumHiddenUnits.

'he' – Initialize the input weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.

'orthogonal' – Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

'narrow-normal' – Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

'zeros' – Initialize the input weights with zeros.

'ones' – Initialize the input weights with ones.

Function handle – Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.
The layer only initializes the input weights when the InputWeights property is empty.

Data Types: char | string | function_handle
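For example, a hedged sketch of a custom initializer supplied as a function handle; the scaling factor 0.02 is an arbitrary illustrative choice, not a recommended value:

% Hypothetical custom initializer: zero-mean Gaussian with a
% hand-picked standard deviation of 0.02.
myInit = @(sz) 0.02*randn(sz);
layer = bilstmLayer(100,'InputWeightsInitializer',myInit);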
RecurrentWeightsInitializer — Function to initialize recurrent weights
'orthogonal' (default) | 'glorot' | 'he' | 'narrow-normal' | 'zeros' | 'ones' | function handle

Function to initialize the recurrent weights, specified as one of the following:
'orthogonal' – Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

'glorot' – Initialize the recurrent weights with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = NumHiddenUnits and numOut = 8*NumHiddenUnits.

'he' – Initialize the recurrent weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/NumHiddenUnits.

'narrow-normal' – Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

'zeros' – Initialize the recurrent weights with zeros.

'ones' – Initialize the recurrent weights with ones.

Function handle – Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.
The layer only initializes the recurrent weights when the RecurrentWeights property is empty.

Data Types: char | string | function_handle
BiasInitializer — Function to initialize bias
'unit-forget-gate' (default) | 'narrow-normal' | 'ones' | function handle

Function to initialize the bias, specified as one of the following:

'unit-forget-gate' – Initialize the forget gate bias with ones and the remaining biases with zeros.

'narrow-normal' – Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

'ones' – Initialize the bias with ones.

Function handle – Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.

The layer only initializes the bias when the Bias property is empty.

Data Types: char | string | function_handle
InputWeights — Input weights
[] (default) | matrix

Input weights, specified as a matrix.

The input weight matrix is a concatenation of the eight input weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
The input weights are learnable parameters. When training a network, if InputWeights is nonempty, then trainNetwork uses the InputWeights property as the initial value. If InputWeights is empty, then trainNetwork uses the initializer specified by InputWeightsInitializer.

At training time, InputWeights is an 8*NumHiddenUnits-by-InputSize matrix.
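As a sketch, you can supply the full concatenated matrix yourself, provided it has the documented 8*NumHiddenUnits-by-InputSize shape; the 0.01 scaling here is illustrative:

numHiddenUnits = 100;
inputSize = 12;
layer = bilstmLayer(numHiddenUnits,'InputSize',inputSize);

% Manually set the concatenated input weights (8 gate blocks stacked).
layer.InputWeights = 0.01*randn(8*numHiddenUnits,inputSize);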
RecurrentWeights — Recurrent weights
[] (default) | matrix

Recurrent weights, specified as a matrix.

The recurrent weight matrix is a concatenation of the eight recurrent weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
The recurrent weights are learnable parameters. When training a network, if RecurrentWeights is nonempty, then trainNetwork uses the RecurrentWeights property as the initial value. If RecurrentWeights is empty, then trainNetwork uses the initializer specified by RecurrentWeightsInitializer.

At training time, RecurrentWeights is an 8*NumHiddenUnits-by-NumHiddenUnits matrix.
Bias — Layer biases
[] (default) | numeric vector

Layer biases, specified as a numeric vector.

The bias vector is a concatenation of the eight bias vectors for the components (gates) in the bidirectional LSTM layer. The eight vectors are concatenated vertically in the following order:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
The layer biases are learnable parameters. When you train a network, if Bias is nonempty, then trainNetwork uses the Bias property as the initial value. If Bias is empty, then trainNetwork uses the initializer specified by BiasInitializer.

At training time, Bias is an 8*NumHiddenUnits-by-1 numeric vector.
InputWeightsLearnRateFactor — Learning rate factor for input weights
numeric scalar | 1-by-8 numeric vector

Learning rate factor for the input weights, specified as a numeric scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate factor for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate factor for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the eight individual matrices in InputWeights, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 0.1
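For example, a sketch that doubles the factor for the two forget gates (entries 2 and 6 in the gate order listed above) while leaving the other entries at 1:

% Entries follow the documented gate order:
% [inF fgF ccF ogF inB fgB ccB ogB]
layer = bilstmLayer(100, ...
    'InputWeightsLearnRateFactor',[1 2 1 1 1 2 1 1]);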
RecurrentWeightsLearnRateFactor — Learning rate factor for recurrent weights
numeric scalar | 1-by-8 numeric vector

Learning rate factor for the recurrent weights, specified as a numeric scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the eight individual matrices in RecurrentWeights, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 0.1
Example: [1 2 1 1 1 2 1 1]
BiasLearnRateFactor — Learning rate factor for biases
nonnegative scalar | 1-by-8 numeric vector

Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

To control the value of the learning rate factor for the eight individual vectors in Bias, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2

Example: [1 2 1 1 1 2 1 1]
InputWeightsL2Factor — L2 regularization factor for input weights
numeric scalar | 1-by-8 numeric vector

L2 regularization factor for the input weights, specified as a numeric scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual matrices in InputWeights, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 0.1
Example: [1 2 1 1 1 2 1 1]
RecurrentWeightsL2Factor — L2 regularization factor for recurrent weights
numeric scalar | 1-by-8 numeric vector

L2 regularization factor for the recurrent weights, specified as a numeric scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual matrices in RecurrentWeights, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 0.1
Example: [1 2 1 1 1 2 1 1]
BiasL2Factor — L2 regularization factor for biases
nonnegative scalar | 1-by-8 numeric vector

L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual vectors in Bias, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2

Example: [1 2 1 1 1 2 1 1]
Name — Layer name
'' (default) | character vector | string scalar

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with Name set to ''.

Data Types: char | string
NumInputs — Number of inputs
1 | 3

Number of inputs of the layer.

If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has three inputs with names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.

Data Types: double
InputNames — Input names
{'in'} | {'in','hidden','cell'}

Input names of the layer.

If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has three inputs with names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.
NumOutputs — Number of outputs
1 | 3

Number of outputs of the layer.

If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has three outputs with names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.

Data Types: double
OutputNames — Output names
{'out'} | {'out','hidden','cell'}

Output names of the layer.

If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has three outputs with names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.
Create a bidirectional LSTM layer with the name 'bilstm1' and 100 hidden units.
layer = bilstmLayer(100,'Name','bilstm1')
layer = 
  BiLSTMLayer with properties:

                       Name: 'bilstm1'
                 InputNames: {'in'}
                OutputNames: {'out'}
                  NumInputs: 1
                 NumOutputs: 1
             HasStateInputs: 0
            HasStateOutputs: 0

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
                  CellState: []
Include a bidirectional LSTM layer in a Layer array.
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
sequenceInputLayer(inputSize)
bilstmLayer(numHiddenUnits)
fullyConnectedLayer(numClasses)
softmaxLayer
classificationLayer]
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   BiLSTM                  BiLSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
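To train the network, pass the layer array to trainNetwork with suitable sequence data. A minimal sketch follows; XTrain and YTrain are placeholders for your own sequence predictors and categorical labels, and the option values are illustrative:

% Hypothetical training setup for the layer array above.
options = trainingOptions('adam', ...
    'MaxEpochs',30, ...
    'SequenceLength','longest', ...
    'Plots','training-progress');
% net = trainNetwork(XTrain,YTrain,layers,options);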
Layers in a layer array or layer graph pass data specified as formatted dlarray objects. You can interact with these dlarray objects in automatic differentiation workflows, such as when developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.

This table shows the supported input formats of a BiLSTMLayer object and the corresponding output format. If the output of the layer is passed to a custom layer that does not inherit from the nnet.layer.Formattable class, or a FunctionLayer object with the Formattable option set to false, then the layer receives an unformatted dlarray object with dimensions ordered corresponding to the formats outlined in this table.
Input Format                      OutputMode    Output Format
"CBT" (channel, batch, time)      "sequence"    "CBT" (channel, batch, time)
                                  "last"        "CB" (channel, batch)
In dlnetwork objects, BiLSTMLayer objects also support the following input and output format combinations.

Input Format                                                  OutputMode    Output Format
"SCBT" (spatial, channel, batch, time)                        "sequence"    "CBT" (channel, batch, time)
                                                              "last"        "CB" (channel, batch)
"SSCBT" (spatial, spatial, channel, batch, time)              "sequence"    "CBT" (channel, batch, time)
                                                              "last"        "CB" (channel, batch)
"SSSCBT" (spatial, spatial, spatial, channel, batch, time)    "sequence"    "CBT" (channel, batch, time)
                                                              "last"        "CB" (channel, batch)
To use these input formats in trainNetwork workflows, first convert the data to "CBT" (channel, batch, time) format using flattenLayer.
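For instance, a sketch of an image-sequence network in which flattenLayer collapses the spatial dimensions before the BiLSTM layer; the input size and class count are illustrative:

% Hypothetical image-sequence classifier; flattenLayer converts
% "SSCBT" data to "CBT" before the recurrent layer.
layers = [ ...
    sequenceInputLayer([28 28 1])
    flattenLayer
    bilstmLayer(100,'OutputMode','last')
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];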
If the HasStateInputs property is 1 (true), then the layer has two additional inputs with names 'hidden' and 'cell', which correspond to the hidden state and cell state, respectively. These additional inputs expect input format "CB" (channel, batch).

If the HasStateOutputs property is 1 (true), then the layer has two additional outputs with names 'hidden' and 'cell', which correspond to the hidden state and cell state, respectively. These additional outputs have output format "CB" (channel, batch).
Behavior changed in R2019a

Starting in R2019a, the software, by default, initializes the layer input weights of this layer using the Glorot initializer. This behavior helps stabilize training and usually reduces the training time of deep networks.

In previous releases, the software, by default, initializes the layer input weights by sampling from a normal distribution with zero mean and variance 0.01. To reproduce this behavior, set the 'InputWeightsInitializer' option of the layer to 'narrow-normal'.
Behavior changed in R2019a

Starting in R2019a, the software, by default, initializes the layer recurrent weights of this layer with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. This behavior helps stabilize training and usually reduces the training time of deep networks.

In previous releases, the software, by default, initializes the layer recurrent weights by sampling from a normal distribution with zero mean and variance 0.01. To reproduce this behavior, set the 'RecurrentWeightsInitializer' option of the layer to 'narrow-normal'.
[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010.
[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In Proceedings of the 2015 IEEE International Conference on Computer Vision, 1026–1034. Washington, DC: IEEE Computer Vision Society, 2015.
[3] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks." arXiv preprint arXiv:1312.6120 (2013).
Usage notes and limitations:

When generating code with Intel® MKL-DNN:

The StateActivationFunction property must be set to 'tanh'.

The GateActivationFunction property must be set to 'sigmoid'.

The HasStateInputs and HasStateOutputs properties must be set to 0 (false).
Usage notes and limitations:

For GPU code generation, the StateActivationFunction property must be set to 'tanh'.

For GPU code generation, the GateActivationFunction property must be set to 'sigmoid'.

The HasStateInputs and HasStateOutputs properties must be set to 0 (false).
trainingOptions | trainNetwork | sequenceInputLayer | lstmLayer | gruLayer | convolution1dLayer | maxPooling1dLayer | averagePooling1dLayer | globalMaxPooling1dLayer | globalAveragePooling1dLayer