Long Short-Term Memory Networks

This topic explains how to work with sequence and time series data for classification and regression tasks using long short-term memory (LSTM) networks. For an example showing how to classify sequence data using an LSTM network, see Sequence Classification Using Deep Learning.

An LSTM network is a type of recurrent neural network (RNN) that can learn long-term dependencies between time steps of sequence data.

LSTM Network Architecture

The core components of an LSTM network are a sequence input layer and an LSTM layer. A sequence input layer inputs sequence or time series data into the network. An LSTM layer learns long-term dependencies between time steps of sequence data.

This diagram illustrates the architecture of a simple LSTM network for classification. The network starts with a sequence input layer followed by an LSTM layer. To predict class labels, the network ends with a fully connected layer, a softmax layer, and a classification output layer.

This diagram illustrates the architecture of a simple LSTM network for regression. The network starts with a sequence input layer followed by an LSTM layer. The network ends with a fully connected layer and a regression output layer.

Classification LSTM Networks

To create an LSTM network for sequence-to-label classification, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, a softmax layer, and a classification output layer.

Specify the size of the sequence input layer to be the number of features of the input data. Specify the size of the fully connected layer to be the number of classes. You do not need to specify the sequence length.

For the LSTM layer, specify the number of hidden units and the output mode 'last'.

numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

For an example showing how to train an LSTM network for sequence-to-label classification and classify new data, see Sequence Classification Using Deep Learning.
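As a rough sketch of the training call itself (not the workflow from that example), assume XTrain is a cell array of numFeatures-by-sequenceLength matrices and YTrain is a categorical vector of labels; both names and all training options below are hypothetical, illustrative values.

% XTrain and YTrain are hypothetical training data: XTrain is a cell
% array of numFeatures-by-sequenceLength matrices, YTrain is a
% categorical vector of class labels.
options = trainingOptions('adam', ...
    'MaxEpochs',30, ...
    'MiniBatchSize',27, ...
    'Plots','training-progress');
net = trainNetwork(XTrain,YTrain,layers,options);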

To create an LSTM network for sequence-to-sequence classification, use the same architecture for sequence-to-label classification, but set the output mode of the LSTM layer to 'sequence'.

numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

Regression LSTM Networks

To create an LSTM network for sequence-to-one regression, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, and a regression output layer.

Specify the size of the sequence input layer to be the number of features of the input data. Specify the size of the fully connected layer to be the number of responses. You do not need to specify the sequence length.

For the LSTM layer, specify the number of hidden units and the output mode 'last'.

numFeatures = 12;
numHiddenUnits = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numResponses)
    regressionLayer];

To create an LSTM network for sequence-to-sequence regression, use the same architecture for sequence-to-one regression, but set the output mode of the LSTM layer to 'sequence'.

numFeatures = 12;
numHiddenUnits = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(numResponses)
    regressionLayer];

For an example showing how to train an LSTM network for sequence-to-sequence regression and predict on new data, see Sequence-to-Sequence Regression Using Deep Learning.

Deeper LSTM Networks

You can make LSTM networks deeper by inserting extra LSTM layers with the output mode 'sequence' before the LSTM layer.

For sequence-to-label classification networks, the output mode of the last LSTM layer must be 'last'.

numFeatures = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    lstmLayer(numHiddenUnits2,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

For sequence-to-sequence classification networks, the output mode of the last LSTM layer must be 'sequence'.

numFeatures = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    lstmLayer(numHiddenUnits2,'OutputMode','sequence')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

Layers

Sequence Input Layer

A sequence input layer inputs sequence data to a network. You can create a sequence input layer using sequenceInputLayer.
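For example, a sequence input layer for data with 12 features per time step (12 is an arbitrary illustrative value):

% Input layer for sequences with 12 features per time step.
inputLayer = sequenceInputLayer(12);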

LSTM Layer

An LSTM layer learns long-term dependencies between time steps in time series and sequence data.

Create an LSTM layer using lstmLayer.
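For example, an LSTM layer with 100 hidden units that returns the full output sequence (the number of hidden units is an illustrative value):

% LSTM layer with 100 hidden units that outputs the hidden state at
% every time step.
lstm = lstmLayer(100,'OutputMode','sequence');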

Bidirectional LSTM Layer

A bidirectional LSTM (BiLSTM) layer is an RNN layer that learns bidirectional long-term dependencies between time steps. These dependencies can be useful when you want the network to learn from the complete time series at each time step.

Create a BiLSTM layer using bilstmLayer.
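For example, a BiLSTM layer with 100 hidden units that returns only the last time step of the output (values chosen for illustration):

% Bidirectional LSTM layer with 100 hidden units that outputs only the
% last time step.
bilstm = bilstmLayer(100,'OutputMode','last');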

Classification and Prediction

To classify or make predictions on new data, use classify and predict.

LSTM networks can remember the state of the network between predictions. The network state is useful when you do not have the complete time series in advance, or if you want to make multiple predictions on a long time series.

To predict and classify on parts of a time series and update the network state, you can use predictAndUpdateState and classifyAndUpdateState. To reset the network state between predictions, use resetState.
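A minimal sketch of stateful classification, assuming net is a trained LSTM network and X is a single sequence stored as a numFeatures-by-numTimeSteps matrix (both variable names are hypothetical here):

% Classify one time step at a time, carrying the network state forward
% between calls. net and X are hypothetical variables.
numTimeSteps = size(X,2);
for t = 1:numTimeSteps
    [net,label] = classifyAndUpdateState(net,X(:,t));
end

% Reset the state before processing an unrelated sequence.
net = resetState(net);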

For an example showing how to forecast future time steps of a sequence, see Time Series Forecasting Using Deep Learning.

Sequence Padding, Truncation, and Splitting

LSTM networks support input data with varying sequence lengths. When passing data through the network, the software pads, truncates, or splits sequences in each mini-batch to have the specified length. You can specify the sequence lengths and the value used to pad the sequences using the SequenceLength and SequencePaddingValue name-value pair arguments in trainingOptions.
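For example, the following illustrative training options pad the sequences in each mini-batch to the length of the longest sequence in that mini-batch and use 0 as the padding value; the solver and mini-batch size are arbitrary choices:

options = trainingOptions('adam', ...
    'MiniBatchSize',27, ...
    'SequenceLength','longest', ...
    'SequencePaddingValue',0);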

Sort Sequences by Length

To reduce the amount of padding or discarded data when padding or truncating sequences, try sorting your data by sequence length. To sort the data by sequence length, first get the number of columns of each sequence by applying size(X,2) to every sequence using cellfun. Then sort the sequence lengths using sort, and use the second output to reorder the original sequences.

% Get the length (number of columns) of each sequence in the cell array.
sequenceLengths = cellfun(@(X) size(X,2), XTrain);
% Sort the lengths and use the sorting index to reorder the sequences.
[sequenceLengthsSorted,idx] = sort(sequenceLengths);
XTrain = XTrain(idx);

The following figures show the sequence lengths of the sorted and unsorted data in bar charts.

Pad Sequences

If you specify the sequence length 'longest', then the software pads the sequences in each mini-batch to have the same length as the longest sequence in that mini-batch. This option is the default.

The following figures illustrate the effect of setting 'SequenceLength' to 'longest'.

Truncate Sequences

If you specify the sequence length to be 'shortest', then the software truncates the sequences in each mini-batch to have the same length as the shortest sequence in that mini-batch. The remaining data in the sequences is discarded.

The following figures illustrate the effect of setting 'SequenceLength' to 'shortest'.

Split Sequences

If you specify the sequence length to be an integer value, then the software pads the sequences in each mini-batch to have the same length as the longest sequence, then splits the sequences into smaller sequences of the specified length. If splitting occurs, then the software creates extra mini-batches.

The following figures illustrate the effect of setting 'SequenceLength' to 5.
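For example, these illustrative training options pad and then split the sequences in each mini-batch into sequences of length 5 (the solver and mini-batch size are arbitrary):

options = trainingOptions('adam', ...
    'MiniBatchSize',27, ...
    'SequenceLength',5);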

Normalize Sequence Data

To normalize sequence data, first calculate the per-feature mean and standard deviation of all the sequences. Then, for each training observation, subtract the mean value and divide by the standard deviation.

% Per-feature mean and standard deviation over all time steps of all
% training sequences.
mu = mean([XTrain{:}],2);
sigma = std([XTrain{:}],0,2);
% Normalize each training sequence using these statistics.
XTrain = cellfun(@(X) (X-mu)./sigma,XTrain,'UniformOutput',false);
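To normalize held-out data in the same way, reuse the training statistics; here XTest is a hypothetical cell array of test sequences:

% Apply the training mean and standard deviation to the test sequences.
XTest = cellfun(@(X) (X-mu)./sigma,XTest,'UniformOutput',false);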

Out of Memory Data

Use custom mini-batch datastores for sequence, time series, and signal data when data is too large to fit in memory, or to perform specific operations when reading batches of data.

To learn how to develop a custom mini-batch datastore, see Develop Custom Mini-Batch Datastore.

LSTM Layer Architecture

This diagram illustrates the flow of a time series X with D features of length S through an LSTM layer. In this diagram, h denotes the output (also known as the hidden state) and c denotes the cell state.

The first LSTM block takes the initial state of the network and the first time step of the sequence X_1, and then computes the first output h_1 and the updated cell state c_1. At time step t, the block takes the current state of the network (c_{t-1}, h_{t-1}) and the next time step of the sequence X_t, and then computes the output h_t and the updated cell state c_t.

The state of the layer consists of the hidden state (also known as the output state) and the cell state. The hidden state at time step t contains the output of the LSTM layer for this time step. The cell state contains information learned from the previous time steps. At each time step, the layer adds information to or removes information from the cell state, where the layer controls these updates using gates.

This table summarizes the components that control the cell state and hidden state of the layer.

Component             Purpose
Input gate (i)        Control level of cell state update
Forget gate (f)       Control level of cell state reset (forget)
Cell candidate (g)    Add information to cell state
Output gate (o)       Control level of cell state added to hidden state

This diagram illustrates the flow of data at time step t. The diagram highlights how the gates forget, update, and output the cell and hidden states.

The learnable weights of an LSTM layer are the input weights W (InputWeights), the recurrent weights R (RecurrentWeights), and the bias b (Bias). The matrices W, R, and b are concatenations of the input weights, the recurrent weights, and the bias of each component, respectively. These matrices are concatenated as follows:

$$W = \begin{bmatrix} W_i \\ W_f \\ W_g \\ W_o \end{bmatrix}, \quad R = \begin{bmatrix} R_i \\ R_f \\ R_g \\ R_o \end{bmatrix}, \quad b = \begin{bmatrix} b_i \\ b_f \\ b_g \\ b_o \end{bmatrix},$$

where i, f, g, and o denote the input gate, forget gate, cell candidate, and output gate, respectively.

The cell state at time step t is given by

$$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$

where $\odot$ denotes the Hadamard product (element-wise multiplication of vectors).

The hidden state at time step t is given by

$$h_t = o_t \odot \sigma_c(c_t),$$

where $\sigma_c$ denotes the state activation function. The lstmLayer function, by default, uses the hyperbolic tangent function (tanh) for the state activation function.

This table shows the formula for each component at time step t.

Component         Formula
Input gate        $i_t = \sigma_g(W_i x_t + R_i h_{t-1} + b_i)$
Forget gate       $f_t = \sigma_g(W_f x_t + R_f h_{t-1} + b_f)$
Cell candidate    $g_t = \sigma_c(W_g x_t + R_g h_{t-1} + b_g)$
Output gate       $o_t = \sigma_g(W_o x_t + R_o h_{t-1} + b_o)$

In these calculations, $\sigma_g$ denotes the gate activation function. The lstmLayer function, by default, uses the sigmoid function given by $\sigma(x) = (1 + e^{-x})^{-1}$ for the gate activation function.
