This example shows how to create a deep learning experiment to compare different data preprocessing and network depth configurations for sequence-to-sequence regression. In this example, you use Experiment Manager to train long short-term memory (LSTM) networks that predict the remaining useful life (RUL) of engines. The experiment uses the Turbofan Engine Degradation Simulation Data Set described in [1]. For more information on processing this data set for sequence-to-sequence regression, see Sequence-to-Sequence Regression Using Deep Learning.
RUL captures how many operational cycles an engine can complete before failure. To learn more from the sequence data when the engines are close to failing, preprocess the data by clipping the responses at a specified threshold. This preprocessing operation allows the network to focus on predictor behavior close to failure by treating all instances with higher RUL values as equal. For example, this figure shows the first response observation and the corresponding clipped response with a threshold of 150.
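The clipping operation described above can be sketched in a few lines. This is a minimal illustration, not code from the example project; the variable names 'rulThreshold' and 'responses' are assumptions.

```matlab
% Clip RUL responses at a threshold so that all values above it are
% treated as equal. 'responses' is a cell array of response sequences.
rulThreshold = 150;
responses = {linspace(250,0,250)'};    % one engine's simulated RUL sequence
clippedResponses = cellfun(@(y) min(y,rulThreshold), responses, ...
    'UniformOutput',false);            % values above 150 become 150
```

After clipping, every cycle with a true RUL above 150 carries the same target value, so the loss concentrates on the degradation region near failure.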
When you train a deep learning network, how you preprocess data and the number of layers in the network can affect the training behavior and performance of the network. Choosing the depth of an LSTM network involves balancing speed and accuracy. For example, deeper networks can be more accurate but take longer to train and converge [2].
By default, when you run a built-in training experiment for regression, Experiment Manager computes the loss and root mean squared error (RMSE) for each trial in your experiment. This example compares the performance of the network in each trial by using a custom metric that is specific to the problem data set. For more information on using custom metric functions, see Evaluate Deep Learning Experiments by Using Metric Functions.
First, open the example. Experiment Manager loads a project with a preconfigured experiment. To open the experiment, in the Experiment Browser, double-click the name of the experiment.
Built-in training experiments consist of a description, a table of hyperparameters, a setup function, and a collection of metric functions to evaluate the results of the experiment. For more information, see Configure Built-In Training Experiment.
The Description field contains a textual description of the experiment. For this example, the description is:
Sequence-to-sequence regression to predict the remaining useful life (RUL) of engines. This experiment compares network performance when changing data thresholding level and LSTM layer depth.
The Hyperparameter Table specifies the strategy (Exhaustive Sweep) and hyperparameter values to use for the experiment. When you run the experiment, Experiment Manager sweeps through the hyperparameter values and trains the network multiple times. Each trial uses a different combination of the hyperparameter values specified in the hyperparameter table. This example uses two hyperparameters:
'Threshold' sets all response data above the threshold value to be equal to the threshold value. To prevent uniform response data, use threshold values greater than or equal to 150.
'LSTMDepth' indicates the number of LSTM layers used in the network. Specify this hyperparameter as an integer between 1 and 3.
The Setup Function configures the training data, network architecture, and training options for the experiment. To inspect the setup function, under Setup Function, click Edit. The setup function opens in MATLAB Editor.
In this example, the setup function has three sections.
Load and Preprocess Data downloads and extracts the Turbofan Engine Degradation Simulation Data Set from https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognostic-data-repository/. This section of the setup function also filters out constant-valued features, normalizes the predictor data to have zero mean and unit variance, clips the response data by using the value of the 'Threshold' hyperparameter, and randomly selects training examples to be used for validation.
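The preprocessing steps listed above might look like the following sketch. The variable names ('XTrain', 'YTrain', 'params') are assumptions for illustration and may differ from the project's setup function; 'params' stands for the hyperparameter structure that Experiment Manager passes to the setup function.

```matlab
% Filter out constant-valued features (min equals max across all sequences).
m = min([XTrain{:}],[],2);
M = max([XTrain{:}],[],2);
idxConstant = (M == m);
XTrain = cellfun(@(X) X(~idxConstant,:), XTrain, 'UniformOutput',false);

% Normalize predictors to zero mean and unit variance.
mu = mean([XTrain{:}],2);
sig = std([XTrain{:}],0,2);
XTrain = cellfun(@(X) (X - mu)./sig, XTrain, 'UniformOutput',false);

% Clip responses using the 'Threshold' hyperparameter value.
YTrain = cellfun(@(Y) min(Y,params.Threshold), YTrain, 'UniformOutput',false);
```

Each cell of 'XTrain' holds a numFeatures-by-numTimeSteps matrix for one engine, so concatenating the cells along the second dimension computes statistics over all time steps of all engines.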
Define Network Architecture defines the architecture for an LSTM network for sequence-to-sequence regression. The network consists of LSTM layers with 128 hidden units, followed by a fully connected layer of size 100 and a dropout layer with dropout probability 0.5. The number of LSTM layers equals the 'LSTMDepth' value from the hyperparameter table.
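A depth-parameterized layer array of this kind can be built with a loop. This is a hedged sketch under the assumption that 'numFeatures' holds the number of remaining predictor features and 'params' is the hyperparameter structure; the project's setup function may differ in detail.

```matlab
% Stack 'LSTMDepth' LSTM layers, each with 128 hidden units.
numHiddenUnits = 128;
layers = sequenceInputLayer(numFeatures);
for i = 1:params.LSTMDepth
    layers = [layers; lstmLayer(numHiddenUnits,'OutputMode','sequence')];
end
layers = [layers
    fullyConnectedLayer(100)
    dropoutLayer(0.5)
    fullyConnectedLayer(1)     % one regression output per time step
    regressionLayer];
```

Because each 'lstmLayer' uses 'OutputMode','sequence', the network emits a prediction at every time step, as sequence-to-sequence regression requires.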
Specify Training Options defines the training options for the experiment. Because deeper networks take longer to converge, the number of epochs is set to 300 so that networks of all depths can converge. This example validates the network every 30 iterations. The initial learning rate is 0.01 and drops by a factor of 0.2 every 15 epochs. With the training option 'ExecutionEnvironment' set to 'auto', the experiment runs on a GPU if one is available. Otherwise, the software uses the CPU. Because this example compares network depths and trains for many epochs, using the GPU speeds up training time considerably. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For more information, see GPU Support by Release (Parallel Computing Toolbox).
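One way to express these options with 'trainingOptions' is sketched below. The solver name and the 'XValidation'/'YValidation' variables are assumptions; the project's setup function may use a different solver or pass validation data differently.

```matlab
% Training options matching the schedule described above:
% 300 epochs, validation every 30 iterations, initial learning rate 0.01
% dropping by a factor of 0.2 every 15 epochs.
options = trainingOptions('adam', ...
    'MaxEpochs',300, ...
    'ValidationData',{XValidation,YValidation}, ...  % assumed validation split
    'ValidationFrequency',30, ...
    'InitialLearnRate',0.01, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.2, ...
    'LearnRateDropPeriod',15, ...
    'ExecutionEnvironment','auto');
```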
The Metrics section specifies optional functions that evaluate the results of the experiment. Experiment Manager evaluates these functions each time it finishes training the network. To inspect a metric function, select the name of the metric function and click Edit. The metric function opens in MATLAB Editor.
The prediction of the RUL of an engine requires careful consideration. If the prediction underestimates the RUL, engine maintenance might be scheduled before it is necessary. If the prediction overestimates the RUL, the engine might fail while in operation, resulting in high costs or safety concerns. To help mitigate these scenarios, this example includes a metric function, MeanMaxAbsoluteError, that identifies networks that underpredict or overpredict the RUL. The MeanMaxAbsoluteError metric calculates the maximum absolute error, averaged across the entire training set. This metric calls the predict function to make a sequence of RUL predictions from the training set. Then, after calculating the maximum absolute error between each training response and predicted response sequence, the function computes the mean of all maximum absolute errors. This metric identifies the maximum deviations between the actual and predicted responses.
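The metric described above can be sketched as follows. Experiment Manager passes a structure to custom metric functions whose 'trainedNetwork' field holds the trial's network; how the training data 'XTrain' and 'YTrain' are obtained inside the function (for example, by repeating the preprocessing) is omitted here and assumed.

```matlab
function metricOutput = MeanMaxAbsoluteError(trialInfo)
% Maximum absolute error per engine, averaged over the training set.
net = trialInfo.trainedNetwork;

% XTrain and YTrain are assumed to be reloaded here (not shown).
YPred = predict(net,XTrain,'MiniBatchSize',1);   % cell array of sequences

% Max absolute error within each sequence, then the mean across sequences.
maxAbsErrors = cellfun(@(y,yp) max(abs(y - yp)), YTrain, YPred);
metricOutput = mean(maxAbsErrors);
end
```

Using 'MiniBatchSize',1 avoids the padding that batching variable-length sequences would introduce, which would otherwise distort the per-sequence errors.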
When you run the experiment, Experiment Manager trains the network defined by the setup function nine times. Each trial uses a different combination of hyperparameter values. By default, Experiment Manager runs one trial at a time. If you have Parallel Computing Toolbox™, you can run multiple trials at the same time. For best results, before you run your experiment, start a parallel pool with as many workers as GPUs. For more information, see Use Experiment Manager to Train Networks in Parallel and GPU Support by Release (Parallel Computing Toolbox).
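Starting a pool with one worker per GPU, as recommended above, can be done from the Command Window before running the experiment. This is a minimal sketch and assumes Parallel Computing Toolbox is installed.

```matlab
% Start a parallel pool with as many workers as available GPUs.
numGPUs = gpuDeviceCount;
if numGPUs > 0 && isempty(gcp('nocreate'))
    parpool(numGPUs);
end
```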
To run one trial of the experiment at a time, on the Experiment Manager toolstrip, click Run.
To run multiple trials at the same time, click Use Parallel and then Run. If there is no current parallel pool, Experiment Manager starts one using the default cluster profile. Experiment Manager then executes multiple simultaneous trials, depending on the number of parallel workers available.
A table of results displays the metric function values for each trial.
While the experiment is running, click Training Plot to display the training plot and track the progress of each trial. The elapsed time for a trial to complete training increases with network depth.
In the table of results, the MeanMaxAbsoluteError value quantifies how much the network underpredicts or overpredicts the RUL. The Validation RMSE value quantifies how well the network generalizes to unseen data. To find the best result for your experiment, sort the table of results and select the trial that has the lowest MeanMaxAbsoluteError and Validation RMSE values.
Point to the Validation RMSE or MeanMaxAbsoluteError column.
Click the triangle icon.
Select Sort in Ascending Order.
If no single trial minimizes both values, consider giving preference to a trial that ranks well for each value. For instance, in these results, trial 3 has the smallest Validation RMSE value and the second smallest MeanMaxAbsoluteError value.
To record observations about the results of your experiment, add an annotation.
In the results table, right-click the Validation RMSE cell of the best trial.
Select Add Annotation.
In the Annotations pane, enter your observations in the text box.
Repeat the previous steps for the MeanMaxAbsoluteError cell.
To test the performance of your best trial, export the trained network and display the predicted response sequence for several randomly chosen test sequences.
Select the best trial in your experiment.
On the Experiment Manager toolstrip, click Export.
In the dialog window, enter the name of a workspace variable for the exported network.
Use the exported network and the Threshold value of the network as inputs to the helper function helperPlot. For instance, in the MATLAB Command Window, enter:
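A call might look like the following, assuming the network was exported to a workspace variable named 'trainedNetwork' and the best trial used a Threshold of 150; both names are illustrative.

```matlab
% Plot true vs. predicted RUL for randomly chosen test sequences.
helperPlot(trainedNetwork,150)
```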
The function plots the true and predicted response sequences of unseen test data.
In the Experiment Browser, right-click the name of the project and select Close Project. Experiment Manager closes all of the experiments and results contained in the project.
[1] Saxena, Abhinav, Kai Goebel, Don Simon, and Neil Eklund. "Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation." 2008 International Conference on Prognostics and Health Management (2008): 1-9.
[2] Jozefowicz, Rafal, Wojciech Zaremba, and Ilya Sutskever. "An Empirical Exploration of Recurrent Network Architectures." Proceedings of the 32nd International Conference on Machine Learning (2015): 2342-2350.
[3] Saxena, Abhinav, and Kai Goebel. "Turbofan Engine Degradation Simulation Data Set." NASA Ames Prognostics Data Repository, https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognostic-data-repository/, NASA Ames Research Center, Moffett Field, CA.