Create a Deep Learning Experiment for Classification

This example shows how to train a deep learning network for classification by using Experiment Manager. In this example, you train two networks to classify images of MathWorks merchandise into five classes. Each network is trained using three algorithms. In each case, a confusion matrix compares the true classes for a set of validation images with the classes predicted by the trained network. For more information on training a network for image classification, see Train Deep Learning Network to Classify New Images.

Open Experiment

First, open the example. Experiment Manager loads a project with a preconfigured experiment that you can inspect and run. To open the experiment, in the Experiment Browser pane, double-click the name of the experiment (ClassificationExperiment).

Built-in training experiments consist of a description, a table of hyperparameters, a setup function, and a collection of metric functions to evaluate the results of the experiment. For more information, see Configure Built-In Training Experiment.

The Description field contains a textual description of the experiment. For this example, the description is:

Merchandise image classification using:
- an untrained network (default) or a pretrained network (googlenet)
- various solvers for training networks (sgdm, rmsprop, or adam)

The Hyperparameters section specifies the strategy (Exhaustive Sweep) and hyperparameter values to use for the experiment. When you run the experiment, Experiment Manager trains the network using every combination of hyperparameter values specified in the hyperparameter table. This example uses two hyperparameters:

  • Network specifies the network to train. The options include "default" (the default network provided by the experiment template for image classification) and "googlenet" (a pretrained GoogLeNet network with modified layers for transfer learning).

  • Solver indicates the algorithm used to train the network. The options include "sgdm" (stochastic gradient descent with momentum), "rmsprop" (root mean square propagation), and "adam" (adaptive moment estimation). For more information about these algorithms, see Stochastic Gradient Descent.
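With two values of Network and three values of Solver, the exhaustive sweep produces six trials. Experiment Manager generates these combinations automatically; the following sketch only illustrates the pairing:

```matlab
% Illustrative only: enumerate the six hyperparameter combinations
% that an exhaustive sweep over Network and Solver produces.
networks = ["default" "googlenet"];
solvers  = ["sgdm" "rmsprop" "adam"];

trial = 0;
for n = networks
    for s = solvers
        trial = trial + 1;
        fprintf("Trial %d: Network = %s, Solver = %s\n",trial,n,s);
    end
end
```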

The Setup Function configures the training data, network architecture, and training options for the experiment. To inspect the setup function, under Setup Function, click Edit. The setup function opens in MATLAB® Editor.

The input to the setup function is a structure with fields from the hyperparameter table. The setup function returns three outputs that you use to train a network for image classification problems. The setup function has three sections.

  • Load Training Data defines image datastores containing the training and validation data. This example loads images from a small data set that contains 75 images of MathWorks merchandise, belonging to five different classes. The images are of size 227-by-227-by-3. For more information on this data set, see Image Data Sets.

  • Define Network Architecture defines the architecture for a convolutional neural network for deep learning classification. In this example, the choice of network to train depends on the value of the hyperparameter Network.

  • Specify Training Options defines a trainingOptions object for the experiment. The example trains the network for 8 epochs using the algorithm specified by the Solver entry in the hyperparameter table.
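The three sections above can be sketched as follows. This is a simplified, hypothetical outline, not the exact function shipped with the experiment; the folder name, split ratio, and layer choices are assumptions:

```matlab
function [imdsTrain,layers,options] = ClassificationExperiment_setup(params)
% Hypothetical sketch of a setup function. The input params is a
% structure with fields Network and Solver from the hyperparameter table.

% Load Training Data (folder name and split ratio are assumptions)
imds = imageDatastore("MerchData", ...
    IncludeSubfolders=true,LabelSource="foldernames");
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7);

% Define Network Architecture based on the Network hyperparameter
switch params.Network
    case "default"
        layers = [ ...
            imageInputLayer([227 227 3])
            convolution2dLayer(5,20)
            batchNormalizationLayer
            reluLayer
            fullyConnectedLayer(5)
            softmaxLayer
            classificationLayer];
    case "googlenet"
        % Transfer learning: start from a pretrained GoogLeNet
        % (requires its support package) and replace the final
        % learnable and classification layers for five classes.
        layers = layerGraph(googlenet);
        % ... modify the last layers here ...
end

% Specify Training Options using the Solver hyperparameter
options = trainingOptions(params.Solver, ...
    MaxEpochs=8, ...
    ValidationData=imdsValidation);
end
```

The three outputs map directly to the arguments that built-in training experiments pass to the training function: data, network, and options.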

The Metrics section specifies optional functions that evaluate the results of the experiment. This example does not include any custom metric functions.

Run Experiment

When you run the experiment, Experiment Manager trains the network defined by the setup function six times. Each trial uses a different combination of hyperparameter values. By default, Experiment Manager runs one trial at a time. If you have Parallel Computing Toolbox™, you can run multiple trials at the same time. For best results, before you run your experiment, start a parallel pool with as many workers as GPUs. For more information, see Use Experiment Manager to Train Networks in Parallel and GPU Support by Release (Parallel Computing Toolbox).

  • To run one trial of the experiment at a time, on the Experiment Manager toolstrip, click Run.

  • To run multiple trials at the same time, click Use Parallel and then Run. If there is no current parallel pool, Experiment Manager starts one using the default cluster profile. Experiment Manager then executes multiple simultaneous trials, depending on the number of parallel workers available.
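If you prefer to create the pool yourself before clicking Use Parallel, a minimal sketch of matching workers to GPUs might look like this (the profile name "Processes" is the default local profile in recent releases; yours may differ):

```matlab
% Start a parallel pool with one worker per available GPU,
% if no pool is already running.
numGPUs = gpuDeviceCount("available");
if numGPUs > 0 && isempty(gcp("nocreate"))
    parpool("Processes",numGPUs);
end
```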

A table of results displays the accuracy and loss for each trial.

While the experiment is running, click Training Plot to display the training plot and track the progress of each trial.

Evaluate Results

To find the best result for your experiment, sort the table of results by validation accuracy.

  1. Point to the Validation Accuracy column.

  2. Click the triangle icon.

  3. Select Sort in Descending Order.

The trial with the highest validation accuracy appears at the top of the results table.

To display the confusion matrix for this trial, select the top row in the results table and click Confusion Matrix.
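The confusion matrix shown by Experiment Manager compares true and predicted classes for the validation images. You can reproduce a similar chart yourself; this sketch assumes you have exported the trained network as net and still have the validation datastore imdsValidation from the setup function:

```matlab
% Compare true validation labels against the network's predictions.
YPred = classify(net,imdsValidation);
YTrue = imdsValidation.Labels;
confusionchart(YTrue,YPred);
```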

To record observations about the results of your experiment, add an annotation.

  1. In the results table, right-click the Validation Accuracy cell of the best trial.

  2. Select Add Annotation.

  3. In the Annotations pane, enter your observations in the text box.

For more information, see Sort, Filter, and Annotate Experiment Results.

Close Experiment

In the Experiment Browser pane, right-click the name of the project and select Close Project. Experiment Manager closes all of the experiments and results contained in the project.
