# fitrnet

Train neural network regression model

## Syntax

```
Mdl = fitrnet(Tbl,ResponseVarName)
Mdl = fitrnet(___,Name,Value)
```

## Description

Use `fitrnet` to train a feedforward, fully connected neural network for regression. The first fully connected layer of the neural network has a connection from the network input (predictor data), and each subsequent layer has a connection from the previous layer. Each fully connected layer multiplies the input by a weight matrix and then adds a bias vector. An activation function follows each fully connected layer, excluding the last. The final fully connected layer produces the network's output, namely the predicted response values. For more information, see Neural Network Structure.
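The layer arithmetic described above can be sketched directly (a minimal illustration only, not the `fitrnet` implementation; the input values, layer sizes, and random weights below are arbitrary placeholders):

```matlab
% Forward pass of a small fully connected regression network:
% one hidden layer with a ReLU activation, then a linear output layer.
x  = [1.5; -0.2; 3.0];               % network input (predictor values)
W1 = randn(10,3);  b1 = randn(10,1); % first fully connected layer
W2 = randn(1,10);  b2 = randn(1,1);  % final fully connected layer

z1   = W1*x + b1;   % multiply input by weight matrix, add bias vector
a1   = max(z1,0);   % ReLU activation follows the layer
yhat = W2*a1 + b2;  % final layer output: the predicted response value
```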

`Mdl = fitrnet(Tbl,ResponseVarName)` returns a neural network regression model `Mdl` trained using the predictors in the table `Tbl` and the response values in the `ResponseVarName` table variable.

`Mdl = fitrnet(___,Name,Value)` specifies options using one or more name-value arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can adjust the number of outputs and the activation functions for the fully connected layers by specifying the `LayerSizes` and `Activations` name-value arguments.

## Examples

### Train Neural Network Regression Model

Train a neural network regression model, and assess the performance of the model on a test set.

Load the `carbig` data set, which contains measurements of cars made in the 1970s and early 1980s. Create a table containing the predictor variables `Acceleration`, `Displacement`, and so on, as well as the response variable `MPG`.

```
load carbig
cars = table(Acceleration,Displacement,Horsepower, ...
    Model_Year,Origin,Weight,MPG);
```

Remove rows of `cars` where the table has missing values.

```
cars = rmmissing(cars);
```

Categorize the cars based on whether they were made in the USA.

```
cars.Origin = categorical(cellstr(cars.Origin));
cars.Origin = mergecats(cars.Origin,["France","Japan", ...
    "Germany","Sweden","Italy","England"],"NotUSA");
```

Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use `cvpartition` to partition the data.

```
rng("default") % For reproducibility of the data partition
c = cvpartition(height(cars),"Holdout",0.20);
trainingIdx = training(c); % Training set indices
carsTrain = cars(trainingIdx,:);
testIdx = test(c); % Test set indices
carsTest = cars(testIdx,:);
```

Train a neural network regression model by passing the `carsTrain` training data to the `fitrnet` function. For better results, specify to standardize the predictor data.

```
Mdl = fitrnet(carsTrain,"MPG","Standardize",true)
```

```
Mdl = 
  RegressionNeuralNetwork
             PredictorNames: {1x6 cell}
               ResponseName: 'MPG'
      CategoricalPredictors: 5
          ResponseTransform: 'none'
            NumObservations: 314
                 LayerSizes: 10
                Activations: 'relu'
      OutputLayerActivation: 'none'
                     Solver: 'LBFGS'
            ConvergenceInfo: [1x1 struct]
            TrainingHistory: [708x7 table]

  Properties, Methods
```

`Mdl` is a trained `RegressionNeuralNetwork` model. You can use dot notation to access the properties of `Mdl`. For example, you can specify `Mdl.TrainingHistory` to get more information about the training history of the neural network model.
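For instance, a short sketch of inspecting a trained model through dot notation (property names as shown in the model display above):

```matlab
history = Mdl.TrainingHistory;  % table with one row per training iteration
head(history)                   % preview the first few recorded iterations
Mdl.ConvergenceInfo             % structure describing why training stopped
```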

Evaluate the performance of the regression model on the test set by computing the test mean squared error (MSE). Smaller MSE values indicate better performance.

```
testMSE = loss(Mdl,carsTest,"MPG")
```

```
testMSE = 7.1092
```

### Specify Neural Network Regression Model Architecture

Specify the structure of the neural network regression model, including the size of the fully connected layers.

Load the `carbig` data set, which contains measurements of cars made in the 1970s and early 1980s. Create a matrix `X` containing the predictor variables `Acceleration`, `Cylinders`, and so on. Store the response variable `MPG` in the variable `Y`.

```
load carbig
X = [Acceleration Cylinders Displacement Weight];
Y = MPG;
```

Delete rows of `X` and `Y` where either array has missing values.

```
R = rmmissing([X Y]);
X = R(:,1:end-1);
Y = R(:,end);
```

Partition the data into training data (`XTrain` and `YTrain`) and test data (`XTest` and `YTest`). Reserve approximately 20% of the observations for testing, and use the rest of the observations for training.

```
rng("default") % For reproducibility of the partition
c = cvpartition(length(Y),"Holdout",0.20);
trainingIdx = training(c); % Indices for the training set
XTrain = X(trainingIdx,:);
YTrain = Y(trainingIdx);
testIdx = test(c); % Indices for the test set
XTest = X(testIdx,:);
YTest = Y(testIdx);
```

Train a neural network regression model. Specify to standardize the predictor data, and to have 30 outputs in the first fully connected layer and 10 outputs in the second fully connected layer. By default, both layers use a rectified linear unit (ReLU) activation function. You can change the activation functions for the fully connected layers by using the `Activations` name-value argument.

```
Mdl = fitrnet(XTrain,YTrain,"Standardize",true, ...
    "LayerSizes",[30 10])
```

```
Mdl = 
  RegressionNeuralNetwork
               ResponseName: 'Y'
      CategoricalPredictors: []
          ResponseTransform: 'none'
            NumObservations: 319
                 LayerSizes: [30 10]
                Activations: 'relu'
      OutputLayerActivation: 'none'
                     Solver: 'LBFGS'
            ConvergenceInfo: [1x1 struct]
            TrainingHistory: [1000x7 table]

  Properties, Methods
```

Access the weights and biases for the fully connected layers of the trained model by using the `LayerWeights` and `LayerBiases` properties of `Mdl`. The first two elements of each property correspond to the values for the first two fully connected layers, and the third element corresponds to the values for the final fully connected layer for regression. For example, display the weights and biases for the first fully connected layer.

```
Mdl.LayerWeights{1}
```

```
ans = 30×4

    0.0122    0.0116   -0.0094    0.1174
   -0.4400   -1.5674   -0.1234   -2.2396
    0.3370    0.2628   -1.9752    0.2937
   -2.9872   -3.1024   -0.9050   -1.5978
    0.7721    2.2010    1.3134    0.2364
    0.1718    1.8862   -3.0548   -0.4272
    0.9583   -0.0591   -0.9272   -0.3960
    1.6701   -0.1617   -1.2640    0.7811
   -0.7890   -0.8045    0.2993    1.5391
    0.2053   -2.3423    1.7768    1.1690
      ⋮
```

```
Mdl.LayerBiases{1}
```

```
ans = 30×1

   -0.4448
   -1.0814
   -0.5026
   -0.9984
    0.2245
   -2.1709
    1.6112
    1.3802
   -1.2855
    0.1969
      ⋮
```

The final fully connected layer has one output. The number of layer outputs corresponds to the first dimension of the layer weights and layer biases.

```
size(Mdl.LayerWeights{end})
```

```
ans = 1×2

     1    10
```

```
size(Mdl.LayerBiases{end})
```

```
ans = 1×2

     1     1
```

To estimate the performance of the trained model, compute the test set mean squared error (MSE) for `Mdl`. Smaller MSE values indicate better performance.

```
testMSE = loss(Mdl,XTest,YTest)
```

```
testMSE = 16.8576
```

Compare the predicted test set response values to the true response values. Plot the predicted miles per gallon (MPG) along the vertical axis and the true MPG along the horizontal axis. Points on the reference line indicate correct predictions. A good model produces predictions that are scattered near the line.

```
testPredictions = predict(Mdl,XTest);
plot(YTest,testPredictions,".")
hold on
plot(YTest,YTest)
hold off
xlabel("True MPG")
ylabel("Predicted MPG")
```

### Stop Neural Network Training Early Using Validation Data

At each iteration of the training process, compute the validation loss of the neural network. Stop the training process early if the validation loss reaches a reasonable minimum.

Load the `patients` data set. Create a table from the data set. Each row corresponds to one patient, and each column corresponds to a diagnostic variable. Use the `Systolic` variable as the response variable, and the rest of the variables as predictors.

```
load patients
tbl = table(Age,Diastolic,Gender,Height,Smoker,Weight,Systolic);
```

Separate the data into a training set `tblTrain` and a validation set `tblValidation`. The software reserves approximately 30% of the observations for the validation data set and uses the rest of the observations for the training data set.

```
rng("default") % For reproducibility of the partition
c = cvpartition(size(tbl,1),"Holdout",0.30);
trainingIndices = training(c);
validationIndices = test(c);
tblTrain = tbl(trainingIndices,:);
tblValidation = tbl(validationIndices,:);
```

Train a neural network regression model by using the training set. Specify the `Systolic` column of `tblTrain` as the response variable. Evaluate the model at each iteration by using the validation set. Specify to display the training information at each iteration by using the `Verbose` name-value argument. By default, the training process ends early if the validation loss is greater than or equal to the minimum validation loss computed so far, six times in a row. To change the number of times the validation loss is allowed to be greater than or equal to the minimum, specify the `ValidationPatience` name-value argument.

```
Mdl = fitrnet(tblTrain,"Systolic", ...
    "ValidationData",tblValidation, ...
    "Verbose",1);
```

```
|==========================================================================================|
| Iteration | Train Loss | Gradient    | Step     | Iteration  | Validation | Validation   |
|           |            |             |          | Time (sec) | Loss       | Checks       |
|==========================================================================================|
|         1 | 516.021993 | 3220.880047 | 0.644473 |   0.017784 | 568.289202 |            0 |
|         2 | 313.056754 |  229.931405 | 0.067026 |   0.007759 | 304.023695 |            0 |
|         3 | 308.461807 |  277.166516 | 0.011122 |   0.003015 | 296.935608 |            0 |
|         4 | 262.492770 |  844.627934 | 0.143022 |   0.000535 | 240.559640 |            0 |
|         5 | 169.558740 | 1131.714363 | 0.336463 |   0.000508 | 152.531663 |            0 |
|         6 |  89.134368 |  362.084104 | 0.382677 |   0.001126 |  83.147478 |            0 |
|         7 |  83.309729 |  994.830303 | 0.199923 |   0.000536 |  76.634122 |            0 |
|         8 |  70.731524 |  327.637362 | 0.041366 |   0.000611 |  66.421750 |            0 |
|         9 |  66.650091 |  124.369963 | 0.125232 |   0.000547 |  65.914063 |            0 |
|        10 |  66.404753 |   36.699328 | 0.016768 |   0.000581 |  65.357335 |            0 |
|        11 |  66.357143 |   46.712988 | 0.009405 |   0.001784 |  65.306106 |            0 |
|        12 |  66.268225 |   54.079264 | 0.007953 |   0.000812 |  65.234391 |            0 |
|        13 |  65.788550 |   99.453225 | 0.030942 |   0.000606 |  64.869708 |            0 |
|        14 |  64.821095 |  186.344649 | 0.048078 |   0.000610 |  64.191533 |            0 |
|        15 |  62.353896 |  319.273873 | 0.107160 |   0.000614 |  62.618374 |            0 |
|        16 |  57.836593 |  447.826470 | 0.184985 |   0.000556 |  60.087065 |            0 |
|        17 |  51.188884 |  524.631067 | 0.253062 |   0.000562 |  56.646294 |            0 |
|        18 |  41.755601 |  189.072516 | 0.318515 |   0.000524 |  49.046823 |            0 |
|        19 |  37.539854 |   78.602559 | 0.382284 |   0.000529 |  44.633562 |            0 |
|        20 |  36.845322 |  151.837884 | 0.211286 |   0.000516 |  47.291367 |            1 |
|        21 |  36.218289 |   62.826818 | 0.142748 |   0.000533 |  46.139104 |            2 |
|        22 |  35.776921 |   53.606315 | 0.215188 |   0.000542 |  46.170460 |            3 |
|        23 |  35.729085 |   24.400342 | 0.060096 |   0.001820 |  45.318023 |            4 |
|        24 |  35.622031 |    9.602277 | 0.121153 |   0.000792 |  45.791861 |            5 |
|        25 |  35.573317 |   10.735070 | 0.126854 |   0.000576 |  46.062826 |            6 |
|==========================================================================================|
```
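For example, to allow the validation loss to exceed the running minimum more times before stopping, you could pass a larger patience value (a sketch using the `ValidationPatience` name-value argument described above; the value 10 is an arbitrary choice):

```matlab
% Stop only after the validation loss fails to improve 10 checks in a row.
MdlPatient = fitrnet(tblTrain,"Systolic", ...
    "ValidationData",tblValidation, ...
    "ValidationPatience",10);
```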

Create a plot that compares the training mean squared error (MSE) and the validation MSE at each iteration. By default, `fitrnet` stores the loss information inside the `TrainingHistory` property of the object `Mdl`. You can access this information by using dot notation.

```
iteration = Mdl.TrainingHistory.Iteration;
trainLosses = Mdl.TrainingHistory.TrainingLoss;
valLosses = Mdl.TrainingHistory.ValidationLoss;
plot(iteration,trainLosses,iteration,valLosses)
legend(["Training","Validation"])
xlabel("Iteration")
ylabel("Mean Squared Error")
```

Check the iteration that corresponds to the minimum validation MSE. The final returned model `Mdl` is the model trained at this iteration.

```
[~,minIdx] = min(valLosses);
iteration(minIdx)
```

```
ans = 19
```

### Find Good Regularization Strength for Neural Network Using Cross-Validation

Assess the cross-validation loss of neural network models with different regularization strengths, and choose the regularization strength corresponding to the best performing model.

Load the `carbig` data set, which contains measurements of cars made in the 1970s and early 1980s. Create a table containing the predictor variables `Acceleration`, `Displacement`, and so on, as well as the response variable `MPG`.

```
load carbig
cars = table(Acceleration,Displacement,Horsepower, ...
    Model_Year,Origin,Weight,MPG);
```

Delete rows of `cars` where the table has missing values.

```
cars = rmmissing(cars);
```

Categorize the cars based on whether they were made in the USA.

```
cars.Origin = categorical(cellstr(cars.Origin));
cars.Origin = mergecats(cars.Origin,["France","Japan", ...
    "Germany","Sweden","Italy","England"],"NotUSA");
```

Create a `cvpartition` object for 5-fold cross-validation. `cvp` partitions the data into five folds, where each fold has roughly the same number of observations. Set the random seed to the default value for reproducibility of the partition.

```
rng("default")
n = size(cars,1);
cvp = cvpartition(n,"KFold",5);
```

Compute the cross-validation mean squared error (MSE) for neural network regression models with different regularization strengths. Try regularization strengths on the order of 1/*n*, where *n* is the number of observations. Specify to standardize the data before training the neural network models.

```
1/n
```

```
ans = 0.0026
```

```
lambda = (0:0.5:5)*1e-3;
cvloss = zeros(length(lambda),1);
for i = 1:length(lambda)
    cvMdl = fitrnet(cars,"MPG","Lambda",lambda(i), ...
        "CVPartition",cvp,"Standardize",true);
    cvloss(i) = kfoldLoss(cvMdl);
end
```

Plot the results. Find the regularization strength corresponding to the lowest cross-validation MSE.

```
plot(lambda,cvloss)
xlabel("Regularization Strength")
ylabel("Cross-Validation Loss")
```

```
[~,idx] = min(cvloss);
bestLambda = lambda(idx)
```

```
bestLambda = 0.0045
```

Train a neural network regression model using the `bestLambda` regularization strength.

```
Mdl = fitrnet(cars,"MPG","Lambda",bestLambda, ...
    "Standardize",true)
```

```
Mdl = 
  RegressionNeuralNetwork
             PredictorNames: {1x6 cell}
               ResponseName: 'MPG'
      CategoricalPredictors: 5
          ResponseTransform: 'none'
            NumObservations: 392
                 LayerSizes: 10
                Activations: 'relu'
      OutputLayerActivation: 'none'
                     Solver: 'LBFGS'
            ConvergenceInfo: [1x1 struct]
            TrainingHistory: [761x7 table]

  Properties, Methods
```

### Minimize Cross-Validation Error in Neural Network

Create a neural network with low error by using the `OptimizeHyperparameters` argument. This argument causes `fitrnet` to minimize cross-validation loss over some problem hyperparameters by using Bayesian optimization.

Load the `carbig` data set, which contains measurements of cars made in the 1970s and early 1980s. Create a table containing the predictor variables `Acceleration`, `Displacement`, and so on, as well as the response variable `MPG`.

```
load carbig
cars = table(Acceleration,Displacement,Horsepower, ...
    Model_Year,Origin,Weight,MPG);
```

Delete rows of `cars` where the table has missing values.

```
cars = rmmissing(cars);
```

Categorize the cars based on whether they were made in the USA.

```
cars.Origin = categorical(cellstr(cars.Origin));
cars.Origin = mergecats(cars.Origin,["France","Japan", ...
    "Germany","Sweden","Italy","England"],"NotUSA");
```

Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use `cvpartition` to partition the data.

```
rng("default") % For reproducibility of the data partition
c = cvpartition(height(cars),"Holdout",0.20);
trainingIdx = training(c); % Training set indices
carsTrain = cars(trainingIdx,:);
testIdx = test(c); % Test set indices
carsTest = cars(testIdx,:);
```

Train a regression neural network using the `OptimizeHyperparameters` argument set to `"auto"`. For reproducibility, set the `AcquisitionFunctionName` to `"expected-improvement-plus"` in a `HyperparameterOptimizationOptions` structure. `fitrnet` performs Bayesian optimization by default. To use grid search or random search, set the `Optimizer` field in `HyperparameterOptimizationOptions`.

```
rng("default") % For reproducibility
Mdl = fitrnet(carsTrain,"MPG","OptimizeHyperparameters","auto", ...
    "HyperparameterOptimizationOptions", ...
    struct("AcquisitionFunctionName","expected-improvement-plus"))
```

```
|============================================================================================================================================|
| Iter | Eval   | Objective:  | Objective | BestSoFar  | BestSoFar | Activations | Standardize | Lambda     | LayerSizes      |
|      | result | log(1+loss) | runtime   | (observed) | (estim.)  |             |             |            |                 |
|============================================================================================================================================|
|    1 | Best   |       2.223 |    7.9465 |      2.223 |     2.223 |        relu |        true |      3.841 | [101  47  15]   |
|    2 | Accept |      3.0455 |    6.4993 |      2.223 |    2.2557 |     sigmoid |       false | 7.5401e-07 | [100  17]       |
|    3 | Best   |      2.0961 |    2.4177 |     2.0961 |    2.1112 |        relu |        true |    0.01569 | 15              |
|    4 | Accept |      2.5142 |    3.7521 |     2.0961 |    2.1127 |        none |        true | 0.00016461 | [  2 145   8]   |
|    5 | Accept |      3.0292 |   0.61682 |     2.0961 |    2.0961 |        relu |        true | 5.4264e-08 | 1               |
|    6 | Accept |      3.1026 |   0.91408 |     2.0961 |    2.1494 |        relu |        true |     0.1155 | [  4   1]       |
|    7 | Accept |        2.22 |    2.4772 |     2.0961 |    2.0971 |        relu |        true |   0.010391 | 17              |
|    8 | Best   |      2.0925 |    2.7919 |     2.0925 |    2.0993 |        relu |        true |   0.046371 | 18              |
|    9 | Accept |      2.2307 |    1.6333 |     2.0925 |    2.1656 |        relu |        true |    0.97415 | 17              |
|   10 | Accept |      2.2964 |    2.2352 |     2.0925 |    2.1672 |        relu |        true | 4.2374e-08 | 10              |
|   11 | Accept |      2.8992 |    2.6264 |     2.0925 |    2.1694 |        relu |        true | 5.1161e-08 | 44              |
|   12 | Accept |       3.275 |    6.4128 |     2.0925 |    2.1694 |        relu |        true | 3.5229e-06 | [149  16  16]   |
|   13 | Accept |      3.2788 |    8.5362 |     2.0925 |    2.1089 |        relu |        true | 0.00059803 | [104  44   3]   |
|   14 | Accept |      2.0983 |     2.003 |     2.0925 |    2.0967 |        relu |        true |   0.082165 | 11              |
|   15 | Accept |      6.4083 |   0.13663 |     2.0925 |    2.1519 |        relu |        true |     228.14 | [ 88   1   2]   |
|   16 | Accept |      2.2574 |    8.3444 |     2.0925 |    2.1518 |        relu |        true |     5.1643 | [ 64 133  45]   |
|   17 | Best   |      2.0755 |    18.847 |     2.0755 |    2.0979 |        relu |        true |    0.38848 | [263  79  62]   |
|   18 | Accept |      2.0918 |     13.53 |     2.0755 |    2.0954 |        relu |        true |    0.25108 | [ 63  41 225]   |
|   19 | Accept |      2.5142 |   0.17006 |     2.0755 |    2.0954 |        none |        true | 4.0253e-07 | [  6  14   5]   |
|   20 | Accept |      2.5142 |    3.9872 |     2.0755 |    2.0928 |        none |        true | 1.4175e-06 | [ 49  71  49]   |
|   21 | Accept |      2.5141 |    9.7279 |     2.0755 |    2.0919 |        none |        true | 1.6685e-05 | [  1  26 262]   |
|   22 | Accept |      6.4076 |   0.14882 |     2.0755 |    2.0954 |        none |        true |     217.93 | [ 84   5 219]   |
|   23 | Accept |      2.5138 |   0.47751 |     2.0755 |    2.0961 |        none |        true |    0.96622 | [  2  39   4]   |
|   24 | Accept |      2.5142 |   0.87751 |     2.0755 |     2.094 |        none |        true | 9.0804e-07 | [  3 175 248]   |
|   25 | Accept |      2.5142 |    0.5026 |     2.0755 |    2.1354 |        none |        true | 5.0142e-08 | [ 56 191   2]   |
|   26 | Accept |      2.5142 |   0.33674 |     2.0755 |    2.0926 |        none |        true | 4.9375e-08 | [  5  55  24]   |
|   27 | Accept |      2.5133 |    2.6878 |     2.0755 |    2.0913 |        none |        true |    0.67351 | [ 22 290  40]   |
|   28 | Accept |      6.4103 |   0.25932 |     2.0755 |    2.1512 |        relu |        true |     261.52 | [  1  49 138]   |
|   29 | Accept |      2.5187 |    1.8487 |     2.0755 |    2.1511 |        none |        true | 3.6616e-07 | [  1   2   5]   |
|   30 | Accept |      3.3174 |    4.1796 |     2.0755 |    2.1509 |     sigmoid |       false |    0.38429 | [  2 109]       |
|============================================================================================================================================|
```

```
__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 30 reached.
Total function evaluations: 30
Total elapsed time: 135.0068 seconds
Total objective function evaluation time: 116.9241

Best observed feasible point:
    Activations    Standardize    Lambda     LayerSizes
    ___________    ___________    _______    _________________
       relu           true        0.38848      263  79  62

Observed objective function value = 2.0755
Estimated objective function value = 2.0714
Function evaluation time = 18.8465

Best estimated feasible point (according to models):
    Activations    Standardize    Lambda     LayerSizes
    ___________    ___________    _______    __________
       relu           true        0.01569        15

Estimated objective function value = 2.1509
Estimated function evaluation time = 2.2528
```

```
Mdl = 
  RegressionNeuralNetwork
                       PredictorNames: {'Acceleration'  'Displacement'  'Horsepower'  'Model_Year'  'Origin'  'Weight'}
                         ResponseName: 'MPG'
                CategoricalPredictors: 5
                    ResponseTransform: 'none'
                      NumObservations: 314
    HyperparameterOptimizationResults: [1×1 BayesianOptimization]
                           LayerSizes: 15
                          Activations: 'relu'
                OutputLayerActivation: 'none'
                               Solver: 'LBFGS'
                      ConvergenceInfo: [1×1 struct]
                      TrainingHistory: [1000×7 table]

  Properties, Methods
```

Find the mean squared error of the resulting model on the test data set.

```
testMSE = loss(Mdl,carsTest,"MPG")
```

```
testMSE = 8.2362
```

### Custom Hyperparameter Optimization in Neural Network

Create a neural network with low error by using the `OptimizeHyperparameters` argument. This argument causes `fitrnet` to search for hyperparameters that give a model with low cross-validation error. Use the `hyperparameters` function to specify larger-than-default values for the number of layers used and the layer size range.

Load the `carbig` data set, which contains measurements of cars made in the 1970s and early 1980s. Create a table containing the predictor variables `Acceleration`, `Displacement`, and so on, as well as the response variable `MPG`.

```
load carbig
cars = table(Acceleration,Displacement,Horsepower, ...
    Model_Year,Origin,Weight,MPG);
```

Delete rows of `cars` where the table has missing values.

```
cars = rmmissing(cars);
```

Categorize the cars based on whether they were made in the USA.

```
cars.Origin = categorical(cellstr(cars.Origin));
cars.Origin = mergecats(cars.Origin,["France","Japan", ...
    "Germany","Sweden","Italy","England"],"NotUSA");
```

Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use `cvpartition` to partition the data.

```
rng("default") % For reproducibility of the data partition
c = cvpartition(height(cars),"Holdout",0.20);
trainingIdx = training(c); % Training set indices
carsTrain = cars(trainingIdx,:);
testIdx = test(c); % Test set indices
carsTest = cars(testIdx,:);
```

List the hyperparameters available for this problem of fitting the `MPG` response.

```
params = hyperparameters("fitrnet",carsTrain,"MPG");
for ii = 1:length(params)
    disp(ii);disp(params(ii))
end
```

```
     1
  optimizableVariable with properties:
         Name: 'NumLayers'
        Range: [1 3]
         Type: 'integer'
    Transform: 'none'
     Optimize: 1

     2
  optimizableVariable with properties:
         Name: 'Activations'
        Range: {'relu'  'tanh'  'sigmoid'  'none'}
         Type: 'categorical'
    Transform: 'none'
     Optimize: 1

     3
  optimizableVariable with properties:
         Name: 'Standardize'
        Range: {'true'  'false'}
         Type: 'categorical'
    Transform: 'none'
     Optimize: 1

     4
  optimizableVariable with properties:
         Name: 'Lambda'
        Range: [3.1847e-08 318.4713]
         Type: 'real'
    Transform: 'log'
     Optimize: 1

     5
  optimizableVariable with properties:
         Name: 'LayerWeightsInitializer'
        Range: {'glorot'  'he'}
         Type: 'categorical'
    Transform: 'none'
     Optimize: 0

     6
  optimizableVariable with properties:
         Name: 'LayerBiasesInitializer'
        Range: {'zeros'  'ones'}
         Type: 'categorical'
    Transform: 'none'
     Optimize: 0

     7
  optimizableVariable with properties:
         Name: 'Layer_1_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 1

     8
  optimizableVariable with properties:
         Name: 'Layer_2_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 1

     9
  optimizableVariable with properties:
         Name: 'Layer_3_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 1

    10
  optimizableVariable with properties:
         Name: 'Layer_4_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 0

    11
  optimizableVariable with properties:
         Name: 'Layer_5_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 0
```

To try more layers than the default of 1 through 3, set the range of `NumLayers` (optimizable variable 1) to its maximum allowable size, `[1 5]`. Also, set `Layer_4_Size` and `Layer_5_Size` (optimizable variables 10 and 11, respectively) to be optimized.

```
params(1).Range = [1 5];
params(10).Optimize = true;
params(11).Optimize = true;
```

Set the range of all layer sizes (optimizable variables 7 through 11) to `[1 400]` instead of the default `[1 300]`.

```
for ii = 7:11
    params(ii).Range = [1 400];
end
```

Train a regression neural network using the `OptimizeHyperparameters` argument set to `params`. For reproducibility, set the `AcquisitionFunctionName` to `"expected-improvement-plus"` in a `HyperparameterOptimizationOptions` structure. To attempt to get a better solution, set the number of optimization steps to 60 instead of the default 30.

rng("default") % For reproducibility Mdl = fitrnet(carsTrain,"MPG","OptimizeHyperparameters",params, ... "HyperparameterOptimizationOptions", ... struct("AcquisitionFunctionName","expected-improvement-plus", ... "MaxObjectiveEvaluations",60))

|============================================================================================================================================| | Iter | Eval | Objective: | Objective | BestSoFar | BestSoFar | Activations | Standardize | Lambda | LayerSizes | | | result | log(1+loss) | runtime | (observed) | (estim.) | | | | | |============================================================================================================================================| | 1 | Best | 4.9294 | 0.37573 | 4.9294 | 4.9294 | sigmoid | false | 70.242 | [ 3 22 223] | | 2 | Best | 2.2088 | 3.236 | 2.2088 | 2.317 | relu | true | 0.089397 | [ 2 95] | | 3 | Accept | 2.8283 | 24.682 | 2.2088 | 2.293 | sigmoid | false | 2.5899e-07 | [303 60 59] | | 4 | Accept | 3.5251 | 3.0859 | 2.2088 | 2.2879 | relu | false | 5.1748e-05 | [102 5 15 1] | | 5 | Accept | 2.2299 | 2.5145 | 2.2088 | 2.215 | relu | true | 0.095678 | [ 2 68] | | 6 | Accept | 2.2385 | 3.1034 | 2.2088 | 2.2141 | relu | true | 0.0011241 | [ 2 70] | | 7 | Best | 2.2064 | 5.1877 | 2.2064 | 2.2061 | relu | true | 0.0024416 | [ 2 142 3] | | 8 | Best | 2.1881 | 12.327 | 2.1881 | 2.1866 | relu | true | 0.12839 | [ 2 391 13 9 5] | | 9 | Accept | 2.5199 | 35.72 | 2.1881 | 2.1864 | sigmoid | false | 0.075565 | [359 37 180 237] | | 10 | Accept | 2.2575 | 1.9543 | 2.1881 | 2.1878 | relu | true | 4.6653 | [ 3 379 15] | | 11 | Accept | 2.318 | 24.503 | 2.1881 | 2.1878 | relu | true | 8.2075 | [395 319 2] | | 12 | Best | 2.1367 | 5.7994 | 2.1367 | 2.1367 | tanh | true | 0.26306 | [ 7 387] | | 13 | Best | 2.1278 | 32.184 | 2.1278 | 2.1278 | tanh | true | 0.11523 | [188 384] | | 14 | Accept | 3.6718 | 2.192 | 2.1278 | 2.128 | tanh | true | 6.9356e-08 | [ 36 1 8 7 2] | | 15 | Accept | 3.8085 | 3.8516 | 2.1278 | 2.128 | relu | true | 3.7844e-08 | [ 6 36 125 9 3] | | 16 | Accept | 3.9831 | 0.70135 | 2.1278 | 2.1284 | tanh | true | 4.4955e-08 | [ 1 1 2] | | 17 | Accept | 2.4223 | 7.5687 | 2.1278 | 2.1283 | tanh | true | 0.32753 | [ 1 304 1 14] 
| | 18 | Accept | 2.5724 | 46.844 | 2.1278 | 2.1283 | sigmoid | false | 1.279e-05 | [163 18 153 397 54] | | 19 | Accept | 2.4896 | 0.44247 | 2.1278 | 2.1283 | tanh | true | 0.17448 | 4 | | 20 | Accept | 6.3945 | 0.47977 | 2.1278 | 2.1301 | relu | true | 120.84 | [ 31 290 2 353 6] | |============================================================================================================================================| | Iter | Eval | Objective: | Objective | BestSoFar | BestSoFar | Activations | Standardize | Lambda | LayerSizes | | | result | log(1+loss) | runtime | (observed) | (estim.) | | | | | |============================================================================================================================================| | 21 | Accept | 2.3364 | 8.1685 | 2.1278 | 2.1299 | tanh | true | 7.6591e-07 | [ 2 106 18 14] | | 22 | Accept | 2.6024 | 0.6812 | 2.1278 | 2.1298 | relu | true | 25.566 | [ 21 4 43 9] | | 23 | Accept | 2.5118 | 11.301 | 2.1278 | 2.1295 | relu | true | 0.035722 | [ 23 3 252 9 139] | | 24 | Accept | 3.1632 | 16.476 | 2.1278 | 2.1294 | relu | true | 3.5669e-08 | [ 51 384 3] | | 25 | Accept | 3.7306 | 9.7752 | 2.1278 | 2.1294 | tanh | false | 0.00033652 | [ 2 342 22 14] | | 26 | Accept | 2.5837 | 103.6 | 2.1278 | 2.1292 | tanh | true | 3.4317e-06 | [321 398 1 48] | | 27 | Accept | 3.0652 | 11.356 | 2.1278 | 2.1288 | tanh | true | 3.3367 | [397 2 4 48] | | 28 | Accept | 2.164 | 7.5118 | 2.1278 | 2.1287 | relu | true | 2.1131 | [ 6 377 41 3] | | 29 | Accept | 2.9281 | 24.125 | 2.1278 | 2.1295 | tanh | true | 0.0012995 | [ 49 378 2 5 34] | | 30 | Accept | 3.0625 | 41.892 | 2.1278 | 2.1294 | sigmoid | false | 1.1774e-07 | [383 98 11 62] | | 31 | Accept | 2.2319 | 16.882 | 2.1278 | 2.1296 | tanh | true | 4.8403e-05 | [ 2 370 2 21] | | 32 | Accept | 3.1019 | 18.47 | 2.1278 | 2.1289 | relu | true | 0.0078827 | [ 45 236 43 2 32] | | 33 | Accept | 2.3527 | 28.175 | 2.1278 | 2.1289 | tanh | true | 0.13475 | [ 78 398 33 3] | | 34 | Accept | 
```
       5.0888 | 0.67564 | 2.1278 | 2.1286 | sigmoid | false | 68.173 | [ 2 2 241 277 86] |
| 35 | Accept | 4.1318 | 0.24724 | 2.1278 | 2.1278 | sigmoid | false | 2.0176e-05 | [ 4 322 24 389] |
| 36 | Accept | 6.4115 | 0.16916 | 2.1278 | 2.128 | tanh | true | 287.01 | [ 37 96] |
| 37 | Accept | 2.3624 | 5.6031 | 2.1278 | 2.1281 | relu | true | 0.011705 | [ 5 4 64 22 3] |
| 38 | Accept | 2.8284 | 33.494 | 2.1278 | 2.1281 | tanh | true | 0.00087091 | [181 372] |
| 39 | Accept | 2.6526 | 17.28 | 2.1278 | 2.1282 | tanh | true | 8.183e-06 | [ 65 166 2 5] |
| 40 | Accept | 2.1757 | 15.176 | 2.1278 | 2.1283 | relu | true | 0.1285 | [ 5 317 19 103 2] |
|============================================================================================================================================|
| Iter | Eval   | Objective:  | Objective | BestSoFar  | BestSoFar | Activations | Standardize | Lambda | LayerSizes |
|      | result | log(1+loss) | runtime   | (observed) | (estim.)  |             |             |        |            |
|============================================================================================================================================|
| 41 | Accept | 2.8841 | 153.56 | 2.1278 | 2.1282 | tanh | true | 6.6894e-05 | [307 397 133 371 5] |
| 42 | Accept | 2.3161 | 47.177 | 2.1278 | 2.1282 | tanh | true | 0.087382 | [207 371 79 5 7] |
| 43 | Accept | 4.1315 | 0.47072 | 2.1278 | 2.1282 | sigmoid | false | 3.6353e-07 | [ 7 3 2 29 41] |
| 44 | Accept | 3.7069 | 17.063 | 2.1278 | 2.128 | sigmoid | false | 0.00026719 | [ 3 139 5 166 130] |
| 45 | Accept | 2.287 | 1.2637 | 2.1278 | 2.1283 | relu | true | 4.0693 | [ 90 5] |
| 46 | Accept | 3.7154 | 11.965 | 2.1278 | 2.1283 | relu | true | 2.5591e-05 | [ 16 325 126 2] |
| 47 | Accept | 2.1667 | 2.2913 | 2.1278 | 2.1283 | relu | true | 0.011687 | 13 |
| 48 | Accept | 2.1653 | 31.795 | 2.1278 | 2.1283 | tanh | true | 0.24716 | [ 40 319 133 24 58] |
| 49 | Accept | 2.6663 | 38.88 | 2.1278 | 2.1283 | tanh | true | 0.064217 | [191 323 10 61 235] |
| 50 | Accept | 2.6362 | 117.99 | 2.1278 | 2.1283 | tanh | true | 0.049467 | [236 390 170 44 34] |
| 51 | Accept | 2.4353 | 31.025 | 2.1278 | 2.1283 | tanh | true | 2.6302 | [ 6 367 24 319 164] |
| 52 | Accept | 2.4374 | 97.848 | 2.1278 | 2.1283 | tanh | true | 0.57638 | [ 12 383 327 4 16] |
| 53 | Accept | 3.2495 | 26.383 | 2.1278 | 2.1283 | relu | true | 0.021542 | [195 390 3 300 1] |
| 54 | Accept | 2.2173 | 1.597 | 2.1278 | 2.1283 | relu | true | 0.0024904 | 3 |
| 55 | Best   | 2.0862 | 14.871 | 2.0862 | 2.0862 | relu | true | 0.44677 | [ 50 299 3] |
| 56 | Accept | 2.225 | 15.368 | 2.0862 | 2.0866 | tanh | true | 0.2768 | [391 26] |
| 57 | Best   | 2.0835 | 33.089 | 2.0835 | 2.0839 | relu | true | 0.34249 | [139 75 354 148] |
| 58 | Best   | 2.0617 | 9.1899 | 2.0617 | 2.0625 | relu | true | 0.30028 | [ 32 156 7 21] |
| 59 | Accept | 3.3891 | 22.179 | 2.0617 | 2.0618 | relu | true | 0.018025 | [143 122 339 1 3] |
| 60 | Accept | 2.4109 | 47.063 | 2.0617 | 2.0617 | tanh | true | 3.4447e-06 | [ 1 334 262 96 14] |
```

```
__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 60 reached.
Total function evaluations: 60
Total elapsed time: 1350.397 seconds
Total objective function evaluation time: 1308.8923

Best observed feasible point:
    Activations    Standardize    Lambda     LayerSizes
    ___________    ___________    _______    _______________________
       relu           true        0.30028    32    156    7    21

Observed objective function value = 2.0617
Estimated objective function value = 2.0617
Function evaluation time = 9.1899

Best estimated feasible point (according to models):
    Activations    Standardize    Lambda     LayerSizes
    ___________    ___________    _______    _______________________
       relu           true        0.30028    32    156    7    21

Estimated objective function value = 2.0617
Estimated function evaluation time = 10.7649
```

```
Mdl = 
  RegressionNeuralNetwork
                       PredictorNames: {'Acceleration'  'Displacement'  'Horsepower'  'Model_Year'  'Origin'  'Weight'}
                         ResponseName: 'MPG'
                CategoricalPredictors: 5
                    ResponseTransform: 'none'
                      NumObservations: 314
    HyperparameterOptimizationResults: [1×1 BayesianOptimization]
                           LayerSizes: [32 156 7 21]
                          Activations: 'relu'
                OutputLayerActivation: 'none'
                               Solver: 'LBFGS'
                      ConvergenceInfo: [1×1 struct]
                      TrainingHistory: [1000×7 table]

  Properties, Methods
```

Find the mean squared error of the resulting model on the test data set.

`testMSE = loss(Mdl,carsTest,"MPG")`

testMSE = 7.0740

## Input Arguments

`Tbl` — Sample data

table

Sample data used to train the model, specified as a table. Each row of `Tbl` corresponds to one observation, and each column corresponds to one predictor variable. Optionally, `Tbl` can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

- If `Tbl` contains the response variable, and you want to use all remaining variables in `Tbl` as predictors, then specify the response variable by using `ResponseVarName`.
- If `Tbl` contains the response variable, and you want to use only a subset of the remaining variables in `Tbl` as predictors, then specify a formula by using `formula`.
- If `Tbl` does not contain the response variable, then specify a response variable by using `Y`. The length of the response variable and the number of rows in `Tbl` must be equal.
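The three cases above can be sketched as follows, using a hypothetical table `tbl` with predictor variables `x1` and `x2`, a response variable `Y`, and a separate numeric response vector `y` (all names are illustrative):

```
% Case 1: ResponseVarName, so every other variable in tbl is a predictor
Mdl1 = fitrnet(tbl,"Y");

% Case 2: formula, so only x1 and x2 are used as predictors
Mdl2 = fitrnet(tbl,"Y ~ x1 + x2");

% Case 3: separate response vector, so tbl contains predictors only
Mdl3 = fitrnet(tbl,y);
```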

`ResponseVarName` — Response variable name

name of variable in `Tbl`

Response variable name, specified as the name of a variable in `Tbl`. The response variable must be a numeric vector.

You must specify `ResponseVarName` as a character vector or string scalar. For example, if `Tbl` stores the response variable `Y` as `Tbl.Y`, then specify it as `'Y'`. Otherwise, the software treats all columns of `Tbl`, including `Y`, as predictors when training the model.

**Data Types: **`char` | `string`

`formula` — Explanatory model of response variable and subset of predictor variables

character vector | string scalar

Explanatory model of the response variable and a subset of the predictor variables, specified as a character vector or string scalar in the form `"Y~x1+x2+x3"`. In this form, `Y` represents the response variable, and `x1`, `x2`, and `x3` represent the predictor variables.

To specify a subset of variables in `Tbl` as predictors for training the model, use a formula. If you specify a formula, then the software does not use any variables in `Tbl` that do not appear in `formula`.

The variable names in the formula must be both variable names in `Tbl` (`Tbl.Properties.VariableNames`) and valid MATLAB® identifiers. You can verify the variable names in `Tbl` by using the `isvarname` function. If the variable names are not valid, then you can convert them by using the `matlab.lang.makeValidName` function.

**Data Types: **`char` | `string`
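The name conversion mentioned above can be sketched like this for a hypothetical table `tbl` whose variable names contain spaces or other invalid characters:

```
% Convert invalid table variable names into valid identifiers
% before writing a formula against them
tbl.Properties.VariableNames = matlab.lang.makeValidName( ...
    tbl.Properties.VariableNames);
```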

`X` — Predictor data

numeric matrix

Predictor data used to train the model, specified as a numeric matrix.

By default, the software treats each row of `X` as one observation, and each column as one predictor.

The length of `Y` and the number of observations in `X` must be equal.

To specify the names of the predictors in the order of their appearance in `X`, use the `PredictorNames` name-value argument.

**Note**

If you orient your predictor matrix so that observations correspond to columns and specify `'ObservationsIn','columns'`, then you might experience a significant reduction in computation time.

**Data Types: **`single` | `double`

**Note**

The software treats `NaN`, empty character vector (`''`), empty string (`""`), `<missing>`, and `<undefined>` elements as missing values, and removes observations with any of these characteristics:

- Missing value in the response (for example, `Y` or `ValidationData{2}`)
- At least one missing value in a predictor observation (for example, row in `X` or `ValidationData{1}`)
- `NaN` value or `0` weight (for example, value in `Weights` or `ValidationData{3}`)

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

*Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.*

**Example: **`fitrnet(X,Y,'LayerSizes',[10 10],'Activations',["relu","tanh"])` specifies to create a neural network with two fully connected layers, each with 10 outputs. The first layer uses a rectified linear unit (ReLU) activation function, and the second uses a hyperbolic tangent activation function.

**Neural Network Options**

`LayerSizes` — Sizes of fully connected layers

`10` (default) | positive integer vector

Sizes of the fully connected layers in the neural network model, specified as a positive integer vector. The *i*th element of `LayerSizes` is the number of outputs in the *i*th fully connected layer of the neural network model.

`LayerSizes` does not include the size of the final fully connected layer. For more information, see Neural Network Structure.

**Example: **`'LayerSizes',[100 25 10]`

`Activations` — Activation functions for fully connected layers

`'relu'` (default) | `'tanh'` | `'sigmoid'` | `'none'` | string array | cell array of character vectors

Activation functions for the fully connected layers of the neural network model, specified as a character vector, string scalar, string array, or cell array of character vectors with values from this table.

| Value | Description |
|---|---|
| `'relu'` | Rectified linear unit (ReLU) function — Performs a threshold operation on each element of the input, where any value less than zero is set to zero, that is, $$f(x)=\begin{cases}x, & x\ge 0\\ 0, & x<0\end{cases}$$ |
| `'tanh'` | Hyperbolic tangent (tanh) function — Applies the tanh function to each input element |
| `'sigmoid'` | Sigmoid function — Performs the following operation on each input element: $$f(x)=\frac{1}{1+e^{-x}}$$ |
| `'none'` | Identity function — Returns each input element without performing any transformation, that is, $$f(x)=x$$ |

- If you specify one activation function only, then `Activations` is the activation function for every fully connected layer of the neural network model, excluding the final fully connected layer (see Neural Network Structure).
- If you specify an array of activation functions, then the *i*th element of `Activations` is the activation function for the *i*th layer of the neural network model.

**Example: **`'Activations','sigmoid'`
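For reference, the four choices correspond to these elementwise maps, sketched here as anonymous functions (not `fitrnet` internals):

```
reluFcn    = @(x) max(x,0);           % 'relu': threshold at zero
tanhFcn    = @(x) tanh(x);            % 'tanh': hyperbolic tangent
sigmoidFcn = @(x) 1./(1 + exp(-x));   % 'sigmoid': logistic function
noneFcn    = @(x) x;                  % 'none': identity
```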

`LayerWeightsInitializer` — Function to initialize fully connected layer weights

`'glorot'` (default) | `'he'`

Function to initialize the fully connected layer weights, specified as `'glorot'` or `'he'`.

| Value | Description |
|---|---|
| `'glorot'` | Initialize the weights with the Glorot initializer [1] (also known as the Xavier initializer). For each layer, the Glorot initializer independently samples from a uniform distribution with zero mean and variance `2/(I+O)`, where `I` is the input size and `O` is the output size for the layer. |
| `'he'` | Initialize the weights with the He initializer [2]. For each layer, the He initializer samples from a normal distribution with zero mean and variance `2/I`, where `I` is the input size for the layer. |

**Example: **`'LayerWeightsInitializer','he'`
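The two sampling schemes can be sketched as follows for a layer with input size `I` and output size `O` (the variable names are illustrative, not `fitrnet` internals):

```
I = 64; O = 32;                  % example layer dimensions
r = sqrt(6/(I + O));             % uniform on [-r,r] has variance 2/(I+O)
Wglorot = -r + 2*r*rand(O,I);    % Glorot (Xavier) initialization
Whe = sqrt(2/I)*randn(O,I);      % He initialization: N(0, 2/I)
```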

`LayerBiasesInitializer` — Type of initial fully connected layer biases

`'zeros'` (default) | `'ones'`

Type of initial fully connected layer biases, specified as `'zeros'` or `'ones'`.

- If you specify the value `'zeros'`, then each fully connected layer has an initial bias of 0.
- If you specify the value `'ones'`, then each fully connected layer has an initial bias of 1.

**Example: **`'LayerBiasesInitializer','ones'`

**Data Types: **`char` | `string`

`ObservationsIn` — Predictor data observation dimension

`'rows'` (default) | `'columns'`

Predictor data observation dimension, specified as `'rows'` or `'columns'`.

**Note**

If you orient your predictor matrix so that observations correspond to columns and specify `'ObservationsIn','columns'`, then you might experience a significant reduction in computation time. You cannot specify `'ObservationsIn','columns'` for predictor data in a table.

**Example: **`'ObservationsIn','columns'`

**Data Types: **`char` | `string`

`Lambda` — Regularization term strength

`0` (default) | nonnegative scalar

Regularization term strength, specified as a nonnegative scalar. The software composes the objective function for minimization from the mean squared error (MSE) loss function and the ridge (L2) penalty term.

**Example: **`'Lambda',1e-4`

**Data Types: **`single` | `double`
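Conceptually, the minimized objective has this form (a sketch only; the exact scaling inside `fitrnet` is not spelled out here):

```
% Regularized objective: MSE loss plus ridge (L2) penalty on the weights,
% scaled by Lambda
objective = @(mse,w,lambda) mse + lambda*sum(w(:).^2);
```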

`Standardize` — Flag to standardize predictor data

`false` or `0` (default) | `true` or `1`

Flag to standardize the predictor data, specified as a numeric or logical `0` (`false`) or `1` (`true`). If you set `Standardize` to `true`, then the software centers and scales each numeric predictor variable by the corresponding column mean and standard deviation. The software does not standardize the categorical predictors.

**Example: **`'Standardize',true`

**Data Types: **`single` | `double` | `logical`
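For a numeric predictor matrix `X` with observations in rows, the equivalent preprocessing is:

```
mu = mean(X,1);            % column means
sigma = std(X,0,1);        % column standard deviations
Xstd = (X - mu)./sigma;    % centered and scaled predictors
```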

**Convergence Control Options**

`Verbose` — Verbosity level

`0` (default) | `1`

Verbosity level, specified as `0` or `1`. The `'Verbose'` name-value argument controls the amount of diagnostic information that `fitrnet` displays at the command line.

| Value | Description |
|---|---|
| `0` | `fitrnet` does not display diagnostic information. |
| `1` | `fitrnet` periodically displays diagnostic information. |

By default, `StoreHistory` is set to `true` and `fitrnet` stores the diagnostic information inside of `Mdl`. Use `Mdl.TrainingHistory` to access the diagnostic information.

**Example: **`'Verbose',1`

**Data Types: **`single` | `double`

`VerboseFrequency` — Frequency of verbose printing

`1` (default) | positive integer scalar

Frequency of verbose printing, which is the number of iterations between printing to the command window, specified as a positive integer scalar. A value of 1 indicates to print diagnostic information at every iteration.

**Note**

To use this name-value argument, set `Verbose` to `1`.

**Example: **`'VerboseFrequency',5`

**Data Types: **`single` | `double`

`StoreHistory` — Flag to store training history

`true` or `1` (default) | `false` or `0`

Flag to store the training history, specified as a numeric or logical `0` (`false`) or `1` (`true`). If `StoreHistory` is set to `true`, then the software stores diagnostic information inside of `Mdl`, which you can access by using `Mdl.TrainingHistory`.

**Example: **`'StoreHistory',false`

**Data Types: **`single` | `double` | `logical`

`InitialStepSize` — Initial step size

`[]` (default) | positive scalar | `'auto'`

Initial step size, specified as a positive scalar or `'auto'`. By default, `fitrnet` does not use the initial step size to determine the initial Hessian approximation used in training the model (see Training Solver). However, if you specify an initial step size $${\Vert {s}_{0}\Vert}_{\infty}$$, then the initial inverse-Hessian approximation is $$\frac{{\Vert {s}_{0}\Vert}_{\infty}}{{\Vert \nabla {\mathcal{L}}_{0}\Vert}_{\infty}}I$$. $$\nabla {\mathcal{L}}_{0}$$ is the initial gradient vector, and $$I$$ is the identity matrix.

To have `fitrnet` determine an initial step size automatically, specify the value as `'auto'`. In this case, the function determines the initial step size by using $${\Vert {s}_{0}\Vert}_{\infty}=0.5{\Vert {\eta}_{0}\Vert}_{\infty}+0.1$$. $${s}_{0}$$ is the initial step vector, and $${\eta}_{0}$$ is the vector of unconstrained initial weights and biases.

**Example: **`'InitialStepSize','auto'`

**Data Types: **`single` | `double` | `char` | `string`

`IterationLimit` — Maximum number of training iterations

`1e3` (default) | positive integer scalar

Maximum number of training iterations, specified as a positive integer scalar.

The software returns a trained model regardless of whether the training routine successfully converges. `Mdl.ConvergenceInfo` contains convergence information.

**Example: **`'IterationLimit',1e8`

**Data Types: **`single` | `double`

`GradientTolerance` — Relative gradient tolerance

`1e-6` (default) | nonnegative scalar

Relative gradient tolerance, specified as a nonnegative scalar.

Let $${\mathcal{L}}_{t}$$ be the loss function at training iteration *t*, $$\nabla {\mathcal{L}}_{t}$$ be the gradient of the loss function with respect to the weights and biases at iteration *t*, and $$\nabla {\mathcal{L}}_{0}$$ be the gradient of the loss function at an initial point. If $$\mathrm{max}\left|\nabla {\mathcal{L}}_{t}\right|\le a\cdot \text{GradientTolerance}$$, where $$a=\mathrm{max}\left(1,\mathrm{min}\left|{\mathcal{L}}_{t}\right|,\mathrm{max}\left|\nabla {\mathcal{L}}_{0}\right|\right)$$, then the training process terminates.

**Example: **`'GradientTolerance',1e-5`

**Data Types: **`single` | `double`

`LossTolerance` — Loss tolerance

`1e-6` (default) | nonnegative scalar

Loss tolerance, specified as a nonnegative scalar.

If the function loss at some iteration is smaller than `LossTolerance`, then the training process terminates.

**Example: **`'LossTolerance',1e-8`

**Data Types: **`single` | `double`

`StepTolerance` — Step size tolerance

`1e-6` (default) | nonnegative scalar

Step size tolerance, specified as a nonnegative scalar.

If the step size at some iteration is smaller than `StepTolerance`, then the training process terminates.

**Example: **`'StepTolerance',1e-4`

**Data Types: **`single` | `double`

`ValidationData` — Validation data for training convergence detection

cell array | table

Validation data for training convergence detection, specified as a cell array or table.

During the training process, the software periodically estimates the validation loss by using `ValidationData`. If the validation loss increases more than `ValidationPatience` times in a row, then the software terminates the training.

You can specify `ValidationData` as a table if you use a table `Tbl` of predictor data that contains the response variable. In this case, `ValidationData` must contain the same predictors and response contained in `Tbl`. The software does not apply weights to observations, even if `Tbl` contains a vector of weights. To specify weights, you must specify `ValidationData` as a cell array.

If you specify `ValidationData` as a cell array, then it must have the following format:

- `ValidationData{1}` must have the same data type and orientation as the predictor data. That is, if you use a predictor matrix `X`, then `ValidationData{1}` must be an *m*-by-*p* or *p*-by-*m* matrix of predictor data that has the same orientation as `X`. The predictor variables in the training data `X` and `ValidationData{1}` must correspond. Similarly, if you use a predictor table `Tbl` of predictor data, then `ValidationData{1}` must be a table containing the same predictor variables contained in `Tbl`. The number of observations in `ValidationData{1}` and the predictor data can vary.
- `ValidationData{2}` must match the data type and format of the response variable, either `Y` or `ResponseVarName`. If `ValidationData{2}` is an array of responses, then it must have the same number of elements as the number of observations in `ValidationData{1}`. If `ValidationData{1}` is a table, then `ValidationData{2}` can be the name of the response variable in the table. If you want to use the same `ResponseVarName` or `formula`, you can specify `ValidationData{2}` as `[]`.
- Optionally, you can specify `ValidationData{3}` as an *m*-dimensional numeric vector of observation weights or the name of a variable in the table `ValidationData{1}` that contains observation weights. The software normalizes the weights with the validation data so that they sum to 1.

If you specify `ValidationData` and want to display the validation loss at the command line, set `Verbose` to `1`.
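As a sketch, the cell-array form might be used like this with a predictor matrix `X` and response vector `Y` (the 15% holdout fraction is arbitrary):

```
% Reserve 15% of the observations as validation data
cv = cvpartition(size(X,1),"Holdout",0.15);
XVal = X(test(cv),:);
YVal = Y(test(cv));

% Pass the held-out data in the {predictors,response} cell form
Mdl = fitrnet(X(training(cv),:),Y(training(cv)), ...
    "ValidationData",{XVal,YVal},"Verbose",1);
```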

`ValidationFrequency` — Number of iterations between validation evaluations

`1` (default) | positive integer scalar

Number of iterations between validation evaluations, specified as a positive integer scalar. A value of 1 indicates to evaluate validation metrics at every iteration.

**Note**

To use this name-value argument, you must specify `ValidationData`.

**Example: **`'ValidationFrequency',5`

**Data Types: **`single` | `double`

`ValidationPatience` — Stopping condition for validation evaluations

`6` (default) | nonnegative integer scalar

Stopping condition for validation evaluations, specified as a nonnegative integer scalar. Training stops if the validation loss is greater than or equal to the minimum validation loss computed so far, `ValidationPatience` times in a row. You can check the `Mdl.TrainingHistory` table to see the running total of times that the validation loss is greater than or equal to the minimum (`Validation Checks`).

**Example: **`'ValidationPatience',10`

**Data Types: **`single` | `double`

**Other Regression Options**

`CategoricalPredictors` — Categorical predictors list

vector of positive integers | logical vector | character matrix | string array | cell array of character vectors | `'all'`

Categorical predictors list, specified as one of the values in this table. The descriptions assume that the predictor data has observations in rows and predictors in columns.

| Value | Description |
|---|---|
| Vector of positive integers | Each entry in the vector is an index value indicating that the corresponding predictor is categorical. The index values are between 1 and `p`, where `p` is the number of predictors used to train the model. |
| Logical vector | A `true` entry means that the corresponding predictor is categorical. The length of the vector is `p`. |
| Character matrix | Each row of the matrix is the name of a predictor variable. The names must match the entries in `PredictorNames`. Pad the names with extra blanks so each row of the character matrix has the same length. |
| String array or cell array of character vectors | Each element in the array is the name of a predictor variable. The names must match the entries in `PredictorNames`. |
| `'all'` | All predictors are categorical. |

By default, if the predictor data is in a table (`Tbl`), `fitrnet` assumes that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. If the predictor data is a matrix (`X`), `fitrnet` assumes that all predictors are continuous. To identify any other predictors as categorical predictors, specify them by using the `CategoricalPredictors` name-value argument.

For the identified categorical predictors, `fitrnet` creates dummy variables using two different schemes, depending on whether a categorical variable is unordered or ordered. For an unordered categorical variable, `fitrnet` creates one dummy variable for each level of the categorical variable. For an ordered categorical variable, `fitrnet` creates one less dummy variable than the number of categories. For details, see Automatic Creation of Dummy Variables.

**Example: **`'CategoricalPredictors','all'`

**Data Types: **`single` | `double` | `logical` | `char` | `string` | `cell`

`PredictorNames` — Predictor variable names

string array of unique names | cell array of unique character vectors

Predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of `'PredictorNames'` depends on the way you supply the training data.

- If you supply `X` and `Y`, then you can use `'PredictorNames'` to assign names to the predictor variables in `X`.
  - The order of the names in `PredictorNames` must correspond to the predictor order in `X`. Assuming that `X` has the default orientation, with observations in rows and predictors in columns, `PredictorNames{1}` is the name of `X(:,1)`, `PredictorNames{2}` is the name of `X(:,2)`, and so on. Also, `size(X,2)` and `numel(PredictorNames)` must be equal.
  - By default, `PredictorNames` is `{'x1','x2',...}`.
- If you supply `Tbl`, then you can use `'PredictorNames'` to choose which predictor variables to use in training. That is, `fitrnet` uses only the predictor variables in `PredictorNames` and the response variable during training.
  - `PredictorNames` must be a subset of `Tbl.Properties.VariableNames` and cannot include the name of the response variable.
  - By default, `PredictorNames` contains the names of all predictor variables.
  - A good practice is to specify the predictors for training using either `'PredictorNames'` or `formula`, but not both.

**Example: **`'PredictorNames',{'SepalLength','SepalWidth','PetalLength','PetalWidth'}`

**Data Types: **`string` | `cell`

`ResponseName` — Response variable name

`"Y"` (default) | character vector | string scalar

Response variable name, specified as a character vector or string scalar.

- If you supply `Y`, then you can use `ResponseName` to specify a name for the response variable.
- If you supply `ResponseVarName` or `formula`, then you cannot use `ResponseName`.

**Example: **`"ResponseName","response"`

**Data Types: **`char` | `string`

`Weights` — Observation weights

nonnegative numeric vector | name of variable in `Tbl`

Observation weights, specified as a nonnegative numeric vector or the name of a variable in `Tbl`. The software weights each observation in `X` or `Tbl` with the corresponding value in `Weights`. The length of `Weights` must equal the number of observations in `X` or `Tbl`.

If you specify the input data as a table `Tbl`, then `Weights` can be the name of a variable in `Tbl` that contains a numeric vector. In this case, you must specify `Weights` as a character vector or string scalar. For example, if weights vector `W` is stored as `Tbl.W`, then specify it as `'W'`. Otherwise, the software treats all columns of `Tbl`, including `W`, as predictors when training the model.

By default, `Weights` is `ones(n,1)`, where `n` is the number of observations in `X` or `Tbl`.

`fitrnet` normalizes the weights to sum to 1.

**Data Types: **`single` | `double` | `char` | `string`
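The normalization can be illustrated directly:

```
w = [2 1 1];              % raw observation weights
wNormalized = w/sum(w);   % [0.5 0.25 0.25], sums to 1
```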

**Note**

You cannot use any cross-validation name-value argument together with the `'OptimizeHyperparameters'` name-value argument. You can modify the cross-validation for `'OptimizeHyperparameters'` only by using the `'HyperparameterOptimizationOptions'` name-value argument.

**Cross-Validation Options**

`CrossVal` — Flag to train cross-validated model

`'off'` (default) | `'on'`

Flag to train a cross-validated model, specified as `'on'` or `'off'`.

If you specify `'on'`, then the software trains a cross-validated model with 10 folds.

You can override this cross-validation setting using the `CVPartition`, `Holdout`, `KFold`, or `Leaveout` name-value argument. You can use only one cross-validation name-value argument at a time to create a cross-validated model.

Alternatively, cross-validate later by passing `Mdl` to `crossval`.

**Example: **`'Crossval','on'`

**Data Types: **`char` | `string`

`CVPartition` — Cross-validation partition

`[]` (default) | `cvpartition` partition object

Cross-validation partition, specified as a `cvpartition` partition object created by `cvpartition`. The partition object specifies the type of cross-validation and the indexing for the training and validation sets.

To create a cross-validated model, you can specify only one of these four name-value arguments: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.

**Example: **Suppose you create a random partition for 5-fold cross-validation on 500 observations by using `cvp = cvpartition(500,'KFold',5)`. Then, you can specify the cross-validated model by using `'CVPartition',cvp`.

`Holdout` — Fraction of data for holdout validation

scalar value in the range (0,1)

Fraction of the data used for holdout validation, specified as a scalar value in the range (0,1). If you specify `'Holdout',p`, then the software completes these steps:

- Randomly select and reserve `p*100`% of the data as validation data, and train the model using the rest of the data.
- Store the compact, trained model in the `Trained` property of the cross-validated model.

To create a cross-validated model, you can specify only one of these four name-value arguments: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.

**Example: **`'Holdout',0.1`

**Data Types: **`double` | `single`

`KFold` — Number of folds

`10` (default) | positive integer value greater than 1

Number of folds to use in a cross-validated model, specified as a positive integer value greater than 1. If you specify `'KFold',k`, then the software completes these steps:

- Randomly partition the data into `k` sets.
- For each set, reserve the set as validation data, and train the model using the other `k` – 1 sets.
- Store the `k` compact, trained models in a `k`-by-1 cell vector in the `Trained` property of the cross-validated model.

To create a cross-validated model, you can specify only one of these four name-value arguments: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.

**Example: **`'KFold',5`

**Data Types: **`single` | `double`
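A minimal sketch of 5-fold cross-validation with a predictor matrix `X` and response vector `Y`:

```
CVMdl = fitrnet(X,Y,"KFold",5);   % cross-validated model with 5 trained folds
cvMSE = kfoldLoss(CVMdl);         % average mean squared error over held-out folds
```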

`Leaveout` — Leave-one-out cross-validation flag

`'off'` (default) | `'on'`

Leave-one-out cross-validation flag, specified as `'on'` or `'off'`. If you specify `'Leaveout','on'`, then for each of the *n* observations (where *n* is the number of observations, excluding missing observations, specified in the `NumObservations` property of the model), the software completes these steps:

- Reserve the one observation as validation data, and train the model using the other *n* – 1 observations.
- Store the *n* compact, trained models in an *n*-by-1 cell vector in the `Trained` property of the cross-validated model.

To create a cross-validated model, you can specify only one of these four name-value arguments: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.

**Example: **`'Leaveout','on'`

**Hyperparameter Optimization Options**

`OptimizeHyperparameters` — Parameters to optimize

`'none'` (default) | `'auto'` | `'all'` | string array or cell array of eligible parameter names | vector of `optimizableVariable` objects

Parameters to optimize, specified as one of the following:

- `'none'` — Do not optimize.
- `'auto'` — Use `{'Activations','Lambda','LayerSizes','Standardize'}`.
- `'all'` — Optimize all eligible parameters.
- String array or cell array of eligible parameter names.
- Vector of `optimizableVariable` objects, typically the output of `hyperparameters`.

The optimization attempts to minimize the cross-validation loss (error) for `fitrnet` by varying the parameters. To control the cross-validation type and other aspects of the optimization, use the `HyperparameterOptimizationOptions` name-value argument.

**Note**

The values of `'OptimizeHyperparameters'` override any values you specify using other name-value arguments. For example, setting `'OptimizeHyperparameters'` to `'auto'` causes `fitrnet` to optimize hyperparameters corresponding to the `'auto'` option and to ignore any specified values for the hyperparameters.

The eligible parameters for `fitrnet` are:

- `Activations` — `fitrnet` optimizes `Activations` over the set `{'relu','tanh','sigmoid','none'}`.
- `Lambda` — `fitrnet` optimizes `Lambda` over continuous values in the range `[1e-5,1e5]/NumObservations`, where the value is chosen uniformly in the log transformed range.
- `LayerBiasesInitializer` — `fitrnet` optimizes `LayerBiasesInitializer` over the two values `{'zeros','ones'}`.
- `LayerWeightsInitializer` — `fitrnet` optimizes `LayerWeightsInitializer` over the two values `{'glorot','he'}`.
- `LayerSizes` — `fitrnet` optimizes over the three values `1`, `2`, and `3` fully connected layers, excluding the final fully connected layer. `fitrnet` optimizes each fully connected layer separately over `1` through `300` sizes in the layer, sampled on a logarithmic scale.

  **Note**

  When you use the `LayerSizes` argument, the iterative display shows the size of each relevant layer. For example, if the current number of fully connected layers is `3`, and the three layers are of sizes `10`, `79`, and `44` respectively, the iterative display shows `LayerSizes` for that iteration as `[10 79 44]`.

  **Note**

  To access up to five fully connected layers or a different range of sizes in a layer, use `hyperparameters` to select the optimizable parameters and ranges.
- `Standardize` — `fitrnet` optimizes `Standardize` over the two values `{true,false}`.

Set nondefault parameters by passing a vector of `optimizableVariable` objects that have nondefault values. As an example, this code sets the range of `NumLayers` to `[1 5]` and optimizes `Layer_4_Size` and `Layer_5_Size`:

```
load carsmall
params = hyperparameters('fitrnet',[Horsepower,Weight],MPG);
params(1).Range = [1 5];
params(10).Optimize = true;
params(11).Optimize = true;
```

Pass `params` as the value of `OptimizeHyperparameters`. For an example, see Custom Hyperparameter Optimization in Neural Network.

By default, the iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. For the optimization and plots, the objective function is log(1 + cross-validation loss). To control the iterative display, set the `Verbose` field of the `'HyperparameterOptimizationOptions'` name-value argument. To control the plots, set the `ShowPlots` field of the `'HyperparameterOptimizationOptions'` name-value argument.

For an example, see Minimize Cross-Validation Error in Neural Network.

**Example: **`'OptimizeHyperparameters','auto'`
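A sketch of a typical call with a predictor matrix `X` and response `Y`, shrinking the evaluation budget through `HyperparameterOptimizationOptions` (the 20-evaluation budget is arbitrary):

```
rng("default")   % for reproducibility of the Bayesian optimization
Mdl = fitrnet(X,Y,"OptimizeHyperparameters","auto", ...
    "HyperparameterOptimizationOptions", ...
    struct("MaxObjectiveEvaluations",20,"ShowPlots",false));
```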

`HyperparameterOptimizationOptions`

— Options for optimization

structure

Options for optimization, specified as a structure. This argument modifies the effect of the
`OptimizeHyperparameters`

name-value argument. All fields in the
structure are optional.

| Field Name | Values | Default |
|---|---|---|
| `Optimizer` | `'bayesopt'` — Use Bayesian optimization. Internally, this setting calls `bayesopt`. `'gridsearch'` — Use grid search with `NumGridDivisions` values per dimension. `'randomsearch'` — Search at random among `MaxObjectiveEvaluations` points. | `'bayesopt'` |
| `AcquisitionFunctionName` | `'expected-improvement-per-second-plus'`, `'expected-improvement'`, `'expected-improvement-plus'`, `'expected-improvement-per-second'`, `'lower-confidence-bound'`, or `'probability-of-improvement'`. Acquisition functions whose names include `per-second` do not yield reproducible results, because the optimization depends on the runtime of the objective function. | `'expected-improvement-per-second-plus'` |
| `MaxObjectiveEvaluations` | Maximum number of objective function evaluations. | `30` for `'bayesopt'` and `'randomsearch'`, and the entire grid for `'gridsearch'` |
| `MaxTime` | Time limit, specified as a positive real scalar. The time limit is in seconds, as measured by `tic` and `toc`. The run time can exceed `MaxTime` because `MaxTime` does not interrupt function evaluations. | `Inf` |
| `NumGridDivisions` | For `'gridsearch'`, the number of values in each dimension. The value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. This field is ignored for categorical variables. | `10` |
| `ShowPlots` | Logical value indicating whether to show plots. If `true`, this field plots the best observed objective function value against the iteration number. If you use Bayesian optimization (`Optimizer` is `'bayesopt'`), then this field also plots the best estimated objective function value. The best observed and best estimated objective function values correspond to the `BestSoFar (observed)` and `BestSoFar (estim.)` columns of the iterative display, respectively. You can find these values in the properties `ObjectiveMinimumTrace` and `EstimatedObjectiveMinimumTrace` of `Mdl.HyperparameterOptimizationResults`. If the problem includes one or two optimization parameters for Bayesian optimization, then `ShowPlots` also plots a model of the objective function against the parameters. | `true` |
| `SaveIntermediateResults` | Logical value indicating whether to save results when `Optimizer` is `'bayesopt'`. If `true`, this field overwrites a workspace variable named `'BayesoptResults'` at each iteration. The variable is a `BayesianOptimization` object. | `false` |
| `Verbose` | Display at the command line: `0` — No iterative display. `1` — Iterative display. `2` — Iterative display with extra information. For details, see the `bayesopt` `Verbose` name-value argument. | `1` |
| `UseParallel` | Logical value indicating whether to run Bayesian optimization in parallel, which requires Parallel Computing Toolbox™. Due to the nonreproducibility of parallel timing, parallel Bayesian optimization does not necessarily yield reproducible results. For details, see Parallel Bayesian Optimization. | `false` |
| `Repartition` | Logical value indicating whether to repartition the cross-validation at every iteration. If this field is `false`, the optimizer uses a single partition for the optimization. The setting `true` usually gives the most robust results because it takes partitioning noise into account. However, for good results, `true` requires at least twice as many function evaluations. | `false` |

Use no more than one of the following three options.

| Field Name | Values | Default |
|---|---|---|
| `CVPartition` | A `cvpartition` object, as created by `cvpartition` | `'Kfold',5` if you do not specify a cross-validation field |
| `Holdout` | A scalar in the range `(0,1)` representing the holdout fraction | |
| `Kfold` | An integer greater than 1 | |
**Example:** `'HyperparameterOptimizationOptions',struct('MaxObjectiveEvaluations',60)`

**Data Types:** `struct`
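As a sketch, you can combine several fields in one structure (the `carsmall` variables are used here; the table name and field choices are illustrative):

```matlab
load carsmall
cars = rmmissing(table(Horsepower,Weight,MPG));

% Limit the search to 60 evaluations, and suppress plots and iterative display
opts = struct("MaxObjectiveEvaluations",60,"ShowPlots",false,"Verbose",0);
Mdl = fitrnet(cars,"MPG","OptimizeHyperparameters","auto", ...
    "HyperparameterOptimizationOptions",opts)
```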

## Output Arguments

`Mdl` — Trained neural network regression model
`RegressionNeuralNetwork` object | `RegressionPartitionedModel` object

Trained neural network regression model, returned as a `RegressionNeuralNetwork` or `RegressionPartitionedModel` object.

If you set any of the name-value arguments `CrossVal`, `CVPartition`, `Holdout`, `KFold`, or `Leaveout`, then `Mdl` is a `RegressionPartitionedModel` object. Otherwise, `Mdl` is a `RegressionNeuralNetwork` model.

To reference properties of `Mdl`, use dot notation.
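For instance, the following sketch trains a model on the `carsmall` data and accesses a few properties by dot notation (the table name is illustrative):

```matlab
load carsmall
cars = rmmissing(table(Horsepower,Weight,MPG));
Mdl = fitrnet(cars,"MPG");

Mdl.LayerSizes        % sizes of the fully connected layers
Mdl.LayerWeights{1}   % weight matrix of the first fully connected layer
Mdl.LayerBiases{1}    % bias vector of the first fully connected layer
```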

## More About

### Neural Network Structure

The default neural network regression model has the following layer structure.

| Structure | Description |
|---|---|
| Input | This layer corresponds to the predictor data in `Tbl` or `X`. |
| First fully connected layer | This layer has 10 outputs by default. You can widen the layer or add more fully connected layers to the network by specifying the `LayerSizes` name-value argument. You can find the weights and biases for this layer in the `Mdl.LayerWeights{1}` and `Mdl.LayerBiases{1}` properties of `Mdl`, respectively. |
| ReLU activation function | You can change the activation function by specifying the `Activations` name-value argument. |
| Final fully connected layer | This layer has one output. You can find the weights and biases for this layer in the `Mdl.LayerWeights{end}` and `Mdl.LayerBiases{end}` properties of `Mdl`, respectively. |
| Output | This layer corresponds to the predicted response values. |

For an example that shows how a regression neural network model with this layer structure returns predictions, see Predict Using Layer Structure of Regression Neural Network Model.
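As a minimal sketch of this structure, you can reproduce a prediction by hand from the layer weights and biases. This assumes the default architecture (one fully connected layer of 10 outputs followed by ReLU, then the final fully connected layer) and no standardization; the observation values are illustrative:

```matlab
load carsmall
cars = rmmissing(table(Horsepower,Weight,MPG));
Mdl = fitrnet(cars,"MPG");          % default: LayerSizes = 10, ReLU activation

x = [150; 3000];                    % one observation: Horsepower, Weight
relu = @(z) max(z,0);

% First fully connected layer, ReLU activation, then final fully connected layer
h = relu(Mdl.LayerWeights{1}*x + Mdl.LayerBiases{1});
yhat = Mdl.LayerWeights{end}*h + Mdl.LayerBiases{end};

% yhat should match predict(Mdl,[150 3000]) for this default network
```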

## Tips

- Always try to standardize the numeric predictors (see `Standardize`). Standardization makes predictors insensitive to the scales on which they are measured.
- After training a model, you can generate C/C++ code that predicts responses for new data. Generating C/C++ code requires MATLAB Coder™. For details, see Introduction to Code Generation.
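The first tip can be sketched as follows (the predictor scales in `carsmall` differ by an order of magnitude, so standardization matters here; the table name is illustrative):

```matlab
load carsmall
cars = rmmissing(table(Horsepower,Weight,MPG));

% Standardize numeric predictors so training is insensitive to their scales
Mdl = fitrnet(cars,"MPG","Standardize",true);
```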

## Algorithms

### Training Solver

`fitrnet` uses a limited-memory Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm (LBFGS) [3] as its loss function minimization technique, where the software minimizes the mean squared error (MSE). The LBFGS solver uses a standard line-search method with an approximation to the Hessian.
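As a sketch of the objective being minimized, the MSE of a fitted model on its training data can be computed by hand and compared against the model's `loss` function (whose default regression loss is the MSE):

```matlab
load carsmall
cars = rmmissing(table(Horsepower,Weight,MPG));
Mdl = fitrnet(cars,"MPG");

yhat = predict(Mdl,cars);
mseManual = mean((cars.MPG - yhat).^2);   % mean squared error by hand
mseModel  = loss(Mdl,cars,"MPG");         % default loss for this model is MSE
```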

## References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics*, pp. 249–256. 2010.

[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 1026–1034. 2015.

[3] Nocedal, J., and S. J. Wright. *Numerical Optimization*, 2nd ed. New York: Springer, 2006.

## Extended Capabilities

### Automatic Parallel Support

Accelerate code by automatically running computation in parallel using Parallel Computing Toolbox™.

To perform parallel hyperparameter optimization, use the `'HyperparameterOptimizationOptions',struct('UseParallel',true)` name-value argument in the call to the `fitrnet` function.

For more information on parallel hyperparameter optimization, see Parallel Bayesian Optimization.

For general information about parallel computing, see Run MATLAB Functions with Automatic Parallel Support (Parallel Computing Toolbox).
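A minimal sketch of such a call (requires Parallel Computing Toolbox; the `carsmall` variables and table name are illustrative):

```matlab
load carsmall
cars = rmmissing(table(Horsepower,Weight,MPG));
Mdl = fitrnet(cars,"MPG","OptimizeHyperparameters","auto", ...
    "HyperparameterOptimizationOptions",struct("UseParallel",true))
```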

## Version History

**Introduced in R2021a**
