Error using trainNetwork
Invalid training data. For cell array input, responses must be an N-by-1 cell array of sequences, where N is the number of sequences. The spatial and channel dimensions of the sequences must be the same as the output size of the last layer (1).
% Edit - running code here
load CycleAgeingData.mat
numHiddenUnits = 50;
inputSize1 = size(Data{1},1)
inputSize1 = 7
% layers = [
% sequenceInputLayer(numChannels)
% lstmLayer(128)
% fullyConnectedLayer(numChannels)
% regressionLayer];
%
layers = [ ...
sequenceInputLayer(inputSize1)
lstmLayer(50, 'OutputMode', 'sequence')
fullyConnectedLayer(7)
dropoutLayer(0.011547480894612765)
fullyConnectedLayer(1)
regressionLayer];
% layersLSTM = [ ...
% sequenceInputLayer(inputSize1)
% lstmLayer(numHiddenUnits)
% fullyConnectedLayer(1)
% regressionLayer
% ];
% cell1x = num2cell(features', 1)';
% targets=cap6/cap6(1)
% cell1yB = num2cell(targets);
numChannels = size(Data{1},1)
numChannels = 7
numObservations = numel(Data);
idxTrain = 1:floor(0.7*numObservations);
idxval = floor(0.7*numObservations)+1:numObservations-2
idxval = 1x3
11 12 13
idxTest = floor(0.7*numObservations)+4:numObservations;
dataTrain = Data(idxTrain);
dataVal = Data(idxval)
dataVal = 3x1 cell array
{7x10 double} {7x32 double} {7x4 double}
dataTest = Data(idxTest);
%trainindx=(1:24)
trainindx = 1x24
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
%validindx=(25:29)
validindx = 1x5
25 26 27 28 29
%testindx=(30:34)
testindx = 1x5
30 31 32 33 34
traincell2yB = target(idxTrain, :);
valcell2yB = target(idxval, :);
testcell2yB = target(idxTest, :);
options = trainingOptions('rmsprop', ...
'MaxEpochs', 1500, ...
'MiniBatchSize', 50, ...
'InitialLearnRate', 0.00036008553147273947, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropPeriod', 125, ...
'LearnRateDropFactor', 0.02, ...
'Shuffle', 'every-epoch', ...
'ValidationData', {dataVal, valcell2yB}, ...
'ValidationFrequency', 50, ...
'Verbose', 1, ...
'Plots', 'training-progress');
% options = trainingOptions('rmsprop', ...
% 'InitialLearnRate', 0.001, ...
% 'MaxEpochs',500, ...
% 'MiniBatchSize',50, ...
% 'Plots','training-progress', 'ValidationData', {valcell1x, valcell1yB});
% options = trainingOptions('adam', ...
% 'InitialLearnRate', 0.001, ...
% 'MaxEpochs',500, ...
% 'MiniBatchSize',50, ...
% 'Plots','training-progress', 'ValidationData', {valcell1x, valcell1yB});
netLSTM1 = trainNetwork(dataTrain, traincell2yB, layers, options);
Error using trainNetwork (line 191)
Invalid training data. For cell array input, responses must be an N-by-1 cell array of sequences, where N is the number of sequences. The spatial and channel dimensions of the sequences must be the same as the output size of the last layer (1).
Here is my data

9 Comments

Could you provide a description of what your input data is?
The input data consists of features extracted from various lithium-ion batteries. I want to merge these data sets to train the model. I have attached the workspace and saved the input in 'Data'
Are the features captured in the rows or columns of Data? Each cell contains 7 rows, but a variable number of columns. I would expect the number of features to be constant.
Note the following about numeric feature input:
The numeric array must be an N-by-numFeatures numeric array, where N is the number of observations and numFeatures is the number of features of the input data.
The features are captured in the rows: a constant number of 7 features (such as mean and std). The number of columns varies because each cell's degradation was studied under different conditions. The target output is the capacity; for example, a 7x40 input has a 1x40 capacity target.
Ok. Note in the instructions I copied from the doc that features need to be in the columns, and observations in the rows.
@Cris LaPierre, I think converting the features to columns would affect the sequenceInputLayer.
For vector sequence input, InputSize is a scalar corresponding to the number of features. (reference)
MATLAB already expects the number of columns to correspond to the number of features. You will need to update the input to inputSize so that it returns the number of columns instead of rows.
@Cris LaPierre, each cell has the format M x L, where M is the number of features (which remains fixed for all the cells) and L is the variable length of the training data, and I have used the number of features as the input size.
Sorry, I now understand you are trying to perform sequence-to-sequence regression. That changes some things. You might find this example useful. Sequence to Sequence Regression using Deep Learning
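For sequence-to-sequence regression with cell array input, each predictor cell should be numFeatures-by-L and its matching response cell 1-by-L. A quick shape check along these lines can flag problems before calling trainNetwork (a sketch only; the names `Data` and `target` are assumed from the question's workspace):

```matlab
% Sketch: sanity-check shapes for sequence-to-sequence regression.
% Each predictor cell: [numFeatures x L]; each response cell: [1 x L].
assert(all(cellfun(@(c) size(c,1), Data) == size(Data{1},1)), ...
    'All predictor cells must have the same number of feature rows')

% Responses stored as column vectors must be transposed to 1-by-L.
badOrientation = find(cellfun(@(r) size(r,1) > 1, target));
if ~isempty(badOrientation)
    fprintf('Responses %s are column vectors; transpose them to 1-by-L.\n', ...
        mat2str(badOrientation))
end
```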


 Accepted Answer

I believe the issue is that the sequence length is not the same in each predictor/response pair.
There are two reasons for this. First, your response vectors are Nx1, but need to be transposed to 1xN so that the training and response sequences have the same orientation and length. Second, one of your sequences has a different response length.
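Both problems can be spotted programmatically. A quick comparison like the following (a sketch; `Data` and `target` assumed from the workspace attached to the question) reveals which sequence needs correcting:

```matlab
% Sketch: locate sequences whose response length differs from the
% predictor length.
lenX = cellfun(@(c) size(c, 2), Data);   % time steps per predictor cell
lenY = cellfun(@numel, target);          % elements per response cell
mismatched = find(lenX(:) ~= lenY(:))    % indices needing correction
```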

3 Comments

Here's a working example. I had to disable the plots option to run the code here.
load CycleAgeingData.mat
% Transpose response data
target = cellfun(@transpose,target,'UniformOutput',false);
% Hack to get code to run here.
% Correct length difference
Data{5} = Data{5}(:,1:length(target{5}));
numHiddenUnits = 50;
numChannels = size(Data{1},1)
numChannels = 7
numResponses = size(target{1},1)
numResponses = 1
layers = [ ...
sequenceInputLayer(numChannels)
lstmLayer(numHiddenUnits, 'OutputMode', 'sequence')
fullyConnectedLayer(7)
dropoutLayer(0.011547480894612765)
fullyConnectedLayer(numResponses)
regressionLayer];
numObservations = numel(Data);
idxTrain = 1:floor(0.7*numObservations);
idxval = floor(0.7*numObservations)+1:numObservations-2
idxval = 1x3
11 12 13
idxTest = floor(0.7*numObservations)+4:numObservations;
dataTrain = Data(idxTrain);
dataVal = Data(idxval)
dataVal = 3x1 cell array
{7x10 double} {7x32 double} {7x4 double}
dataTest = Data(idxTest);
traincell2yB = target(idxTrain, :);
valcell2yB = target(idxval, :);
testcell2yB = target(idxTest, :);
options = trainingOptions('adam', ...
'MaxEpochs', 1500, ...
'MiniBatchSize', 50, ...
'InitialLearnRate', 0.00036008553147273947, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropPeriod', 125, ...
'LearnRateDropFactor', 0.02, ...
'Shuffle', 'every-epoch', ...
'ValidationData', {dataVal, valcell2yB}, ...
'ValidationFrequency', 50, ...
'Verbose', 1);
% 'Plots', 'training-progress');
netLSTM1 = trainNetwork(dataTrain, traincell2yB, layers, options)
Training on single CPU.
| Epoch | Iteration | Time Elapsed (hh:mm:ss) | Mini-batch RMSE | Validation RMSE | Mini-batch Loss | Validation Loss | Base Learning Rate |
|     1 |         1 | 00:00:00 | 3.63 | 2.64 | 6.6028 | 3.4722 | 0.0004 |
|    50 |        50 | 00:00:00 | 2.11 | 1.67 | 2.2178 | 1.4003 | 0.0004 |
|   100 |       100 | 00:00:01 | 1.38 | 1.24 | 0.9512 | 0.7710 | 0.0004 |
|   150 |       150 | 00:00:01 | 1.23 | 1.18 | 0.7562 | 0.6937 | 7.2017e-06 |
|   200 |       200 | 00:00:02 | 1.22 | 1.18 | 0.7454 | 0.6929 | 7.2017e-06 |
|   250 |       250 | 00:00:02 | 1.20 | 1.18 | 0.7247 | 0.6919 | 7.2017e-06 |
|   300 |       300 | 00:00:03 | 1.21 | 1.18 | 0.7364 | 0.6919 | 1.4403e-07 |
|   350 |       350 | 00:00:03 | 1.21 | 1.18 | 0.7336 | 0.6919 | 1.4403e-07 |
|   400 |       400 | 00:00:03 | 1.21 | 1.18 | 0.7288 | 0.6919 | 2.8807e-09 |
|   450 |       450 | 00:00:04 | 1.19 | 1.18 | 0.7068 | 0.6919 | 2.8807e-09 |
|   500 |       500 | 00:00:04 | 1.19 | 1.18 | 0.7081 | 0.6919 | 2.8807e-09 |
|   550 |       550 | 00:00:05 | 1.19 | 1.18 | 0.7128 | 0.6919 | 5.7614e-11 |
|   600 |       600 | 00:00:05 | 1.23 | 1.18 | 0.7504 | 0.6919 | 5.7614e-11 |
|   650 |       650 | 00:00:06 | 1.19 | 1.18 | 0.7118 | 0.6919 | 1.1523e-12 |
|   700 |       700 | 00:00:06 | 1.20 | 1.18 | 0.7216 | 0.6919 | 1.1523e-12 |
|   750 |       750 | 00:00:06 | 1.20 | 1.18 | 0.7177 | 0.6919 | 1.1523e-12 |
|   800 |       800 | 00:00:07 | 1.22 | 1.18 | 0.7394 | 0.6919 | 2.3045e-14 |
|   850 |       850 | 00:00:07 | 1.21 | 1.18 | 0.7283 | 0.6919 | 2.3045e-14 |
|   900 |       900 | 00:00:07 | 1.20 | 1.18 | 0.7226 | 0.6919 | 4.6091e-16 |
|   950 |       950 | 00:00:08 | 1.20 | 1.18 | 0.7142 | 0.6919 | 4.6091e-16 |
|  1000 |      1000 | 00:00:08 | 1.20 | 1.18 | 0.7176 | 0.6919 | 4.6091e-16 |
|  1050 |      1050 | 00:00:09 | 1.18 | 1.18 | 0.7013 | 0.6919 | 9.2182e-18 |
|  1100 |      1100 | 00:00:09 | 1.21 | 1.18 | 0.7280 | 0.6919 | 9.2182e-18 |
|  1150 |      1150 | 00:00:10 | 1.20 | 1.18 | 0.7255 | 0.6919 | 1.8436e-19 |
|  1200 |      1200 | 00:00:10 | 1.19 | 1.18 | 0.7097 | 0.6919 | 1.8436e-19 |
|  1250 |      1250 | 00:00:10 | 1.20 | 1.18 | 0.7196 | 0.6919 | 1.8436e-19 |
|  1300 |      1300 | 00:00:11 | 1.19 | 1.18 | 0.7129 | 0.6919 | 3.6873e-21 |
|  1350 |      1350 | 00:00:11 | 1.19 | 1.18 | 0.7120 | 0.6919 | 3.6873e-21 |
|  1400 |      1400 | 00:00:12 | 1.21 | 1.18 | 0.7278 | 0.6919 | 7.3746e-23 |
|  1450 |      1450 | 00:00:12 | 1.19 | 1.18 | 0.7091 | 0.6919 | 7.3746e-23 |
|  1500 |      1500 | 00:00:12 | 1.20 | 1.18 | 0.7189 | 0.6919 | 7.3746e-23 |
Training finished: Max epochs completed.
netLSTM1 =
  SeriesNetwork with properties:
         Layers: [6x1 nnet.cnn.layer.Layer]
     InputNames: {'sequenceinput'}
    OutputNames: {'regressionoutput'}
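Once training completes, the held-out test sequences can be evaluated with predict. A sketch (assuming `dataTest` and `testcell2yB` from the code above, with the responses already transposed to 1-by-L):

```matlab
% Sketch: evaluate the trained network on the test sequences.
YPred = predict(netLSTM1, dataTest);   % cell array of 1-by-L predictions
rmsePerSeq = cellfun(@(yhat, y) sqrt(mean((yhat - y).^2)), ...
    YPred, testcell2yB);               % per-sequence RMSE
disp(rmsePerSeq)
```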
@Cris LaPierre, I tried this solution and I get the same error, but I agree with you; I think the error comes from the sequence length of the target output.


More Answers (0)


Asked: 1 May 2024
Commented: 1 May 2024
