Training Data type error for a CNN using trainnet function
I'm trying to use a convolution1dLayer for my sequence input data, but when I try to train the network I get the error:
"Error using trainnet
Invalid targets. Network expects numeric or categorical targets, but received a cell array."
I've looked at many examples of how the data must be structured, but even though mine is in the same format, it doesn't work.
For the predictors I'm doing a test with only 4 observations, each one with 4 features and 36191 time points.
For the targets there are also four observations, each with a single target and 36191 time points.
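Roughly, the data described above is built like this (synthetic values for illustration; the real data is loaded elsewhere):

```matlab
numObs = 4; numFeatures = 4; numPoints = 36191;
% Predictors: cell array of C-by-T matrices (4 features x 36191 time steps)
trainDataX = cell(numObs,1);
% Targets: cell array of 1-by-T vectors (one target per time step)
trainDataY = cell(numObs,1);
for k = 1:numObs
    trainDataX{k} = rand(numFeatures,numPoints);
    trainDataY{k} = rand(1,numPoints);
end
```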
I can't understand why it doesn't accept the data; as I said, it matches many other examples. Here is the code for the CNN-LSTM network and the trainnet call:
lgraph = layerGraph();
tempLayers = [
sequenceInputLayer(4,"Name","input")
convolution1dLayer(4,32,"Name","conv1d","Padding","same")
globalAveragePooling1dLayer("Name","gapool1d")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = lstmLayer(25,"Name","lstm");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = lstmLayer(25,"Name","lstm_1");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
concatenationLayer(1,2,"Name","concat")
lstmLayer(55,"Name","lstm_2")
dropoutLayer(0.5,"Name","drop")
fullyConnectedLayer(1,"Name","fc")
sigmoidLayer("Name","sigmoid")];
lgraph = addLayers(lgraph,tempLayers);
% clean up helper variable
clear tempLayers;
lgraph = connectLayers(lgraph,"gapool1d","lstm");
lgraph = connectLayers(lgraph,"gapool1d","lstm_1");
lgraph = connectLayers(lgraph,"lstm","concat/in1");
lgraph = connectLayers(lgraph,"lstm_1","concat/in2");
plot(lgraph);
epochs = 800;
miniBatchSize = 128;
LRDropPeriod = 200;
InitialLR = 0.01;
LRDropFactor = 0.1;
valFrequency = 30;
options = trainingOptions("adam", ...
MaxEpochs=epochs, ...
SequencePaddingDirection="left", ...
Shuffle="every-epoch", ...
GradientThreshold=1, ...
InitialLearnRate=InitialLR, ...
LearnRateSchedule="piecewise", ...
LearnRateDropPeriod=LRDropPeriod, ...
LearnRateDropFactor=LRDropFactor, ...
MiniBatchSize=miniBatchSize, ...
Plots="training-progress", ...
Metrics="rmse", ...
Verbose=0, ...
ExecutionEnvironment="parallel");
CNN_LSTM = trainnet(trainDataX, trainDataY, dlnetwork(lgraph),"mse",options);
I'm using R2023b.
Accepted Answer
Milan Bansal
on 23 Jul 2024
Hi Zowie Silva,
I understand that you are facing an error while using the trainnet function to train a sequence-to-sequence neural network.
Use "analyzeNetwork" to inspect the structure of your neural network.
Analyzing the network shows that the model loses the "T" (time) dimension after the "gapool1d" layer, which is why the network expects a numeric or categorical target instead of a sequence.
Your network uses globalAveragePooling1dLayer. As per the documentation of globalAveragePooling1dLayer, for time-series and vector-sequence input (data with three dimensions corresponding to the "C" (channel), "B" (batch), and "T" (time) dimensions), the layer pools over the "T" (time) dimension. That means that after the pooling, only the "C" and "B" dimensions remain.
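The dimension collapse can be seen directly on a formatted dlarray (shapes assumed from your network; this is only an illustration of what the pooling layer computes, not its implementation):

```matlab
% Output of conv1d for one observation: C=32 channels, B=1, T=36191 steps
X = dlarray(rand(32,1,36191),"CBT");
% globalAveragePooling1dLayer averages over the "T" dimension ...
Y = mean(X,finddim(X,"T"));
% ... so only the "C" and "B" dimensions carry information afterwards,
% and the downstream layers no longer see a sequence.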
If you instead wish to average-pool along the channel dimension, you can create a functionLayer and replace the "gapool1d" layer with it. Please refer to the following code snippet to modify your model:
lgraph = layerGraph();
% Function layer that averages over the channel ("C") dimension instead of "T"
avgPoolChannel = functionLayer(@(X) mean(X,1),'Name','avgPoolChannel');
tempLayers = [
sequenceInputLayer(4,"Name","input")
convolution1dLayer(4,32,"Name","conv1d","Padding","same")
avgPoolChannel]; % replaces the globalAveragePooling1dLayer
lgraph = addLayers(lgraph,tempLayers);
tempLayers = lstmLayer(25,"Name","lstm");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = lstmLayer(25,"Name","lstm_1");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
concatenationLayer(1,2,"Name","concat")
lstmLayer(55,"Name","lstm_2")
dropoutLayer(0.5,"Name","drop")
fullyConnectedLayer(1,"Name","fc")
sigmoidLayer("Name","sigmoid")];
lgraph = addLayers(lgraph,tempLayers);
% clean up helper variable
clear tempLayers;
lgraph = connectLayers(lgraph,"avgPoolChannel","lstm"); % connect to function layer
lgraph = connectLayers(lgraph,"avgPoolChannel","lstm_1"); % connect to function layer
lgraph = connectLayers(lgraph,"lstm","concat/in1");
lgraph = connectLayers(lgraph,"lstm_1","concat/in2");
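With the graph modified, you can verify the "T" dimension survives and retrain with the same data and options from your question (variable names assumed from there):

```matlab
net = dlnetwork(lgraph);
analyzeNetwork(net)  % check that a "T" dimension now reaches the LSTM layers
CNN_LSTM = trainnet(trainDataX, trainDataY, net, "mse", options);
```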
Please refer to the following documentation links to learn more:
- globalAveragePooling1dLayer: https://www.mathworks.com/help/releases/R2023b/deeplearning/ref/nnet.cnn.layer.globalaveragepooling1dlayer.html
- functionLayer: https://www.mathworks.com/help/releases/R2023b/deeplearning/ref/nnet.cnn.layer.functionlayer.html
- trainnet: https://in.mathworks.com/help/releases/R2023b/deeplearning/ref/trainnet.html
Hope this helps!