Unable to incorporate my own loss function in R2024a

Switching from R2023b to R2024a, I made some changes in my net (CNN): e.g. I modified the input/output, replaced the regression layer with a softmax layer, use the trainnet function, etc.
I expected better performance and forward compatibility (regressionLayer is no longer recommended), and I have a vision of optimizing my net with the pruning approach etc.
Contrary to the previous version, I am not able to involve my own loss function (as it was done before).
The (simplified) code is below; the syntax used was inspired by an example:
The error message is:
Error using trainnet (line 46)
Error calling function during training.
Error in callMyLoss (line 55)
myTrainedNet = trainnet(Y,target,net, @(Y,target) myOwnLoss(name,Y,target),options);
Caused by:
Error using myOwnLoss
The specified superclass 'nnet.layer.softmaxLayer' contains a parse error, cannot be found on MATLAB's
search
path, or is shadowed by another file with the same name.
Error in callMyLoss>@(Y,target)myOwnLoss(name,Y,target) (line 55)
myTrainedNet = trainnet(Y,target,net, @(Y,target) myOwnLoss(name,Y,target),options);
Error in nnet.internal.cnn.util.UserCodeException.fevalUserCode (line 11)
[varargout{1:nargout}] = feval(F, varargin{:});
classdef myOwnLoss < nnet.layer.softmaxLayer
    % own Loss
    methods
        %function layer = sseClassificationLayer(name)
        function layer = myOwnLoss(name)
            % layer = sseClassificationLayer(name) creates a sum of squares
            % error classification layer and specifies the layer name.

            % Set layer name.
            layer.Name = name;
            % Set layer description.
            layer.Description = 'my own Loss v.2024a';
        end
        function loss = forwardLoss(layer, Y, T)
            %%% function loss = forwardLoss(Yo, To)
            % loss = forwardLoss(layer, Y, T) returns the Tdiff loss between
            % the predictions Y and the training targets T.
            disp("myLoss");
            aa=1;
            % just something very simple
            loss = sum(Y-T,'all');
        end
        % original backwardLoss
        function dX = backwardLoss(layer, Y, T)
            numObservations = size( Y, 3);
            dX = (Y - T)./numObservations;
        end
    end
end
%=======================eof=========================

 Accepted Answer

The error occurs because "softmaxLayer" is actually a function and cannot act as a base class for a class. You can confirm this by using the following command:
which softmaxLayer
And then opening the file path:
function layer = softmaxLayer( nameValueArgs )
% softmaxLayer Softmax layer
% ...
end
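As a quick additional check (my suggestion, not part of the original answer), the exist function can also distinguish a class from a function:

```matlab
% How does the name "softmaxLayer" resolve on the path?
% exist(name,'class') returns 8 only for a classdef class and 0 otherwise,
% so a plain function file such as softmaxLayer should give 0 here.
disp(exist('softmaxLayer','class'))
% exist(name,'file') reports the file type instead (2 for an M-file).
disp(exist('softmaxLayer','file'))
```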
I also tried running your example code in MATLAB R2023b and received the same error, so it does not work in R2023b either.
According to the documentation, it is recommended to define a custom loss function as a MATLAB function instead of a class: https://www.mathworks.com/help/deeplearning/ug/define-custom-training-loops-loss-functions-and-networks.html#mw_40e667e2-1ea1-4793-b079-8bc763144200. The function should have the syntax "loss = f(Y,T)", where "Y" and "T" are the predictions and targets, respectively. Using a function has several benefits such as supporting automatic differentiation if all your operations are compatible with "dlarray".
If you want a more complex loss function which takes inputs other than the predictions and targets, you need to use a custom training loop with the custom loss function, as detailed in the following example: https://www.mathworks.com/help/deeplearning/ug/train-generative-adversarial-network.html#TrainGenerativeAdversarialNetworkGANExample-9.
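For illustration only (the function name and the sum-of-squares formula below are my placeholders, not code from the thread), a function-based loss with the "loss = f(Y,T)" syntax could look like:

```matlab
function loss = mySseLoss(Y,T)
% mySseLoss  Sum-of-squared-errors loss with the "loss = f(Y,T)" syntax.
% Y (predictions) and T (targets) arrive as dlarray objects; because the
% operations below are dlarray-compatible, automatic differentiation
% supplies the gradients and no backwardLoss method is needed.
% trainnet expects the returned loss to be a scalar.
loss = sum((Y - T).^2, 'all');
end
```

It would then be passed to training by name, e.g. trainnet(XTrain,TTrain,net,@mySseLoss,options).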
Hope this helps!

7 Comments

Hi Malay,
thank you for the quick reaction; you are right, but it is not exactly my case.
Originally (under R2023b) I had time series and (after training) got probabilities for every input point (this was realized by "regressionLayer" working in sequence-to-sequence mode).
At the input was a "sequenceInputLayer", followed by an outermost loop constructed from folding and unfolding layers. In this way the time series were converted into (pseudo)images and processed by a U-Net (constructed by "unet3dLayers"). At the end the conversion was reverted back to time series. The net was trained with "trainNetwork",
and I was optionally able to use my own-design loss function.
Following the new recommendation, I switched to "image mode": I use "inputLayer", train the net with "trainnet", and use "softmaxLayer" as the output layer (the net is constructed by "unet3d").
It works, at least formally. Now I want to call my own loss function (as before), but I am not able to.
Is my problem/desire clearer now?
Regards Petr
Could you please try replacing your class-based loss function with a function-based one and then call "trainnet" as follows?
trainnet(Y,target,net, @myOwnLoss,options);
tested - it doesn't work either; however, with a different error message:
Error using deep.internal.train.Trainer/train (line 74)
File: fevalUserCode.m Line: 11 Column: 6
Invalid expression. When calling a function or indexing a variable, use parentheses. Otherwise, check for
mismatched delimiters.
Error in deep.internal.train.trainnet (line 54)
net = train(trainer, net, mbq);
Error in trainnet (line 42)
[net,info] = deep.internal.train.trainnet(mbq, net, loss, options, ...
Error in callMyLoss (line 55)
myTrainedNet = trainnet(Y,target,net, @(Y,target) myOwnLoss2(layer,Y,target),options);
Could you share the training code and any data files that you might be using so that I can take a closer look?
Dear Malay,
I don't fully understand your request (it has nothing to do with the formal problem of calling the myLoss fcn, I'd say), but I've prepared an answer.
An article about our work was recently published.
As a supplement we also provided demo code, a data example and a trained net (all items belong to the "old" MATLAB version) - this material is available via
We use our own loss (a combination of RMS, correlation and minimization of the time shift of probability maxima - the setup is based on our numerical experiments - cf. the article).
Recently we switched to the latest MATLAB version (R2024a), expecting future compatibility and faster performance. Last but not least: I believed the efficiency and robustness of the proposed method could be improved via the pruning approach. Unfortunately, it is not available for some layers required by the 3D U-Net (particularly 3D convolutional operations) - at least in the current MATLAB version :-( .
I prepared a version (AECubeDemo2 - below) using the new-version net design (the same demo data are used, however reformatted in the code).
If there is a problem with the given links, let me know and I'll send the items directly.
Thank you for your interest and care.
Regards Petr
function predictedProbab = AECubeDemo2
%
% updated version - uses R2024a CNN formalism (prepared for the own-loss-
% function utilisation problem discussion).
%
%
% "Acoustic Emission Cube Demo"
% Acoustic Emission multiple event detection with use of CubeNet/U-Net CNN
%
%
%
% the code is a supplement to the article:
% Petr Kolář and Matěj Petružálek:
% Discrimination of doubled Acoustic Emission events using Neural Networks
%
% submitted to Ultrasonic
%
% input: data file identification is prescribed below
% output: predicted probabilities from the testing data
% note: the interpretation of these probabilities is not included in this
% demo code and should be done by the user
%
% required files:
% * code (function AECubeDemo)
% * (testing) data (examples quoted in the article)
% * trained RNN for onset(s) detection
% * trained RNN for OT prediction
% (all these files are part of the package)
%
%
%
% created by P. Kolar kolar@ig.cas.cz
%
%
% compatibility: created under MATLAB R2022b
% required: MATLAB core
% Statistics and Machine Learning Toolbox
% Signal Processing Toolbox
%
% version 1.0 / 04/10/2023
%
% how the Net was created:
%
% l=unet3dLayers([4 4 100 1],3,'EncoderDepth',2,'NumFirstEncoderFilters',16)
%
close all
close(findall(0,'tag','NNET_CNN_TRAININGPLOT_FIGURE'));
netName='C2NN400_50-50_L123f.mat'; % net trained on large data
train=0; % no training - pre-trained net is used
train=1; % training on available data (seismograms) is performed
if train
netName='trainedNet_2024a'; % Net trained on actual data
end
% plot outputs
plot1=0;
plot1=1;
rng('default') % for reproducibility
% input data files
files=what('d:\Data2count\dataAE_SyntDoubleAg_4mw'); % directory with input files
len1=400; % sub-seismogram length
dt=1/10e6; % sampling
%%{
nFiles=length(files.mat);
iFile=randperm(nFiles);
% data division
nT=floor(0.7*nFiles); % 0.0 - 0.7 for training
nV=floor(0.8*nFiles); % 0.7 - 0.8 for validation
% position of channels in the Cube
posTab=[2 1; 2 2; 3 3; 3 4;...
1 1; 1 2; 2 3; 2 4;...
3 1; 3 2; 4 3; 4 4;...
4 2; 4 1; 1 3; 1 4;];
signal=[];
probab=[];
info=[];
for i=1:nT
jj=iFile(i);
str=[files.path,'\',files.mat{jj}];
out=isfile(str);
if out ~= 1
aa=1;
end
[signal1,probab1,info1]=getSignal3D(str,posTab,len1);
lenS=length(signal1);
signal=[signal signal1];
probab=[probab probab1];
info=[info info1];
aa=1;
end
signalTrain=signal';
probabTrain=probab';
infoTrain=info';
signal=[];
probab=[];
info=[];
for i=nT+1:nV
jj=iFile(i);
str=[files.path,'/',files.mat{jj}];
[signal1,probab1,info1]=getSignal3D(str,posTab,len1);
lenS=length(signal1);
signal=[signal signal1];
probab=[probab probab1];
info=[info info1];
aa=1;
end
signalValid=signal';
probabValid=probab';
infoValid=info';
signal=[];
probab=[];
info=[];
iY=1;
for i=nV+1:nFiles
jj=iFile(i);
str=[files.path,'/',files.mat{jj}];
[signal1,probab1,info1,signal0,info0,probab0]=getSignal3D(str,posTab,len1);
lenS=length(signal1);
signal=[signal signal1];
probab=[probab probab1];
info=[info info1];
Ysignal{iY}=signal0;
Yprobab{iY}=probab0;
Yinfo{iY}=info0;
iY=iY+1;
aa=1;
end
signalTest=signal';
probabTest=probab';
infoTest=info';
disp("DATA was read");
%%{
% an example of the input data is plotted
% the seismogram number may be modified, if required
len=length(signalTrain);
np=min([2000 round(0.7*len)-1]);
if nV > 0
hF1=plotCube('',signalTrain{np},probabTrain{np});
end
%............................................................
%
% NEW net formalism
%
aa=1;
net = dlnetwork;
% Add Layer Branches
% Add branches to the dlnetwork. Each branch is a linear array of layers.
tempNet = [
inputLayer([4 4 400 3 1],"SSSCB","Name","encoderImageInputLayer")
convolution3dLayer([3 3 3],4,"Name","Encoder-Stage-1-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Encoder-Stage-1-BN-1")
reluLayer("Name","Encoder-Stage-1-ReLU-1")
convolution3dLayer([3 3 3],8,"Name","Encoder-Stage-1-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Encoder-Stage-1-BN-2")
reluLayer("Name","Encoder-Stage-1-ReLU-2")];
net = addLayers(net,tempNet);
tempNet = [
maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-1-MaxPool","Stride",[2 2 2])
convolution3dLayer([3 3 3],8,"Name","Encoder-Stage-2-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Encoder-Stage-2-BN-1")
reluLayer("Name","Encoder-Stage-2-ReLU-1")
convolution3dLayer([3 3 3],16,"Name","Encoder-Stage-2-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Encoder-Stage-2-BN-2")
reluLayer("Name","Encoder-Stage-2-ReLU-2")
dropoutLayer(0.5,"Name","Encoder-Stage-2-DropOut")];
net = addLayers(net,tempNet);
tempNet = [
maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-2-MaxPool","Stride",[2 2 2])
convolution3dLayer([3 3 3],16,"Name","LatentNetwork-Bridge-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","LatentNetworkBridge-BN-1")
reluLayer("Name","LatentNetwork-Bridge-ReLU-1")
convolution3dLayer([3 3 3],32,"Name","LatentNetwork-Bridge-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","LatentNetworkBridge-BN-2")
reluLayer("Name","LatentNetwork-Bridge-ReLU-2")
dropoutLayer(0.5,"Name","LatentNetwork-Bridge-DropOut")
transposedConv3dLayer([2 2 2],32,"Name","Decoder-Stage-1-UpConv","BiasLearnRateFactor",2,"Stride",[2 2 2],"WeightsInitializer","he")
reluLayer("Name","Decoder-Stage-1-UpReLU")];
net = addLayers(net,tempNet);
tempNet = crop3dLayer("Name","encoderDecoderSkipConnectionCrop2");
net = addLayers(net,tempNet);
tempNet = [
concatenationLayer(4,2,"Name","encoderDecoderSkipConnectionFeatureMerge2")
convolution3dLayer([3 3 3],16,"Name","Decoder-Stage-1-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Decoder-Stage-1-BN-1")
reluLayer("Name","Decoder-Stage-1-ReLU-1")
convolution3dLayer([3 3 3],16,"Name","Decoder-Stage-1-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Decoder-Stage-1-BN-2")
reluLayer("Name","Decoder-Stage-1-ReLU-2")
transposedConv3dLayer([2 2 2],16,"Name","Decoder-Stage-2-UpConv","BiasLearnRateFactor",2,"Stride",[2 2 2],"WeightsInitializer","he")
reluLayer("Name","Decoder-Stage-2-UpReLU")];
net = addLayers(net,tempNet);
tempNet = crop3dLayer("Name","encoderDecoderSkipConnectionCrop1");
net = addLayers(net,tempNet);
tempNet = [
concatenationLayer(4,2,"Name","encoderDecoderSkipConnectionFeatureMerge1")
convolution3dLayer([3 3 3],8,"Name","Decoder-Stage-2-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Decoder-Stage-2-BN-1")
reluLayer("Name","Decoder-Stage-2-ReLU-1")
convolution3dLayer([3 3 3],8,"Name","Decoder-Stage-2-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Decoder-Stage-2-BN-2")
reluLayer("Name","Decoder-Stage-2-ReLU-2")
convolution3dLayer([1 1 1],3,"Name","encoderDecoderFinalConvLayer")
softmaxLayer("Name","FinalNetworkSoftmax-Layer")];
net = addLayers(net,tempNet);
% clean up helper variable
clear tempNet;
% Connect Layer Branches
% Connect all the branches of the network to create the network graph.
net = connectLayers(net,"Encoder-Stage-1-ReLU-2","Encoder-Stage-1-MaxPool");
net = connectLayers(net,"Encoder-Stage-1-ReLU-2","encoderDecoderSkipConnectionCrop1/in");
net = connectLayers(net,"Encoder-Stage-2-DropOut","Encoder-Stage-2-MaxPool");
net = connectLayers(net,"Encoder-Stage-2-DropOut","encoderDecoderSkipConnectionCrop2/in");
net = connectLayers(net,"Decoder-Stage-1-UpReLU","encoderDecoderSkipConnectionCrop2/ref");
net = connectLayers(net,"Decoder-Stage-1-UpReLU","encoderDecoderSkipConnectionFeatureMerge2/in2");
net = connectLayers(net,"encoderDecoderSkipConnectionCrop2","encoderDecoderSkipConnectionFeatureMerge2/in1");
net = connectLayers(net,"Decoder-Stage-2-UpReLU","encoderDecoderSkipConnectionCrop1/ref");
net = connectLayers(net,"Decoder-Stage-2-UpReLU","encoderDecoderSkipConnectionFeatureMerge1/in2");
net = connectLayers(net,"encoderDecoderSkipConnectionCrop1","encoderDecoderSkipConnectionFeatureMerge1/in1");
net = initialize(net);
%Plot Layers
figure; hold on
plot(net);
aa=1;
%..........................................................
% training options
sv1=[];
pv1=[];
nn= length(signalValid);
for ii=1:nn
% sv1=[sv1;signalValid{ii}];
% pv1=[pv1;probabValid{ii}];
sv1(1:4,1:4,1:400,1:3,ii)=signalValid{ii};
pv1(1:4,1:4,1:400,1:3,ii)=probabValid{ii};
end
% %
maxEpochs = 10;
miniBatchSize = 20;
options = trainingOptions('sgdm', ...
'MaxEpochs',maxEpochs, ...
'MiniBatchSize',miniBatchSize, ...
'InitialLearnRate',0.0045, ...
'GradientThreshold',0.5, ...
'Shuffle','every-epoch', ...
'Plots','training-progress',...
'ExecutionEnvironment','cpu',...
'ValidationData',{sv1,pv1}, ...
'Metrics',"rmse", ...
'OutputNetwork','best-validation-loss',...
'Verbose',0);
tic
%%{
if train
% % % old ver.
% % netCNN_3dUnetL2 = trainNetwork(signalTrain,probabTrain,layers,options);
% % save(netName,'netCNN_3dUnetL2');
% % disp(' trained Cube-net was saved')
% new version (incl. data reformating).
st1=[];
pt1=[];
nn= length(signalTrain);
for ii=1:nn
st1(1:4,1:4,1:400,1:3,ii)=signalTrain{ii};
pt1(1:4,1:4,1:400,1:3,ii)=probabTrain{ii};
end
netCNN_3dUnetL2 = trainnet(st1,pt1,net,@(st1,pt1) myLoss24a(st1,pt1),options); % DOES NOT WORK !!
% netCNN_3dUnetL2 = trainnet(st1,pt1,net,"mse",options); % IT WORKS !!
end
toc
load(netName);
disp(['net ',netName,' - net was loaded']);
Yplus=1:14;
Yplus=repmat(Yplus',[1 1024]);
nEvT=nFiles-nV; % number of predicted events
predictedProbab=zeros(nEvT,14,1024);
hFp=[];
for i=1:nEvT
YpredP=zeros(14,1024);
Ysig1=Ysignal{i};
Yprobab1=Yprobab{i};
if plot1
if plot1==2, close(hFp); end
hFp=figure('Tag','Onpred'); hold on
end
for ii=1:8 % shift loop
ii1=(ii-1)*78+1;
ii2=ii1+len1-1;
Ysig1s=Ysig1(:,:,ii1:ii2,:);
[Ypred1_3d] = predict(netCNN_3dUnetL2,Ysig1s);
% Cube data decomposition into matrix
for jj=1:14
pos1=posTab(jj,1);
pos2=posTab(jj,2);
tmp=squeeze(Ypred1_3d(pos1,pos2,:,1:3));
tmp=smoothdata(tmp,1);
tmp=tmp(:,2) + (1 -tmp(:,1) - tmp(:,3));
tmp(tmp<0)=0;
tmp0=squeeze(YpredP(jj,ii1:ii2));
YpredP(jj,ii1:ii2)= max([tmp0; tmp']);
if ii == 1
tmp=squeeze(Yprobab1(pos1,pos2,:,1:3));
tmp=tmp(:,2) + (1 -tmp(:,1) - tmp(:,3));
tmp(tmp<0)=0;
YprobabP(jj,:)= tmp;
end
aa=1;
end
if plot1
plot((YpredP+Yplus)');
plot((0.45*YprobabP+Yplus)',':b');
strTit=(['evNo.: ',num2str(i),', 2nd ev.shift: ',num2str(Yinfo{i}.shift)]);
title({' Predicted probab. (full) and Target (dotted)';strTit});
xlabel('samples');
ylabel('channels');
end
aa=1;
end
on0=Yinfo{i}.on0;
on1=Yinfo{i}.on1;
Yinfo1=Yinfo{i};
if plot1
disp(['evNo./2nd ev.shift ',num2str([i Yinfo1.shift])]);
end
predictedProbab(i,1:14,:) = YpredP;
end
return
end
%-------------------------------------------------
function [signal3,probab3,info3,signal0,info0,probab0]=getSignal3D(str,posTab,len1)
%
% read data
%
% input: file_name in 'str'
% output: xx3 - subwindowed signals/probab/infos for training/validation
% xx0 - the whole signals/probab/infos for interpretation (prediction)
%
tmp = load(str);
cNNdatS2 = tmp.cNNdatS2;
sigma=4;
sigma2=800;
x=1:1024;
tmp=cNNdatS2;
tmp.sig=[];
info0=tmp;
info0.pP1=zeros(1,14);
info0.pP2=zeros(1,14);
nSig = 1;
shiftSig1 = 64;
on0=double(cNNdatS2.on0);
on1=double(cNNdatS2.on1);
NN=12; % number of shifts to get sub-seismograms
signal3=num2cell(zeros(1,NN));
probab3=num2cell(zeros(1,NN));
info3=num2cell(zeros(1,NN));
signal0=zeros(4,4,1024,3);
probab0=zeros(4,4,1024,3);
for j=1:NN
shiftSig0 = round(1+rand(1)*shiftSig1/2);
% shiftSig0 = 1;
sig2=zeros(14,len1+1,3);
probab2=zeros(14,len1+1,3);
probab14=zeros(14,1024,3);
for i=1:14
sig1=cNNdatS2.sig(i,:);
% for basic signal
p0=exp(-(x-on0(i)).^2/(2*sigma));
p2=exp(-(x-on0(i)).^2/(2*sigma2));
pi=-p0+1;
p2i=-p2+1;
n0(1:1024)=0;
n0(1:on0(i)-2*sigma-1)=1;
% disp([i j]);
fin1=1024-(on0(i)+2*sigma);
n0(on0(i)-2*sigma : on0(i))=pi(on0(i)-2*sigma:on0(i));
n0(on0(i)+2*sigma:end)=p2i(on0(i):on0(i)+fin1);
e0(1:1024)=1;
e0(1:on0(i)-1)=0;
e0(on0(i):on0(i)+2*sigma)=pi(on0(i) : on0(i)+2*sigma);
e0(on0(i)+2*sigma:end)=p2(on0(i) : on0(i)+fin1);
% for shifted signal
p1=exp(-(x-on1(i)).^2/(2*sigma));
p2=exp(-(x-on1(i)).^2/(2*sigma2));
pi=-p1+1;
p2i=-p2+1;
n1(1:1024)=0;
n1(1:on1(i)-2*sigma-1)=1;
% disp([i j]);
fin1=1024-(on1(i)+2*sigma);
n1(on1(i)-2*sigma : on1(i))=pi(on1(i)-2*sigma:on1(i));
n1(on1(i)+2*sigma:end)=p2i(on1(i):on1(i)+fin1);
e1(1:1024)=1;
e1(1:on1(i)-1)=0;
e1(on1(i):on1(i)+2*sigma)=pi(on1(i) : on1(i)+2*sigma);
e1(on1(i)+2*sigma:end)=p2(on1(i) : on1(i)+fin1);
n=min([n0;n1]);
p=max([p0;p1]);
e=max([e0;e1]);
prb1=[n; p; e];
% % % plot of probabilities (if desired)
% % %
% % aa=1;
% % if info0.nEv0 ==955 && info0.nEv1==1121
% % if i==3 && info0.shift == -120
% %
% % figure; hold on
% % plot(prb1');
% % s=sum(prb1,1);
% % pTot= 1 +p -n -e;
% % plot(pTot,':','LineWidth',1.5);
% % % plot(s,':');
% % sig1=sig1/max(abs(sig1));
% % plot(sig1-1.1)
% % legend(' pN',' pP',' pC',' P',' signal');
% % set(gca,'ylim',[-2.2 2.1]);
% % xlabel('samples');
% % aa=1;
% % end
% % end
sigEnd = (j-1)*shiftSig1+1+shiftSig0 + len1;
if sigEnd > 1024, break, end
for k=1:3
sig2(i,:,k)=sig1((j-1)*shiftSig1+1+shiftSig0:sigEnd);
probab2(i,:,k)=prb1(k,(j-1)*shiftSig1+1+shiftSig0:sigEnd);
probab14(i,:,1:3)=prb1';
end
end
if sigEnd > 1024, break, end
signal1=zeros(4,4,len1,3);
probab1=zeros(4,4,len1,3);
for i=1:16
pos=posTab(i,:);
if i <=14
signal1(pos(1),pos(2),1:len1,1:3)=sig2(i,1:len1,:);
probab1(pos(1),pos(2),1:len1,1:3)=probab2(i,1:len1,:);
end
if i==13
ii=15;
pos=posTab(ii,:);
signal1(pos(1),pos(2),1:len1,1:3)=sig2(i,1:len1,:);
probab1(pos(1),pos(2),1:len1,1:3)=probab2(i,1:len1,:);
elseif i==14
ii=16;
pos=posTab(ii,:);
signal1(pos(1),pos(2),1:len1,1:3)=sig2(i,1:len1,:);
probab1(pos(1),pos(2),1:len1,1:3)=probab2(i,1:len1,:);
end
end
signal3{nSig}=signal1;
probab3{nSig}=probab1;
info1=info0;
info1.on0=info0.on0-(j-1)*shiftSig1+1+shiftSig0;
info1.on1=info0.on1-(j-1)*shiftSig1+1+shiftSig0;
info1.TO0o=info0.TO0o-(j-1)*shiftSig1+1+shiftSig0;
info1.TO1o=info0.TO1o-(j-1)*shiftSig1+1+shiftSig0;
info3{nSig}=info1;
%
% the whole signal
%
if j==1
for i=1:16
pos=posTab(i,:);
if i <=14
signal0(pos(1),pos(2),:,1)=cNNdatS2.sig(i,:);
signal0(pos(1),pos(2),:,2)=cNNdatS2.sig(i,:);
signal0(pos(1),pos(2),:,3)=cNNdatS2.sig(i,:);
probab0(pos(1),pos(2),:,1:3)=probab14(i,:,1:3);
end
if i==13
ii=15;
pos=posTab(ii,:);
signal0(pos(1),pos(2),:,1)=cNNdatS2.sig(i,:);
signal0(pos(1),pos(2),:,2)=cNNdatS2.sig(i,:);
signal0(pos(1),pos(2),:,3)=cNNdatS2.sig(i,:);
probab0(pos(1),pos(2),:,1:3)=probab14(i,:,1:3);
elseif i==14
ii=16;
pos=posTab(ii,:);
signal0(pos(1),pos(2),:,1)=cNNdatS2.sig(i,:);
signal0(pos(1),pos(2),:,2)=cNNdatS2.sig(i,:);
signal0(pos(1),pos(2),:,3)=cNNdatS2.sig(i,:);
probab0(pos(1),pos(2),:,1:3)=probab14(i,:,1:3);
end
end
end
% % % possible plot for prove
% % %
% % % if info0.nEv0==955 && info0.nEv1==1121 && info0.shift== -120
% % % plotCube('',signal3{nSig},probab3{nSig});
% % % aa=1;
% % % end
nSig=nSig+1;
aa=1;
end
signal3(nSig:NN)=[];
probab3(nSig:NN)=[];
info3(nSig:NN)=[];
aa=1;
if info0.TO1o < info0.TO0o
info0a=info0;
info0.evid0=info0a.evId1;
info0.evid1=info0a.evId0;
info0.nEv0=info0a.nEv1;
info0.nEv1=info0a.nEv0;
info0.on0=info0a.on1;
info0.on1=info0a.on0;
info0.Mw0=info0a.Mw1;
info0.Mw1=info0a.Mw0;
info0.TO0o=info0a.TO1o;
info0.TO1o=info0a.TO0o;
info0.vX=[info0a.vX(2,:);info0a.vX(1,:)];
end
end
%===============eof=============================================
In your code, the following line:
netCNN_3dUnetL2 = trainnet(st1,pt1,net,@(st1,pt1) myLoss24a(st1,pt1),options);
needs to be:
netCNN_3dUnetL2 = trainnet(st1,pt1,net,@myLoss24a,options);
Notice that only the name of the loss function is passed, as a function handle with no arguments. MATLAB automatically determines the targets and predictions to pass to the loss function based on the other arguments of the "trainnet" function.
I have attached a working version of the script which has a dummy loss function at the end. Please replace it with your own loss function. Also note that the loss function needs to return a scalar value.
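As a side note (my addition, following the same function-handle pattern): if a loss needs fixed parameters beyond the predictions and targets, they can be captured in an anonymous function that still exposes exactly (Y,T) to trainnet. The weighting parameter alpha and the three-argument variant of myLoss24a below are hypothetical:

```matlab
alpha = 0.5;  % hypothetical extra weighting parameter
% The anonymous function has the required (Y,T) signature; alpha is
% captured from the workspace when the handle is created.
netCNN_3dUnetL2 = trainnet(st1, pt1, net, @(Y,T) myLoss24a(Y,T,alpha), options);
```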
GREAT !! It works, thank you very much.
I would appreciate having such an example in the original MATLAB documentation.
By the way, is there any way to reduce the size (inputs, outputs, connections, ...) of a trained net which is based on the Unet3d architecture (pruning)? But that is probably a new question - I'll think it over after obtaining some results with the current configuration ...
