It sounds like you want to define a custom loss function that depends on the input data X that you feed to the network. As you noticed, this is tricky because the custom loss function passed to trainnet can only receive the network outputs Y and the targets T; it does not directly receive the inputs X. In other words, the custom loss function must have the signature @(Y,T)myCustomLossFunction(Y,T).
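To make the constraint concrete, here is a minimal sketch of the standard pattern (XTrain, TTrain, layers, and options are placeholder names for your own data and setup):

```matlab
% The loss function only ever sees the predictions Y and the targets T:
lossFcn = @(Y,T) mean((Y - T).^2, "all");   % e.g. a plain MSE loss
% net = trainnet(XTrain, TTrain, layers, lossFcn, options);
% Note there is no slot in lossFcn for passing the inputs X.
```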
To be able to use the inputs X in your custom loss function, you can, as a workaround, pack X into the targets T and then unpack it inside the loss function. For example, you could define the targets T as follows:
X = myInputData;
actualTargets = myTargetData;
T = [actualTargets; X];
You can then unpack them in your custom loss function as follows:
function loss = myCustomLossFunction(Y, T)
    actualTargets = T(1,:);
    X = T(2,:);
    % ... compute the loss from Y, actualTargets, and X ...
end
Here is a full example that trains a simple multi-layer perceptron to predict a value that minimizes a weighted sum of the deviation from the target T = 2*X + 0.5 and the deviation from the input X itself. If inputWeight is 0.5, the network learns to predict a line exactly halfway between the input and the target; if you increase inputWeight to 0.9, the predicted line moves much closer to the input X.
numSamples = 1000;
inputSize = 1;
numHiddenUnits = 16;
outputSize = 1;
inputWeight = 0.5;   % try 0.9 to pull the predictions toward the input

X = dlarray(rand(1,numSamples), "CB");
actualTargets = 2 * X + 0.5;
targets = [actualTargets; X];   % pack the inputs into the targets

layers = [
    featureInputLayer(inputSize)
    fullyConnectedLayer(numHiddenUnits)
    reluLayer
    fullyConnectedLayer(outputSize)];

opts = trainingOptions("adam", MiniBatchSize=50, MaxEpochs=20, Verbose=false);
net = trainnet(X, targets, layers, @(Y,T) customLossFcn(Y,T,inputWeight), opts);

XSorted = sort(X, 2);
TSorted = 2 * XSorted + 0.5;
Y = predict(net, XSorted);

plot(extractdata(XSorted), extractdata(XSorted)); hold on
plot(extractdata(XSorted), extractdata(TSorted));
plot(extractdata(XSorted), extractdata(Y)); hold off
legend('Input', 'Target', 'Predicted');

function loss = customLossFcn(Y, T, inputWeight)
    % Unpack the actual targets and the inputs from T
    actualTargets = T(1,:);
    X = T(2,:);
    deviationFromTarget = mean((Y - actualTargets).^2, "all");
    deviationFromInput = mean((Y - X).^2, "all");
    loss = (1-inputWeight) * deviationFromTarget + inputWeight * deviationFromInput;
end
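If your targets or inputs have more than one row (i.e. more than one feature), the same trick works as long as you record how many rows belong to the targets. A hedged sketch of that generalization (numTargetRows is an assumed helper variable, not part of the original answer):

```matlab
% Pack multi-row targets and inputs along the channel dimension:
numTargetRows = size(actualTargets, 1);
T = [actualTargets; X];

% Inside the loss function, split T back apart by row counts:
% actualTargets = T(1:numTargetRows, :);
% X = T(numTargetRows+1:end, :);
```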