Why does my neural network always output 0.05?
Hi,
I'm trying to implement a neural network, replicating the one described in a research paper.
There are 150 input variables, with values in the range [0,1]. The net has 4 hidden layers: the first 3 have 128 nodes each, and the final layer has 20 nodes.
As per the paper, I'm creating the network by sequentially training autoencoders, then stacking them to create the final network.
However, whatever input I provide, every value is mapped to ~0.05 in the final output.
Is this an issue anyone has seen, and are there any suggestions as to why it may be occurring?
Code below, with some random numbers added as input so it will execute.
X = rand(150, 3000);   % 150 input variables, 3000 random samples so the code runs
noNodes = 128;
% Train the first autoencoder on the raw input
ae1 = trainAutoencoder(X, noNodes);
f1 = encode(ae1, X);
% Train each subsequent autoencoder on the features from the previous one
ae2 = trainAutoencoder(f1, noNodes, 'ScaleData', false);
f2 = encode(ae2, f1);
ae3 = trainAutoencoder(f2, noNodes, 'ScaleData', false);
f3 = encode(ae3, f2);
ae4 = trainAutoencoder(f3, 20, 'ScaleData', false);
% Stack the encoders into a single network and run it on the input
dnn = stack(ae1, ae2, ae3, ae4);
Y = dnn(X);
Thanks,
David.
Answers (1)
Vandana Rajan
on 28 Feb 2017
Hi,
Could you please try reducing the value of 'SparsityRegularization' (not 'SparsityProportion'), or switching it off entirely by setting it to 0?
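For example, with your variable names the name-value pair might be passed like this (the 0.1 value is only an illustrative guess, not a recommended setting; the same pair would go on each of the four trainAutoencoder calls):
% Lower the sparsity weighting (the default 'SparsityRegularization' is 1) ...
ae1 = trainAutoencoder(X, noNodes, 'SparsityRegularization', 0.1);
% ... or switch the sparsity term off entirely
ae1 = trainAutoencoder(X, noNodes, 'SparsityRegularization', 0);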
mctaff
on 28 Feb 2017
Vandana Rajan
on 1 Mar 2017
Hi Mctaff,
When training a sparse autoencoder, we optimize a loss function that has two terms:
1) A mean squared error term.
2) A sparsity regulariser which tries to force the average activation value to the value defined by ‘SparsityProportion’.
The parameter ‘SparsityRegularization’ controls the weighting of the second term in the loss function. If it is too high for a particular problem, the training procedure can collapse into a pathological solution where it simply forces the activations to match the value of ‘SparsityProportion’ regardless of the input. That appears to be what is happening with your code: the default ‘SparsityProportion’ is 0.05, which is exactly the value every output is being mapped to.
You can prevent this by reducing the value of ‘SparsityRegularization’, or switching it off entirely by setting it to 0.
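As a rough illustration of why a large weight has this effect: the sparsity term is a Kullback-Leibler divergence between the target mean activation (‘SparsityProportion’) and each hidden unit's observed mean activation. The sketch below is not the toolbox internals, just the standard formula, and shows that the penalty is zero only when the mean activation equals the target:
rho    = 0.05;                      % default 'SparsityProportion'
rhoHat = linspace(0.01, 0.99, 99);  % candidate mean activations of a hidden unit
% KL divergence between the desired and observed mean activation
kl = rho*log(rho./rhoHat) + (1-rho)*log((1-rho)./(1-rhoHat));
% kl is zero only at rhoHat == rho, so a large 'SparsityRegularization'
% weight makes this term dominate the mean squared error and pushes every
% activation (and hence the network output) toward 0.05.
plot(rhoHat, kl); xlabel('mean activation'); ylabel('sparsity penalty');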