MATLAB Answers

validation performance or test performance?

Rita on 7 Apr 2016
Commented: Rita on 11 Apr 2016
If one divides the data into 3 subsets (training/validation/test), which of these can be used as the criterion to select the network for regression?
1- validation performance
2- test performance
I really would appreciate any advice.

  2 Comments

Muhammad Usman Saleem on 7 Apr 2016
What kind of data are you doing this for?
Muhammad Usman Saleem on 7 Apr 2016
What is your climate data? Please also tell me its source (e.g., ECMWF), and for which data (missing values or something else) you want to consider validation.
I am asking these questions so I can suggest a better solution.


Accepted Answer

Greg Heath on 8 Apr 2016
I have posted MANY detailed explanations of the separate data-division roles of the training, validation and testing subsets in BOTH the NEWSGROUP and ANSWERS.
Try searching with
greg nomenclature
greg nondesign
greg nontraining

  2 Comments

Rita on 8 Apr 2016
Thanks, Greg, for your precious posts. I have read most of your posts about hidden neurons. I ran the ANN with 10 trials for each of 19 different hidden-neuron counts and ranked them by validation performance; based on your posts, I should take the lowest H with the lowest validation performance.
Just one quick question: do I need to test whether the network's performance did not significantly improve (at the 95% confidence level) beyond, for example, 4 neurons, and therefore select 4 as the best number of hidden neurons? Or is ranking the hidden-neuron counts by validation performance and taking the lowest H enough?
Greg Heath on 9 Apr 2016
Do I need to? Well, it depends on whom you want to impress.
My goals are, typically,
1. Obtain an unbiased estimate of Rsqtst >= 0.99 using as few hidden nodes
as possible.
2. Summarize the design details via a Ntrials vs numhiddennodes
dimensioned matrix of Rsq results for each of the 3 datadivision subsets.
The training results are adjusted for the loss in degrees of freedom via
dividing SSEtrn by Ndof = Ntrneq-Nw (instead of Ntrneq) where
Nw is the number of estimated weights.
Rsqtrn  = 1 - MSEtrn/mean(var(ttrn',1))
Rsqval  = 1 - MSEval/mean(var(tval',1))
Rsqtst  = 1 - MSEtst/mean(var(ttst',1))
Rsqtrna = 1 - MSEtrna/mean(var(ttrn',0))
3. Summarize final results via a 3-colored plot with the best Rsq values
for the trn, val, and tst subsets vs number of hidden nodes.
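The loop Greg describes could be sketched as follows. This is not his exact script, just a minimal MATLAB sketch assuming the Neural Network Toolbox's fitnet; the variables x (inputs), t (targets), Hvec, and Ntrials are hypothetical names.

```matlab
% Sketch: for each hidden-node count H and each random-weight trial,
% train a net and record Rsq for the train/val/test subsets.
Hvec    = 1:19;     % candidate numbers of hidden nodes
Ntrials = 10;       % random initializations per H
Rsqtrn  = zeros(Ntrials, numel(Hvec));
Rsqval  = zeros(Ntrials, numel(Hvec));
Rsqtst  = zeros(Ntrials, numel(Hvec));
for j = 1:numel(Hvec)
    for i = 1:Ntrials
        net = fitnet(Hvec(j));
        [net, tr] = train(net, x, t);   % tr holds the division indices
        e = t - net(x);                 % error on all samples
        % MSE of each subset divided by that subset's mean target variance
        Rsqtrn(i,j) = 1 - mean(mean(e(:,tr.trainInd).^2)) / mean(var(t(:,tr.trainInd)',1));
        Rsqval(i,j) = 1 - mean(mean(e(:,tr.valInd).^2))   / mean(var(t(:,tr.valInd)',1));
        Rsqtst(i,j) = 1 - mean(mean(e(:,tr.testInd).^2))  / mean(var(t(:,tr.testInd)',1));
    end
end
```

Each Rsq matrix is then the Ntrials-by-numhiddennodes summary from point 2; the degrees-of-freedom-adjusted Rsqtrna would additionally divide SSEtrn by Ntrneq-Nw as described above.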
Hope this helps.
Greg


More Answers (2)

Greg Heath on 8 Apr 2016
I have posted MANY detailed explanations of the separate data-division roles of the training, validation and testing subsets in BOTH the NEWSGROUP and ANSWERS.
Try searching with
                    NEWSGROUP   ANSWERS
GREG NOMENCLATURE       5          3
GREG NONDESIGN         51         43
GREG NONTRAINING       93        112
Hope this helps
Thank you for formally accepting my answer
Greg

  2 Comments

Greg Heath on 10 Apr 2016
The focus is on
1. Obtaining the smallest net that can achieve Rsq >= 0.99 on
unseen data that has the same summary characteristics as the
design data.
2. Presenting supporting evidence that verifies, without a
doubt, the qualifications of the net.
I can think of no better way to accomplish the above than
via multiple design results summarized via
a. Four Ntrials by numhidden Rsq matrices
b. Four curves on a plot of maximum Rsq vs numhidden
How could a purchaser of the net be satisfied without supporting evidence from multiple designs?
Hope this helps clarify the need for multiple design results.
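The summary plot in point b could be produced along these lines. A hedged sketch: it assumes Rsq matrices named Rsqtrn, Rsqval, and Rsqtst (Ntrials rows, one column per hidden-node count in a vector Hvec), all hypothetical names from the earlier discussion.

```matlab
% Sketch: plot the best (maximum over trials) Rsq for each subset
% versus the number of hidden nodes, one colored curve per subset.
plot(Hvec, max(Rsqtrn), 'b', ...
     Hvec, max(Rsqval), 'g', ...
     Hvec, max(Rsqtst), 'r');
xlabel('Number of hidden nodes');
ylabel('Best R^2');
legend('train', 'validation', 'test', 'Location', 'southeast');
grid on
```

A fourth curve for the degrees-of-freedom-adjusted training value (Rsqtrna) would be added the same way.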
Greg



Muhammad Usman Saleem on 7 Apr 2016
1- validation performance

  1 Comment

Muhammad Usman Saleem on 7 Apr 2016
If you are doing missing-data imputation for climate data, then you may use this method:
(1) Apply different interpolation methods to your data.
(2) Compare them with performance parameters such as RMS error, AME, and R^2.
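The metrics in step (2) could be computed as follows. A minimal sketch, assuming hypothetical vectors tobs (held-out observed values) and yhat (the interpolated estimates at the same points).

```matlab
% Sketch: score an interpolation against held-out observations.
e    = tobs - yhat;                              % residuals
RMSE = sqrt(mean(e.^2));                         % root-mean-square error
AME  = mean(abs(e));                             % absolute mean (absolute) error
Rsq  = 1 - sum(e.^2)/sum((tobs - mean(tobs)).^2); % coefficient of determination
```

The interpolation with the lowest RMSE/AME and the highest R^2 on the held-out values would then be preferred.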

