resubLoss
Resubstitution classification loss for multiclass error-correcting output codes (ECOC) model
Description
L = resubLoss(Mdl) returns the classification loss by resubstitution (L) for the
multiclass error-correcting output codes (ECOC) model Mdl using the
training data stored in Mdl.X and the corresponding class labels stored
in Mdl.Y. By default, resubLoss uses the classification error to compute L.
The classification loss (L) is a generalization or resubstitution
quality measure. Its interpretation depends on the loss function and weighting scheme, but
in general, better classifiers yield smaller classification loss values.
L = resubLoss(Mdl,Name,Value) returns the classification loss with additional options specified by one or more name-value arguments. For example, you can specify the loss function, decoding scheme, and verbosity level.
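For instance, assuming Mdl is a trained ClassificationECOC model, a call such as the following computes the resubstitution loss with loss-based decoding and diagnostic messages (a sketch; the option values are illustrative):

% Resubstitution loss with loss-based decoding and diagnostic messages
L = resubLoss(Mdl,'Decoding','lossbased','Verbose',1);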
Examples
Compute the resubstitution loss for an ECOC model with SVM binary learners.
Load Fisher's iris data set. Specify the predictor data X and the response data Y.
load fisheriris
X = meas;
Y = species;
Train an ECOC model using SVM binary classifiers. Standardize the predictors using an SVM template, and specify the class order.
t = templateSVM('Standardize',true);
classOrder = unique(Y)
classOrder = 3×1 cell
{'setosa' }
{'versicolor'}
{'virginica' }
Mdl = fitcecoc(X,Y,'Learners',t,'ClassNames',classOrder);
t is an SVM template object. During training, the software uses default values for empty properties in t. Mdl is a ClassificationECOC model.
Estimate the resubstitution classification error, which is the default classification loss.
L = resubLoss(Mdl)
L = 0.0267
The ECOC model misclassifies 2.67% of the training-sample irises.
Determine the quality of an ECOC model by using a custom loss function that considers the minimal binary loss for each observation.
Load Fisher's iris data set. Specify the predictor data X, the response data Y, and the order of the classes in Y.
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y) % Class order
classOrder = 3×1 categorical
setosa
versicolor
virginica
rng(1); % For reproducibility
Train an ECOC model using SVM binary classifiers. Standardize the predictors using an SVM template, and specify the class order.
t = templateSVM('Standardize',true);
Mdl = fitcecoc(X,Y,'Learners',t,'ClassNames',classOrder);
t is an SVM template object. During training, the software uses default values for empty properties in t. Mdl is a ClassificationECOC model.
Create a function that takes the minimal loss for each observation, then averages the minimal losses for all observations. S corresponds to the NegLoss output of resubPredict.
lossfun = @(~,S,~,~)mean(min(-S,[],2));
Compute the custom classification loss for the training data.
resubLoss(Mdl,'LossFun',lossfun)
ans = 0.0097
The average minimal binary loss for the training data is 0.0097.
Input Arguments
Full, trained multiclass ECOC model, specified as a ClassificationECOC model trained with fitcecoc.
Name-Value Arguments
Specify optional pairs of arguments as
Name1=Value1,...,NameN=ValueN, where Name is
the argument name and Value is the corresponding value.
Name-value arguments must appear after other arguments, but the order of the
pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose
Name in quotes.
Example: resubLoss(Mdl,'BinaryLoss','hamming','LossFun',@lossfun)
specifies 'hamming' as the binary learner loss function and the custom
function handle @lossfun as the overall loss function.
Binary learner loss function, specified as a built-in loss function name or function handle.
This table describes the built-in functions, where yj is the class label for a particular binary learner (in the set {–1,1,0}), sj is the score for observation j, and g(yj,sj) is the binary loss formula.
| Value | Description | Score Domain | g(yj,sj) |
|---|---|---|---|
| "binodeviance" | Binomial deviance | (–∞,∞) | log[1 + exp(–2yjsj)]/[2log(2)] |
| "exponential" | Exponential | (–∞,∞) | exp(–yjsj)/2 |
| "hamming" | Hamming | [0,1] or (–∞,∞) | [1 – sign(yjsj)]/2 |
| "hinge" | Hinge | (–∞,∞) | max(0,1 – yjsj)/2 |
| "linear" | Linear | (–∞,∞) | (1 – yjsj)/2 |
| "logit" | Logistic | (–∞,∞) | log[1 + exp(–yjsj)]/[2log(2)] |
| "quadratic" | Quadratic | [0,1] | [1 – yj(2sj – 1)]²/2 |

The software normalizes binary losses so that the loss is 0.5 when yj = 0. Also, the software calculates the mean binary loss for each class [1].
For a custom binary loss function, for example customFunction, specify its function handle BinaryLoss=@customFunction. customFunction has this form:

bLoss = customFunction(M,s)

where:

- M is the K-by-B coding matrix stored in Mdl.CodingMatrix.
- s is the 1-by-B row vector of classification scores.
- bLoss is the classification loss. This scalar aggregates the binary losses for every learner in a particular class. For example, you can use the mean binary loss to aggregate the loss over the learners for each class.
- K is the number of classes.
- B is the number of binary learners.
For an example of passing a custom binary loss function, see Predict Test-Sample Labels of ECOC Model Using Custom Binary Loss Function.
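For instance, here is a minimal sketch of such a handle, assuming Mdl is a trained ClassificationECOC model (the name linearBL is illustrative). It applies the linear binary loss (1 – yjsj)/2 and takes the mean over the binary learners for each class (that is, over each row of M), ignoring NaN entries.

% Linear binary loss averaged over the binary learners for each class
linearBL = @(M,s) mean((1 - M.*s)/2, 2, 'omitnan');
L = resubLoss(Mdl,'BinaryLoss',linearBL)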
This table identifies the default BinaryLoss value, which depends on the
score ranges returned by the binary learners.
| Assumption | Default Value |
|---|---|
| All binary learners are any of the following: classification decision trees, discriminant analysis classifiers, k-nearest neighbor classifiers, linear or kernel classification models of logistic regression learners, or naive Bayes classifiers. | "quadratic" |
| All binary learners are SVMs or linear or kernel classification models of SVM learners. | "hinge" |
| All binary learners are ensembles trained by AdaBoostM1 or GentleBoost. | "exponential" |
| All binary learners are ensembles trained by LogitBoost. | "binodeviance" |
| You specify to predict class posterior probabilities by setting FitPosterior=true in fitcecoc. | "quadratic" |
| Binary learners are heterogeneous and use different loss functions. | "hamming" |
To check the default value, use dot notation to display the BinaryLoss property of the trained model at the command line.
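For instance, for the SVM-based model trained in the examples above, the following displays the stored default binary loss ("hinge", per the table):

Mdl.BinaryLoss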
Example: BinaryLoss="binodeviance"
Data Types: char | string | function_handle
Decoding scheme that aggregates the binary losses, specified as
"lossweighted" or "lossbased". For more
information, see Binary Loss.
Example: Decoding="lossbased"
Data Types: char | string
Loss function, specified as 'classiferror', 'classifcost', or a function handle.

- Specify the built-in function 'classiferror'. In this case, the loss function is the classification error, which is the proportion of misclassified observations.
- Specify the built-in function 'classifcost'. In this case, the loss function is the observed misclassification cost. If you use the default cost matrix (whose element value is 0 for correct classification and 1 for incorrect classification), then the loss values for 'classifcost' and 'classiferror' are identical.
- Or, specify your own function using function handle notation.

Assume that n = size(X,1) is the sample size and K is the number of classes. Your function must have the signature lossvalue = lossfun(C,S,W,Cost), where:

- The output argument lossvalue is a scalar.
- You specify the function name (lossfun).
- C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in Mdl.ClassNames. Construct C by setting C(p,q) = 1 if observation p is in class q, for each row. Set all other elements of row p to 0.
- S is an n-by-K numeric matrix of negated loss values for the classes. Each row corresponds to an observation. The column order corresponds to the class order in Mdl.ClassNames. The input S resembles the output argument NegLoss of resubPredict.
- W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes its elements to sum to 1.
- Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.

Specify your function using 'LossFun',@lossfun.
Data Types: char | string | function_handle
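As an illustration, here is a minimal sketch of a custom loss function with this signature (the name weightedError is hypothetical). It recovers the predicted and true classes from S and C and returns the weighted misclassification rate, which reproduces 'classiferror'.

function lossvalue = weightedError(C,S,W,~)
    % Predicted class: column of S (negated losses) with the largest value
    [~,predIdx] = max(S,[],2);
    % True class: column of the logical matrix C that equals 1 in each row
    [~,trueIdx] = max(C,[],2);
    % Weighted proportion of misclassified observations
    lossvalue = sum(W .* (predIdx ~= trueIdx)) / sum(W);
end

Pass the function as resubLoss(Mdl,'LossFun',@weightedError).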
Estimation options, specified as a structure array as returned by statset.
To invoke parallel computing, you need a Parallel Computing Toolbox™ license.
Example: Options=statset(UseParallel=true)
Data Types: struct
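For instance, a minimal sketch that evaluates the resubstitution loss in parallel, assuming a Parallel Computing Toolbox license is available:

opts = statset('UseParallel',true);
L = resubLoss(Mdl,'Options',opts);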
Verbosity level, specified as 0 or 1.
Verbose controls the number of diagnostic messages that the
software displays in the Command Window.
If Verbose is 0, then the software does not display
diagnostic messages. Otherwise, the software displays diagnostic messages.
Example: Verbose=1
Data Types: single | double
More About
The classification error has the form

$$L = \sum_{j=1}^{n} w_j e_j,$$

where:
wj is the weight for observation j. The software renormalizes the weights to sum to 1.
ej = 1 if the predicted class of observation j differs from its true class, and 0 otherwise.
In other words, the classification error is the proportion of observations misclassified by the classifier.
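As a check, you can reproduce the default loss from the resubstitution predictions. A sketch, assuming equal observation weights and labels stored as a cell array of character vectors, as in the first example:

predLabels = resubPredict(Mdl);                 % predicted labels for the training data
manualErr = mean(~strcmp(predLabels, Mdl.Y));   % proportion misclassified
L = resubLoss(Mdl);                             % default 'classiferror' loss, matches manualErr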
The observed misclassification cost has the form

$$L = \sum_{j=1}^{n} w_j c_{y_j \hat{y}_j},$$

where:

wj is the weight for observation j. The software renormalizes the weights to sum to 1.

$c_{y_j \hat{y}_j}$ is the user-specified cost of classifying an observation into class $\hat{y}_j$ when its true class is yj.
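For example, a quick check of this relationship for a model that uses the default cost matrix:

% With the default cost matrix, 'classifcost' and 'classiferror' agree
Lcost = resubLoss(Mdl,'LossFun','classifcost');
Lerr = resubLoss(Mdl,'LossFun','classiferror');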
The binary loss is a function of the class and classification score that determines how well a binary learner classifies an observation into the class. The decoding scheme of an ECOC model specifies how the software aggregates the binary losses and determines the predicted class for each observation.
Assume the following:
mkj is element (k,j) of the coding design matrix M—that is, the code corresponding to class k of binary learner j. M is a K-by-B matrix, where K is the number of classes, and B is the number of binary learners.
sj is the score of binary learner j for an observation.
g is the binary loss function.
$\hat{k}$ is the predicted class for the observation.
The software supports two decoding schemes:

Loss-based decoding [2] (Decoding is "lossbased") — The predicted class of an observation corresponds to the class that produces the minimum average of the binary losses over all binary learners:

$$\hat{k} = \operatorname*{argmin}_{k} \frac{1}{B}\sum_{j=1}^{B} \left| m_{kj} \right| g(m_{kj}, s_j).$$

Loss-weighted decoding [3] (Decoding is "lossweighted") — The predicted class of an observation corresponds to the class that produces the minimum average of the binary losses over the binary learners for the corresponding class:

$$\hat{k} = \operatorname*{argmin}_{k} \frac{\sum_{j=1}^{B} \left| m_{kj} \right| g(m_{kj}, s_j)}{\sum_{j=1}^{B} \left| m_{kj} \right|}.$$

The denominator corresponds to the number of binary learners for class k. [1] suggests that loss-weighted decoding improves classification accuracy by keeping loss values for all classes in the same dynamic range.
The predict, resubPredict, and
kfoldPredict functions return the negated value of the objective
function of argmin as the second output argument
(NegLoss) for each observation and class.
This table summarizes the supported binary loss functions, where yj is a class label for a particular binary learner (in the set {–1,1,0}), sj is the score for observation j, and g(yj,sj) is the binary loss function.
| Value | Description | Score Domain | g(yj,sj) |
|---|---|---|---|
"binodeviance" | Binomial deviance | (–∞,∞) | log[1 + exp(–2yjsj)]/[2log(2)] |
"exponential" | Exponential | (–∞,∞) | exp(–yjsj)/2 |
"hamming" | Hamming | [0,1] or (–∞,∞) | [1 – sign(yjsj)]/2 |
"hinge" | Hinge | (–∞,∞) | max(0,1 – yjsj)/2 |
"linear" | Linear | (–∞,∞) | (1 – yjsj)/2 |
"logit" | Logistic | (–∞,∞) | log[1 + exp(–yjsj)]/[2log(2)] |
"quadratic" | Quadratic | [0,1] | [1 – yj(2sj – 1)]2/2 |
The software normalizes binary losses so that the loss is 0.5 when yj = 0, and aggregates using the average of the binary learners [1].
Do not confuse the binary loss with the overall classification loss (specified by the
LossFun name-value argument of the resubLoss and
resubPredict object functions), which measures how well an ECOC
classifier performs as a whole.
References
[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.
[2] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recog. Lett. Vol. 30, Issue 3, 2009, pp. 285–297.
[3] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.
Extended Capabilities
To run in parallel, specify the Options name-value argument in the call to
this function and set the UseParallel field of the
options structure to true using
statset:
Options=statset(UseParallel=true)
For more information about parallel computing, see Run MATLAB Functions with Automatic Parallel Support (Parallel Computing Toolbox).
This function fully supports GPU arrays. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2014b
See Also
ClassificationECOC | loss | predict | resubPredict | fitcecoc