Class: ClassificationNaiveBayes
Classification loss for naive Bayes classifiers by resubstitution

L = resubLoss(Mdl,Name,Value) returns the in-sample classification loss with additional options specified by one or more Name,Value pair arguments.
Mdl — Fully trained naive Bayes classifier
ClassificationNaiveBayes model

A fully trained naive Bayes classifier, specified as a ClassificationNaiveBayes model trained by fitcnb.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
'LossFun' — Loss function
'classiferror' (default) | 'binodeviance' | 'exponential' | 'hinge' | 'logit' | 'mincost' | 'quadratic' | function handle

Loss function, specified as the comma-separated pair consisting of 'LossFun' and a built-in loss-function name or a function handle.

The following table lists the available loss functions. Specify one using its corresponding character vector or string scalar.
Value | Description |
---|---|
'binodeviance' | Binomial deviance |
'classiferror' | Classification error |
'exponential' | Exponential |
'hinge' | Hinge |
'logit' | Logistic |
'mincost' | Minimal expected misclassification cost (for classification scores that are posterior probabilities) |
'quadratic' | Quadratic |
'mincost' is appropriate for classification scores that are posterior probabilities. Naive Bayes models return posterior probabilities as classification scores by default (see predict).
Specify your own function using function handle notation.

Suppose that n is the number of observations in X and K is the number of distinct classes (numel(Mdl.ClassNames), where Mdl is the input model). Your function must have this signature:

lossvalue = lossfun(C,S,W,Cost)

The output argument lossvalue is a scalar.

You choose the function name (lossfun).

C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in Mdl.ClassNames. Construct C by setting C(p,q) = 1 if observation p is in class q, for each row. Set all other elements of row p to 0.

S is an n-by-K numeric matrix of classification scores. The column order corresponds to the class order in Mdl.ClassNames. S is a matrix of classification scores, similar to the output of predict.

W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.

Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.

Specify your function using 'LossFun',@lossfun.
For more details on loss functions, see Classification Loss.
Data Types: char | string | function_handle
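As an illustrative sketch (not part of the original reference page), a handle-based loss that reproduces the behavior of the built-in 'classiferror' loss might look like the following; the function name myClassifError is hypothetical.

```matlab
% Hypothetical custom loss equivalent to 'classiferror': the weighted
% fraction of observations whose maximal score is not in the true class.
function lossvalue = myClassifError(C,S,W,~)   % Cost is unused here
    [~,pred]  = max(S,[],2);                   % predicted class index per row
    [~,truth] = max(C,[],2);                   % true class index per row
    lossvalue = sum(W(pred ~= truth));         % W is already normalized to sum to 1
end
```

You would then pass it as L = resubLoss(Mdl,'LossFun',@myClassifError).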
L — Classification loss

Classification loss, returned as a scalar. L is a generalization or resubstitution quality measure. Its interpretation depends on the loss function and weighting scheme, but, in general, better classifiers yield smaller loss values.
Load Fisher's iris data set.
load fisheriris
X = meas;    % Predictors
Y = species; % Response

Train a naive Bayes classifier. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label.
Mdl = fitcnb(X,Y,'ClassNames',{'setosa','versicolor','virginica'});
Mdl
is a trained ClassificationNaiveBayes
classifier.
Estimate the default resubstitution loss, which is the in-sample minimum misclassification cost.
L = resubLoss(Mdl)
L = 0.0400
The average, in-sample cost of classification is 0.04.
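As a hedged sketch, the minimal expected misclassification cost can also be assembled by hand from the third output of resubPredict, assuming Mdl from above and equal observation weights:

```matlab
% Sketch: weighted minimal expected misclassification cost by hand.
[~,~,Cost] = resubPredict(Mdl);       % n-by-K expected misclassification costs
n = size(Cost,1);
w = ones(n,1)/n;                      % equal weights normalized to sum to 1
L_manual = sum(w .* min(Cost,[],2));  % L = sum_j w_j * c_j
```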
Load Fisher's iris data set.
load fisheriris
X = meas;    % Predictors
Y = species; % Response

Train a naive Bayes classifier. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label.
Mdl = fitcnb(X,Y,'ClassNames',{'setosa','versicolor','virginica'});
Mdl
is a trained ClassificationNaiveBayes
classifier.
Estimate the in-sample proportion of misclassified observations.
L = resubLoss(Mdl,'LossFun','classiferror')
L = 0.0400
The naive Bayes classifier misclassifies 4% of the training observations.
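Other built-in losses can be requested the same way. For instance (a sketch, continuing with Mdl from above):

```matlab
% In-sample logit and quadratic losses for the same trained model.
Llogit = resubLoss(Mdl,'LossFun','logit');
Lquad  = resubLoss(Mdl,'LossFun','quadratic');
```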
Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
Consider the following scenario.
L is the weighted average classification loss.
n is the sample size.
For binary classification:
y_{j} is the observed class label. The software codes it as –1 or 1, indicating the negative or positive class, respectively.
f(X_{j}) is the raw classification score for observation (row) j of the predictor data X.
m_{j} = y_{j}f(X_{j}) is the classification score for classifying observation j into the class corresponding to y_{j}. Positive values of m_{j} indicate correct classification and do not contribute much to the average loss. Negative values of m_{j} indicate incorrect classification and contribute significantly to the average loss.
For algorithms that support multiclass classification (that is, K ≥ 3):
y_{j}^{*}
is a vector of K – 1 zeros, with 1 in the
position corresponding to the true, observed class
y_{j}. For example,
if the true class of the second observation is the third class and
K = 4, then
y^{*}_{2}
= [0 0 1 0]′. The order of the classes corresponds to the order in
the ClassNames
property of the input
model.
f(X_{j})
is the length K vector of class scores for
observation j of the predictor data
X. The order of the scores corresponds to the
order of the classes in the ClassNames
property
of the input model.
m_{j} = y_{j}^{*}′f(X_{j}). Therefore, m_{j} is the scalar classification score that the model predicts for the true, observed class.
The weight for observation j is w_{j}. The software normalizes the observation weights so that they sum to the corresponding prior class probability. The software also normalizes the prior probabilities so they sum to 1. Therefore,
$$\sum _{j=1}^{n}{w}_{j}=1.$$
Given this scenario, the following table describes the supported loss
functions that you can specify by using the 'LossFun'
name-value pair
argument.
Loss Function | Value of LossFun | Equation |
---|---|---|
Binomial deviance | 'binodeviance' | $$L={\displaystyle \sum _{j=1}^{n}{w}_{j}\mathrm{log}\left\{1+\mathrm{exp}\left[-2{m}_{j}\right]\right\}}.$$ |
Exponential loss | 'exponential' | $$L={\displaystyle \sum _{j=1}^{n}{w}_{j}\mathrm{exp}\left(-{m}_{j}\right)}.$$ |
Classification error | 'classiferror' | $$L={\displaystyle \sum _{j=1}^{n}{w}_{j}}I\left\{{\widehat{y}}_{j}\ne {y}_{j}\right\}.$$ It is the weighted fraction of misclassified observations where $${\widehat{y}}_{j}$$ is the class label corresponding to the class with the maximal posterior probability. I{x} is the indicator function. |
Hinge loss | 'hinge' | $$L={\displaystyle \sum}_{j=1}^{n}{w}_{j}\mathrm{max}\left\{0,1-{m}_{j}\right\}.$$ |
Logit loss | 'logit' | $$L={\displaystyle \sum _{j=1}^{n}{w}_{j}\mathrm{log}\left(1+\mathrm{exp}\left(-{m}_{j}\right)\right)}.$$ |
Minimal cost | 'mincost' | Minimal expected misclassification cost, computed over observations j = 1,...,n, where c_{j} is the minimal expected misclassification cost for observation j. The weighted, average, minimum cost loss is $$L={\displaystyle \sum _{j=1}^{n}{w}_{j}{c}_{j}}.$$ |
Quadratic loss | 'quadratic' | $$L={\displaystyle \sum _{j=1}^{n}{w}_{j}{\left(1-{m}_{j}\right)}^{2}}.$$ |
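To make the table concrete, here is a sketch of computing m_{j} and the quadratic loss directly from resubstitution scores. It assumes a trained model Mdl with training labels Y (as in the examples above) and uses implicit expansion (R2016b or later):

```matlab
% Sketch: m_j = y_j*' f(X_j) and the quadratic loss, computed by hand.
[~,S] = resubPredict(Mdl);                    % n-by-K posterior scores f(X_j)
Ystar = categorical(Y) == categorical(Mdl.ClassNames)'; % n-by-K indicator y_j*
m = sum(double(Ystar) .* S, 2);               % scalar score for the true class
n = numel(m);
w = ones(n,1)/n;                              % equal weights summing to 1
Lquad = sum(w .* (1 - m).^2);                 % quadratic loss from the table
```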
This figure compares the loss functions (except 'mincost') for one observation over m. Some functions are normalized to pass through [0,1].
The posterior probability is the probability that an observation belongs in a particular class, given the data.
For naive Bayes, the posterior probability that a classification is k for a given observation (x_{1},...,x_{P}) is
$$\widehat{P}\left(Y=k|{x}_{1},\mathrm{...},{x}_{P}\right)=\frac{P\left({X}_{1},\mathrm{...},{X}_{P}|y=k\right)\pi \left(Y=k\right)}{P\left({X}_{1},\mathrm{...},{X}_{P}\right)},$$
where:
$$P\left({X}_{1},\mathrm{...},{X}_{P}|y=k\right)$$ is the conditional joint density of the predictors given they are in class k. Mdl.DistributionNames stores the distribution names of the predictors.
π(Y = k) is the class prior probability distribution. Mdl.Prior stores the prior distribution.
$$P\left({X}_{1},\mathrm{...},{X}_{P}\right)$$ is the joint density of the predictors. The classes are discrete, so $$P({X}_{1},\mathrm{...},{X}_{P})={\displaystyle \sum _{k=1}^{K}P}({X}_{1},\mathrm{...},{X}_{P}|y=k)\pi (Y=k).$$
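These posteriors are what the model returns as classification scores. As a quick sketch (assuming Mdl and X from the examples above):

```matlab
% Each row of Posterior holds Phat(Y = k | x_1,...,x_P) over the K classes
% in Mdl.ClassNames, so every row sums to 1 (up to numerical precision).
[label,Posterior] = predict(Mdl,X);
rowSums = sum(Posterior,2);
```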
The prior probability of a class is the believed relative frequency with which observations from that class occur in a population.
See Also: ClassificationNaiveBayes | CompactClassificationNaiveBayes | fitcnb | loss | predict | resubPredict