Create, model, and analyze credit scorecards as follows.
1. Create a creditscorecard object.
Use creditscorecard to create a
creditscorecard object for credit
scorecard analysis by specifying “training” data in
table format. The training data, sometimes called the modeling view,
is the result of multiple data preparation tasks (see About Credit Scorecards) that must be
performed before creating a
creditscorecard object.
You can use optional input arguments for
creditscorecard to specify
scorecard properties such as the response variable and the
GoodLabel. Perform some initial data exploration when the
creditscorecard object is created, although data
analysis is usually done in combination with data binning (see step 2). For
more information and examples, see
creditscorecard and step 1 in
Case Study for a Credit Scorecard Analysis.
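As a sketch, this step looks like the following. The CreditCardData.mat data set and its CustID and status variables come from the toolbox's shipped examples; substitute your own modeling view and variable names.

```matlab
% Load example training data: a table "data" with customer attributes,
% an ID column, and a binary response "status" (0 = good, 1 = bad).
load CreditCardData.mat

% Create the creditscorecard object. 'IdVar' excludes the ID column
% from the predictors; 'GoodLabel' identifies which response value
% represents "good" customers.
sc = creditscorecard(data, 'IdVar','CustID', ...
    'ResponseVar','status', 'GoodLabel',0);

% Initial exploration: summary of predictors and scorecard properties.
disp(sc)
```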
2. Bin the data.
Perform manual or automatic binning of the data loaded into
the creditscorecard object.
A common starting point is to apply automatic binning to all
or selected variables using
autobinning, review the resulting bins using
bininfo, and visualize
bin information with respect to bin counts and statistics or association
measures such as Weight of Evidence (WOE) using
plotbins. The bins can be modified or
fine-tuned either manually using
modifybins, or
with a different automatic binning algorithm using
autobinning. Bins that show a close-to-linear
trend in the WOE are frequently desired in the credit scorecard context.
For more information and examples, see
autobinning and step 2 in
Case Study for a Credit Scorecard Analysis.
Alternatively, with Risk Management Toolbox™, you can use the Binning Explorer app to bin data interactively, applying a binning algorithm and modifying the resulting bins on the spot. For more information, see Binning Explorer.
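A sketch of a typical binning pass, continuing from the creditscorecard object sc created in step 1. The predictor name CustAge and the cut points are illustrative, taken from the toolbox's example data.

```matlab
% Automatic binning of all predictors (default 'Monotone' algorithm).
sc = autobinning(sc);

% Rebin one predictor with a different automatic algorithm.
sc = autobinning(sc, 'CustAge', 'Algorithm','EqualFrequency');

% Inspect bin counts, WOE, and association measures for that predictor.
bi = bininfo(sc, 'CustAge');
disp(bi)

% Visualize the bins and check for a close-to-linear WOE trend.
plotbins(sc, 'CustAge')

% Fine-tune the bins manually by setting explicit cut points.
sc = modifybins(sc, 'CustAge', 'CutPoints',[30 40 50 60]);
```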
3. Fit a logistic regression model.
Fit a logistic regression model to the WOE data from the
creditscorecard object. The
fitmodel function internally
bins the training data, transforms it into WOE values, maps the response
variable so that “Good” is 1,
and fits a linear logistic regression model. By default,
fitmodel uses
a stepwise procedure to determine which predictors should be in the
model, but optional input arguments can also be used, for example,
to fit a full model. For more information and examples, see
fitmodel and step 3 in Case Study for a Credit Scorecard Analysis.
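For example (a sketch; the 'VariableSelection' option shown in the comment is one way to request a full model instead of the default stepwise fit):

```matlab
% Stepwise logistic regression on the WOE-transformed training data.
% Returns the updated scorecard and the fitted linear model.
[sc, mdl] = fitmodel(sc);
disp(mdl)

% To skip stepwise selection and fit all predictors instead:
% [sc, mdl] = fitmodel(sc, 'VariableSelection','FullModel');
```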
4. Review and format credit scorecard points.
After fitting the logistic model, use
displaypoints to
summarize the scorecard points. By default, the points are unscaled
and come directly from the combination of Weight of Evidence (WOE)
values and model coefficients.
The formatpoints function lets you control scaling and rounding of scorecard points. For more
information and examples, see
formatpoints and step 4 in Case Study for a Credit Scorecard Analysis.
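A sketch of reviewing and scaling the points. The 500/2/50 points-odds-PDO scaling is an illustrative choice, not a default.

```matlab
% Unscaled points, straight from the WOE values and model coefficients.
pointsTable = displaypoints(sc);
disp(pointsTable)

% Scale the scorecard: 500 points at odds of 2, with 50 points to
% double the odds (PDO), rounding every point to an integer.
sc = formatpoints(sc, 'PointsOddsAndPDO',[500 2 50], ...
    'Round','AllPoints');
disp(displaypoints(sc))
```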
5. Score the data.
The score function computes
the scores for the training data.
An optional data input can also be passed to
score, for example, validation data.
The points per predictor for each customer are also provided as an
optional output. For more information and examples, see
score and step 5 in Case Study for a Credit Scorecard Analysis.
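For example (a sketch; dataValidation is a hypothetical table with the same predictor variables as the training set):

```matlab
% Scores for the training data, plus per-predictor points per customer
% as an optional second output.
[TrainScores, TrainPoints] = score(sc);

% Scores for other data, e.g. a hypothetical validation table:
% ValScores = score(sc, dataValidation);
```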
6. Calculate the probability of default for credit scorecard scores.
Use probdefault to calculate the probability of default for the training data.
In addition, you can compute the probability of default for a different
data set (for example, a validation data set) using the
probdefault function. For more information
and examples, see
probdefault and
step 6 in Case Study for a Credit Scorecard Analysis.
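A sketch of this step, again using a hypothetical dataValidation table for the out-of-sample case:

```matlab
% Probability of default for each observation in the training data.
pdTrain = probdefault(sc);

% Probability of default for a different data set:
% pdVal = probdefault(sc, dataValidation);
```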
7. Validate the credit scorecard model.
Use validatemodel to validate the quality of the credit scorecard model.
You can obtain the Cumulative Accuracy Profile (CAP), Receiver
Operating Characteristic (ROC), and Kolmogorov-Smirnov (KS) plots
and statistics for a given data set using the validatemodel function.
For more information and examples, see
validatemodel and
step 7 in Case Study for a Credit Scorecard Analysis.
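A sketch of validating on the training data (dataValidation in the comment is, as above, a hypothetical table of held-out observations):

```matlab
% Validation statistics plus CAP, ROC, and KS plots for the training
% data. Stats holds measures such as the accuracy ratio and the KS
% statistic; T lists the scores and cumulative rates per threshold.
[Stats, T] = validatemodel(sc, 'Plot',{'CAP','ROC','KS'});
disp(Stats)

% To validate on other data, pass it as the second input:
% Stats = validatemodel(sc, dataValidation, 'Plot','ROC');
```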
For an example of this workflow, see Case Study for a Credit Scorecard Analysis.