The Simulink® Design Verifier™ software may encounter some of these problems when analyzing a large model:
Unsatisfiable objectives — The software proved that no test cases exercise these test objectives, so it did not generate test cases for them.
Undecided objectives — The software was not able to satisfy or falsify these objectives.
Objectives with errors — This problem usually occurs when a model component uses nonlinear arithmetic, which can affect a test objective.
Cannot complete the analysis in the time allotted — This problem may indicate an area of your model where the software encountered problems, or you may need to increase the value of the Maximum analysis time parameter.
Analysis hangs — If the number of objectives processed remains constant for a considerable length of time, the software has likely encountered complexity in the model or its objectives.
Does not achieve a high percentage of model coverage — When you run the test cases on the harness model, the percentage of model coverage is insufficient for your design.
The next few sections describe the initial steps to take when analyzing a large model. Although these steps address test generation, you can use a similar approach when detecting design errors or proving properties in a model.
You can use the Test Generation Advisor to summarize test generation compatibility, condition and decision objectives, and dead logic for the model and model components.
The Test Generation Advisor performs a high-level analysis and fast dead logic detection. You can use the results to better understand your model, particularly large models, complex models, or models whose compatibility with Simulink Design Verifier you are unsure of. For example, you can:
Identify incompatibilities with test case generation.
Identify complex components that might be time-consuming to analyze.
Determine instances of dead logic.
Get a summary of the component hierarchy.
Get recommended test generation parameters.
To access the Test Generation Advisor, on the Design Verifier tab, in the Mode section, click Test Generation. In the Prepare section, click Advisor. For more information, see Use Test Generation Advisor to Identify Analyzable Components.
When you generate test cases, you should generally begin by analyzing the model using the Simulink Design Verifier default parameter values:
Check to see if your model is compatible with Simulink Design Verifier, as described in Check Model Compatibility.
Using the default parameter values, analyze the model. The following table lists parameters in the Configuration Parameters dialog box that you might change when analyzing large models, along with their default behavior.

| Parameter | Default behavior |
| --- | --- |
| Maximum analysis time (s) | If the analysis does not finish within the specified time, the analysis times out and terminates. |
| Test suite optimization | Generates test cases that each address more than one test objective. |
| Model coverage objectives | Generates test cases that achieve condition and decision coverage. |
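You can also perform these first two steps from the MATLAB command line. A minimal sketch, in which the model name 'myModel' is a placeholder, checks compatibility and starts a test-generation analysis with default option values:

```matlab
% Sketch: check compatibility, then run a default test-generation analysis.
% 'myModel' is a placeholder model name; replace it with your own.
open_system('myModel');
sldvcompat('myModel');                    % check model compatibility

opts = sldvoptions;                       % default analysis options
opts.Mode = 'TestGeneration';             % generate test cases
opts.MaxProcessTime = 300;                % Maximum analysis time (s)
[status, files] = sldvrun('myModel', opts, true);  % true displays the log window
```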
Review the following information in the Simulink Design Verifier log window while the analysis runs:
Number of objectives processed — How many objectives were processed? Did the analysis hang after processing a certain number of objectives? The answers to these questions might give you a clue about where a problem might lie.
Number of objectives satisfied/Number of objectives falsified — Which objectives were falsified?
Time elapsed — Did the analysis time out, or did it finish within the specified maximum analysis time?
When the analysis completes, you can highlight the results in the model and individually review the analysis of each model object, as described in Highlighted Results on the Model. You can also generate and review the Simulink Design Verifier HTML report. This report contains links to the model elements for satisfied and falsified objectives so you can see what portions of the model might have problems. For more information, see Simulink Design Verifier Reports.
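If you prefer to work from the command line, you can request the highlighting and the report programmatically as well. A minimal sketch, assuming the analysis saved its results to a data file named 'myModel_sldvdata.mat' (a placeholder name):

```matlab
% Sketch: highlight analysis results on the model and generate the
% HTML report from a saved Simulink Design Verifier data file.
% The model and file names below are placeholders.
sldvhighlight('myModel', 'myModel_sldvdata.mat');  % highlight results on the model
sldvreport('myModel_sldvdata.mat');                % generate the analysis report
```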
For a test-generation analysis, if all the test objectives have been satisfied, run the test cases on the harness model to determine model coverage.
If model coverage is enough for your design, you do not need to do anything else. If the coverage is insufficient, take additional steps to improve the analysis performance, as described in the following sections.
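One way to measure that coverage, sketched here under the assumption that the analysis saved its results to 'myModel_sldvdata.mat' (a placeholder name), is to run the generated test cases with coverage collection enabled:

```matlab
% Sketch: execute the generated test cases and collect model coverage.
% Requires Simulink Coverage; model and file names are placeholders.
runOpts = sldvruntestopts;
runOpts.coverageEnabled = true;           % collect coverage while simulating
[outData, covData] = sldvruntest('myModel', 'myModel_sldvdata.mat', runOpts);
cvhtml('coverageReport', covData);        % write an HTML coverage report
```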
A large percentage of falsified objectives and poor model coverage often indicate that you need to change model parameter values to get complete coverage. This can occur when you have tunable parameters in Constant blocks that are connected to enabled subsystems or to the trigger inputs of Switch blocks. In these situations, configure Simulink Design Verifier parameter support as described in the example Specify Parameter Constraint Values for Full Coverage.
If the analysis satisfied most but not all of the objectives, try the following steps:
Increase the Maximum analysis time parameter. This gives the analysis more time to satisfy all the objectives.
Set the Model coverage objectives parameter to Decision. Selecting this option generates only test cases that achieve decision coverage. These test cases are a subset of those generated for condition and decision coverage.
Rerun the analysis and review the report.
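From the command line, you can make the same two changes on an sldvoptions object; a minimal sketch in which the time limit and model name are placeholders:

```matlab
% Sketch: lengthen the analysis and restrict it to decision objectives.
% The time limit and model name below are placeholder values.
opts = sldvoptions;
opts.MaxProcessTime = 600;                 % longer Maximum analysis time (s)
opts.ModelCoverageObjectives = 'Decision'; % decision coverage only
sldvrun('myModel', opts);                  % rerun the analysis
```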
If the results are still not satisfactory, try the techniques described in the following sections.
Set the Test suite optimization parameter to LargeModel or LargeModel (Nonlinear Extended), and rerun the Simulink Design Verifier analysis.

The large model optimization strategies are designed for large, complex models. The LargeModel (Nonlinear Extended) strategy includes improved support for nonlinear arithmetic. These two strategies may or may not improve the results of your analysis enough to fully test your design.
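Programmatically, this corresponds to setting the TestSuiteOptimization property; a minimal sketch (the model name is a placeholder):

```matlab
% Sketch: select a large-model optimization strategy and rerun the analysis.
% 'myModel' is a placeholder model name.
opts = sldvoptions;
opts.TestSuiteOptimization = 'LargeModel (Nonlinear Extended)';
sldvrun('myModel', opts);
```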
If you have outstanding objectives you want the software to generate, continue with the following techniques.
Watch the Objectives processed value in the log window. If about 50 percent of the time specified by the Maximum analysis time parameter has elapsed and this value has not increased, the analysis may be having trouble processing certain objectives. If the analysis does not progress, take the following steps:
Click Stop in the log window.
A dialog box appears, informing you that the analysis was aborted and asking you if you still want to produce results.
Click Yes to save the results of the analysis so far.
The log window lists the following options, depending on which analysis mode you ran:
Highlight analysis results on model
Generate detailed analysis report
Create harness model
Simulate tests and produce a model coverage report
Click Generate detailed analysis report.
In the HTML report, review the following sections to identify the model elements that are causing problems:
Objectives Undecided when the Analysis was Stopped
Objectives Producing Errors
Review the model elements that have undecided objectives or objectives with errors to see if any of the following problems are present. Consult the respective documentation for specific techniques to improve the analysis.