During validation, models can exhibit undesirable characteristics or a poor fit to the validation data.
Use the tips in these sections to help improve your model performance. Some data characteristics, such as a low signal-to-noise ratio, time-varying system properties, or nonstationary disturbances, can produce data for which a good model fit is not possible.
A poor fit in the Model Output plot can be the result of an incorrect model order. System identification is largely a trial-and-error process when selecting model structure and model order. Ideally, you want the lowest-order model that adequately captures the system dynamics. High-order models are more expensive to compute and result in greater parameter uncertainty.
Start by estimating the model order as described in Preliminary Step – Estimating Model Orders and Input Delays. Use the suggested order as a starting point to estimate the lowest possible order with different model structures. After each estimation, monitor the Model Output and Residual Analysis plots, and then adjust your settings for the next estimation.
When a low-order model fits the validation data poorly, estimate a higher-order model to see if the fit improves. For example, if the Model Output plot shows that a fourth-order model gives poor results, estimate an eighth-order model. When a higher-order model improves the fit, you can conclude that higher-order linear models are potentially sufficient for your application.
Use an independent data set to validate your models. If you use the same data set for both estimation and validation, the fit always improves as you increase the model order and you risk overfitting. However, if you use an independent data set to validate your model, the fit eventually deteriorates if the model orders are too high.
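As a rough illustration of this effect outside the toolbox, fitting polynomials of increasing order to noisy data plays the same role as increasing model order. The data, orders, and function below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "system": a cubic response observed with measurement noise.
x = np.linspace(-1, 1, 40)
y_est = x**3 - 0.5 * x + rng.normal(0, 0.2, x.size)   # estimation data
y_val = x**3 - 0.5 * x + rng.normal(0, 0.2, x.size)   # independent validation data

def mse(order):
    """Fit a polynomial of the given order to the estimation data and
    return (error on estimation data, error on validation data)."""
    coeffs = np.polyfit(x, y_est, order)
    pred = np.polyval(coeffs, x)
    return np.mean((y_est - pred) ** 2), np.mean((y_val - pred) ** 2)

for order in (1, 3, 6, 12):
    e_est, e_val = mse(order)
    print(f"order {order:2d}: estimation MSE {e_est:.4f}, validation MSE {e_val:.4f}")
```

On the estimation data the error can only decrease as the order grows, because each higher-order model contains the lower-order ones. On the independent validation data the error eventually flattens or worsens, which is the overfitting symptom described above.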
Substantial noise in your system can result in a poor model fit. The presence of such noise is indicated when:
A state-space model produces a better fit than an ARX model. While a state-space structure has sufficient flexibility to model noise, an ARX structure cannot model the noise independently of the system dynamics. The ARX model equation shows that A(q) couples the dynamics and the noise terms by appearing in the denominator of both:

y(t) = [B(q)/A(q)] u(t) + [1/A(q)] e(t)
A residual analysis plot shows significant autocorrelation of residuals at nonzero lags. For more information about residual analysis, see the topics on the Residual Analysis page.
To model noise more carefully, use either the ARMAX or the Box-Jenkins model structure, both of which model the noise and the dynamics using separate polynomials.
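To make the whiteness check concrete, here is a minimal sketch in Python/NumPy of the kind of residual autocorrelation test such a plot is based on. The function and data are invented for illustration and are not toolbox code:

```python
import numpy as np

def residual_autocorrelation(residuals, max_lag=20):
    """Normalized autocorrelation of residuals for lags 0..max_lag, plus the
    approximate 99% confidence bound expected for white residuals."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    n = e.size
    denom = np.dot(e, e)
    acf = np.array([np.dot(e[:n - k], e[k:]) / denom for k in range(max_lag + 1)])
    bound = 2.58 / np.sqrt(n)   # whiteness band for uncorrelated residuals
    return acf, bound

rng = np.random.default_rng(1)

# White residuals (adequate noise model): nonzero lags stay inside the band.
white = rng.normal(size=500)
acf_w, bound = residual_autocorrelation(white)

# Correlated residuals (unmodeled noise dynamics): lag 1 escapes the band.
colored = np.convolve(white, [1, 0.9], mode="same")
acf_c, _ = residual_autocorrelation(colored)

print("white residuals, lags outside band:", np.sum(np.abs(acf_w[1:]) > bound))
print("colored residuals, lags outside band:", np.sum(np.abs(acf_c[1:]) > bound))
```

When several nonzero lags fall outside the band, the noise has structure that the current model does not capture, which is the symptom that motivates moving to ARMAX or Box-Jenkins.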
You can test whether a linear model is unstable by examining the pole-zero plot of the model, as described in Pole and Zero Plots. The stability threshold for pole values differs for discrete-time and continuous-time models, as follows:
For stable continuous-time models, the real part of the pole is less than 0.
For stable discrete-time models, the magnitude of the pole is less than 1.
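These two criteria can be checked directly from the pole locations. A minimal sketch in Python/NumPy, where the helper names are invented for illustration and the poles are obtained as the roots of a denominator polynomial:

```python
import numpy as np

def is_stable_ct(poles):
    """Continuous time: every pole must have a strictly negative real part."""
    return bool(np.all(np.real(poles) < 0))

def is_stable_dt(poles):
    """Discrete time: every pole must lie strictly inside the unit circle."""
    return bool(np.all(np.abs(poles) < 1))

# Poles of a transfer function are the roots of its denominator polynomial.
ct_poles = np.roots([1, 3, 2])        # s^2 + 3s + 2 -> poles at -1 and -2
dt_poles = np.roots([1, -1.5, 0.7])   # z^2 - 1.5z + 0.7 -> |poles| ~ 0.84

print("continuous-time stable:", is_stable_ct(ct_poles))   # True
print("discrete-time stable:", is_stable_dt(dt_poles))     # True
```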
Linear trends in the estimation data can cause the identified linear models to be unstable. However, detrending the data does not guarantee stability.
If your model is unstable, but you believe that your system is stable, you can:
Force stability during estimation — Set the corresponding estimation option to a value that guarantees a stable model. This setting can result in reduced model quality.
Allow for some instability — Set the stability threshold advanced estimation option to allow for a margin of error:

For continuous-time models, set the value of Advanced.StabilityThreshold.s. The model is considered stable if its rightmost pole lies to the left of s.

For discrete-time models, set the value of Advanced.StabilityThreshold.z. The model is considered stable if all of the poles are inside a circle of radius z that is centered at the origin.
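These thresholds amount to relaxed stability tests on the pole locations. A minimal sketch, assuming the pole values are already available; the function names are illustrative, not toolbox API:

```python
import numpy as np

def stable_within_ct_threshold(poles, s=0.0):
    """Continuous time: stable if the rightmost pole lies to the left of s."""
    return bool(np.max(np.real(poles)) < s)

def stable_within_dt_threshold(poles, z=1.0):
    """Discrete time: stable if all poles lie inside a circle of radius z
    centered at the origin."""
    return bool(np.all(np.abs(poles) < z))

# A marginally unstable continuous-time pole at +0.01 passes with margin s = 0.05.
print(stable_within_ct_threshold([-2.0, 0.01], s=0.05))   # True
print(stable_within_ct_threshold([-2.0, 0.01], s=0.0))    # False

# A discrete-time pole of magnitude 1.02 passes with z = 1.05.
print(stable_within_dt_threshold([0.5, 1.02], z=1.05))    # True
print(stable_within_dt_threshold([0.5, 1.02]))            # False
```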
To test if a nonlinear model is unstable, plot the simulated model output on top of the validation data. If the simulated output diverges from measured output, the model is unstable. However, agreement between model output and measured output does not guarantee stability.
In some cases, an unstable model is still useful. For example, if your system is unstable without a controller, you can use your model for control design. In this case, you can import the unstable model into Simulink® or Control System Toolbox™ products.
If modeling noise and trying different model structures and orders still results in a poor fit, try adding more inputs that can affect the output. Inputs do not need to be control signals. Any measurable signal can be considered an input, including measurable disturbances.
Include additional measured signals in your input data, and estimate the model again.
If a linear model shows a poor fit to the validation data, consider whether nonlinear effects are present in the system.
You can model the nonlinearities by performing a simple transformation on the input signals to make the problem linear in the new variables. For example, in a heating process with electrical power as the driving stimulus, you can multiply voltage and current measurements to create a power input signal.
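For the heating example, the transformation is simply an element-wise product of the two measured signals. A minimal sketch with invented values:

```python
import numpy as np

# Measured raw signals (made-up values for illustration).
voltage = np.array([230.0, 231.5, 229.8, 230.4])   # volts
current = np.array([4.1, 4.3, 4.0, 4.2])           # amperes

# Transform: electrical power P = V * I becomes the new single input,
# so the heating dynamics can be modeled as linear in power.
power = voltage * current   # watts
print(power)
```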
If your problem is sufficiently complex and you do not have physical insight into the system, try fitting nonlinear black-box models to your data. For more information, see About Identified Nonlinear Models.
For nonlinear ARX and Hammerstein-Wiener models, the Model Output plot does not show a good fit when the nonlinearity estimator has incorrect complexity.
Specify the complexity of piecewise-linear, wavelet, sigmoid, and custom networks using the NumberOfUnits nonlinear estimator property. A higher number of units indicates a more complex nonlinearity estimator. When using neural networks, specify the complexity using the parameters of the network object. For more information, see the Deep Learning Toolbox™ documentation.
To select the appropriate nonlinearity estimator complexity, first validate the output of a low-complexity model. Next, increase the complexity and validate the output again. The model fit degrades when the nonlinearity estimator becomes too complex. This degradation is visible only if you use independent estimation and validation data sets.
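This select-by-validation loop can be sketched outside the toolbox. In the sketch below, the number of Gaussian units in a simple least-squares fit stands in for NumberOfUnits; all data, names, and settings are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic static nonlinearity observed with noise.
x_est = np.linspace(-3, 3, 120)
x_val = np.linspace(-3, 3, 120) + 0.01
f = lambda x: np.tanh(2 * x)
y_est = f(x_est) + rng.normal(0, 0.1, x_est.size)
y_val = f(x_val) + rng.normal(0, 0.1, x_val.size)

def fit_rbf(n_units):
    """Least-squares fit of n_units Gaussian units; returns validation MSE."""
    centers = np.linspace(-3, 3, n_units)
    design = lambda x: np.exp(-((x[:, None] - centers[None, :]) ** 2))
    w, *_ = np.linalg.lstsq(design(x_est), y_est, rcond=None)
    pred = design(x_val) @ w
    return np.mean((y_val - pred) ** 2)

# Increase complexity step by step and keep the best-validating setting.
errors = {n: fit_rbf(n) for n in (2, 4, 8, 16, 32)}
best = min(errors, key=errors.get)
print("validation MSE by number of units:", errors)
print("selected number of units:", best)
```

Too few units underfit the nonlinearity, and beyond some point extra units stop improving, or start degrading, the validation error; the chosen complexity is the one that validates best.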