Is it possible to use Simulink model as prediction model in MPC?

Hi.
I have read the documentation on MPC design, including linear, nonlinear, and other controllers, and checked all the examples. All of them use some special analytical "prediction model" for the MPC Controller block.
But my question is: since there is usually a Simulink plant model created to validate the controller response anyway, why can't this model be used as the prediction model in the MPC controller? The MPC controller would simply run a few Simulink simulations at every time step to calculate the expected plant response for different inputs, and then select the optimal values.
I know this would consume a lot of CPU resources, but it would simplify MPC design a lot, since no separate "data-driven" or linearized models would be required.
One typical example would be a house heater controller. The example at https://www.mathworks.com/help/mpc/ug/use-multistage-mpc-with-neural-state-space-prediction-model-for-house-heating.html uses System Identification Toolbox and a neural state-space model to design the prediction model. But why do so if a Simscape model of the house plant already exists in the simulation and could be used directly by the MPC Controller for prediction?
Am I missing something?
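For concreteness, the scheme described above can be sketched in a few lines. This is only an illustrative Python sketch: a toy first-order house-heating model stands in for the Simulink plant, and everything here (`simulate_plant`, `heat_gain`, `loss_coeff`, the candidate heater levels) is a made-up assumption, not code from any real model or toolbox.

```python
def simulate_plant(temp, heater_power, steps, dt=60.0):
    """Toy plant standing in for a Simulink simulation run:
    room temperature with heat input and loss to the outside."""
    outside, loss_coeff, heat_gain = -5.0, 0.001, 0.05  # illustrative values
    for _ in range(steps):
        temp += dt * (heat_gain * heater_power - loss_coeff * (temp - outside))
    return temp

def mpc_step(current_temp, setpoint, candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Simulate the plant once per candidate heater level over the
    prediction horizon and keep the input with the lowest cost."""
    best_u, best_cost = None, float("inf")
    for u in candidates:
        predicted = simulate_plant(current_temp, u, steps=10)
        cost = (predicted - setpoint) ** 2  # terminal tracking error only
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u  # apply this input, then repeat at the next time step
```

Each control step runs one short simulation per candidate input, which is exactly the "run a few Simulink simulations every time step" idea; the cost of this grows quickly with the number of inputs and candidates.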

4 Comments

   But let's consider building this in reality, under real-world constraints: can you obtain accurate, time-resolved values of the plant's internal state vector? I think it is done this way because the internal state trajectories are hidden and cannot be extracted while the controller is running. Can the prediction model be constructed with just state and disturbance estimators?
Hi!
I wanted to clarify one thing based on your comment about the MPC controller simply running a few Simulink simulations every time step to calculate the expected plant response with different inputs and then selecting the optimized values. If I am reading this correctly, what you are describing is a brute-force way to find the optimal solution at each time step: you basically try out a few different inputs and pick the one that leads to the smallest cost. That said, the majority of solvers in Model Predictive Control Toolbox rely on transcription methods to solve the optimization problem. So are you looking for a) an MPC implementation that optimizes using this brute-force approach (using the Simulink model for prediction), or b) the ability to use the Simulink model for prediction as part of the optimization routines that the MPC Toolbox currently uses?
Apologies if this is confusing, but there is a subtle difference, and knowing which one you mean helps me bump up the request with development.
Hi. I would not call this "brute force"; it is more like a "Monte Carlo simulation" approach. As you know, the Monte Carlo method runs thousands of simulations with randomized inputs to obtain a set of probable outputs. In the Simulink MPC case, only the manipulated variables and measured disturbances would need to be randomized, or varied by some optimization algorithm, until the best solution is found at a particular time step as the result of those simulations.
And I'm asking about method a), i.e. executing multiple Simulink simulations at each time step to predict the plant behaviour.
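The Monte Carlo variant described here is often called "random shooting" in sampling-based MPC: sample many random input sequences over the horizon, score each one by simulating the plant, and apply only the first input of the best sequence. A minimal Python sketch, assuming a toy one-line plant in place of a real Simulink simulation call (`plant_step`, the cost weights, and the sampling range are all illustrative):

```python
import random

def plant_step(x, u):
    # toy scalar dynamics standing in for one Simulink simulation step
    return 0.9 * x + 0.5 * u

def random_shooting_step(x0, setpoint, horizon=5, samples=200, seed=0):
    """Sample random input sequences, simulate each, keep the cheapest."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    best_seq, best_cost = None, float("inf")
    for _ in range(samples):
        seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        x, cost = x0, 0.0
        for u in seq:
            x = plant_step(x, u)
            cost += (x - setpoint) ** 2 + 0.01 * u ** 2  # tracking + effort
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]  # receding horizon: apply first input, then re-plan
```

The per-step cost is `samples × horizon` plant simulations, which is where the parallel-hardware argument later in the thread comes in.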


Answers (1)

Hi Pavel,
This is a really good question. The main issue here is that many people using Model Predictive Control Toolbox use it to design controllers that end up on an embedded system. In other words, people design and simulate the controller in Simulink, but the final product is generated C code running on a hard real-time embedded system. Hard real-time means that the controller computation must finish within the allocated amount of compute time on the embedded microcontroller.
What you are asking about makes the most sense for a nonlinear plant model (a linear one is relatively easy to deal with: you can linearize the Simulink model and use the resulting state-space model as the internal prediction model for your linear MPC). For a complex, nonlinear plant model in Simulink, it is of course very desirable to use it directly as the prediction model for nonlinear MPC, as you mention. The problem is how to do this in embedded software running on a resource-constrained microcontroller. The example you mention offers a practical approach: create a fast, AI-based reduced-order model that executes quickly on the embedded system and also provides analytical Jacobians (very important for speeding up the computation).
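The linear route mentioned above can be made concrete. Assuming the Simulink model has already been linearized to a discrete state-space model (the `A` and `B` values below are made-up placeholders, not from any real plant), the unconstrained finite-horizon problem reduces to plain matrix algebra, with no repeated simulation at run time:

```python
import numpy as np

A = np.array([[0.95]])   # assumed linearized plant: x+ = A x + B u
B = np.array([[0.1]])

def linear_mpc(x0, setpoint, horizon=10):
    """Stack x_k = A^k x0 + sum_j A^(k-1-j) B u_j over the horizon and
    solve a least-squares problem for the tracking input sequence."""
    n, m = A.shape[0], B.shape[1]
    # Phi maps the initial state to the stacked predictions;
    # Gamma maps the stacked input sequence to the stacked predictions.
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    Gamma = np.zeros((horizon * n, horizon * m))
    for k in range(horizon):
        for j in range(k + 1):
            Gamma[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    ref = np.full((horizon * n, 1), float(setpoint))
    # unconstrained least-squares solve (real MPC adds weights and constraints)
    U, *_ = np.linalg.lstsq(Gamma, ref - Phi @ np.array([[x0]]), rcond=None)
    return float(U[0, 0])  # receding horizon: apply the first input
```

This is why the linear case is cheap on an embedded target: the prediction matrices are computed once offline, and each control step is a small, fixed-size solve.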
Hope this helps.
Arkadiy

1 Comment

Thank you for the answer. I fully understand the real-time constraint. But today, in the age of multicore processors, GPUs, and FPGAs, I see a big opportunity in embedded systems. E.g. with Simulink you can generate C, GPU, or FPGA code for almost any model, including the plant: that is your prediction model! So if you run a multicore processor or an FPGA in your system, you can easily launch many simulations, even in parallel.
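As a sketch of this parallel idea: the candidate simulations are independent of each other, so they can be farmed out to separate workers. This is purely illustrative Python; a thread pool is used for brevity, while the real speedups argued for here would come from process pools, GPU kernels, or FPGA pipelines running compiled plant code. The toy plant and cost are made-up assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_cost(args):
    """Stand-in for one independent plant simulation plus cost evaluation."""
    x, u, setpoint = args
    for _ in range(20):
        x = 0.9 * x + 0.5 * u   # toy dynamics in place of a Simulink run
    return (x - setpoint) ** 2, u

def parallel_mpc_step(x0, setpoint, candidates, workers=4):
    """Evaluate all candidate inputs concurrently, return the cheapest."""
    jobs = [(x0, u, setpoint) for u in candidates]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulate_cost, jobs))
    return min(results)[1]  # input with the lowest simulated cost
```

The same fan-out/fan-in pattern applies regardless of whether the workers are CPU cores, GPU threads, or replicated FPGA pipelines.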


Asked on 26 Dec 2025
