Simulink interruption during RL training
Hey everyone,
Anyone who has used reinforcement learning (RL) to train against physical models in Simulink knows that during the early phase of training, random exploration often triggers assertions or other instabilities that cause Simulink to crash or the simulation to diverge. This makes it very difficult to use the official train function from MathWorks, because once Simulink crashes, all accumulated RL experience (the replay buffer) is lost, which essentially forces you to restart training from scratch each time.
So far, the only workaround I’ve found is to wrap the training process in an external try-catch block. When a failure occurs, I save the current agent parameters and load them again at the start of the next training run. But as many of you know, this slows down training by 100x or more.
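In case it helps others, this is roughly what my wrapper looks like (a minimal sketch only; the checkpoint file name, restart budget, and training options are illustrative, and it assumes agent and env already exist):

% Outer wrapper: restart training after a Simulink crash, reloading the
% last saved agent parameters. The replay buffer is still lost on each restart.
checkpointFile = "agentCheckpoint.mat";   % illustrative file name
maxRestarts = 50;                         % illustrative restart budget
trainOpts = rlTrainingOptions(MaxEpisodes=5000, MaxStepsPerEpisode=500);

for k = 1:maxRestarts
    if isfile(checkpointFile)
        s = load(checkpointFile, "agent");
        agent = s.agent;                  % resume from the last stable parameters
    end
    try
        train(agent, env, trainOpts);     % agents are handle objects; updated in place
        break                             % reached MaxEpisodes without a crash
    catch err
        warning("Training interrupted: %s", err.message);
        save(checkpointFile, "agent");    % keep the latest parameters
    end
end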
Alternatively, one could pre-train on a simpler case and then fine-tune on the full model, but that’s not always feasible.
Has anyone discovered a better way to handle this?
Accepted Answer
Jaimin
on 1 Apr 2025
Hi @gerway
To address instabilities that may lead to crashes or divergence in Simulink, I can recommend a few strategies:
- Implement custom error handling directly within the Simulink model, in addition to the external try-catch block. For example, add blocks that detect when the model is approaching an unstable state, so you can end the episode cleanly or adjust parameters before a crash occurs (see the sketch at the end of this answer).
- Use Simulink Test to create test cases that identify and rectify scenarios that lead to instability before you run RL training.
- Start training with a simplified version of your model and gradually increase its complexity, so the agent learns stable behaviours before tackling the full model.
- Regularly save checkpoints of the agent's parameters and replay buffer during training, so you can resume from the last stable state rather than starting over.
- Use surrogate models or simplified representations of your Simulink model for initial training; once the agent has learned a stable policy, transfer it to the full model.
Most importantly, I recommend implementing a custom training loop, as it gives you complete control over the training process: how episodes are managed, how often checkpoints are saved, and how errors are handled. It also gives you the flexibility to add stability checks, dynamically adjust exploration parameters, and integrate domain-specific knowledge; a minimal sketch of one such loop follows below.
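As a coarse-grained illustration (a sketch only, not the only way to structure a custom loop; it assumes agent and env already exist, and the chunk size and file name are illustrative), you can train in short chunks and checkpoint between them, so a crash costs at most one chunk of progress:

% Train in short chunks and checkpoint between them, so a Simulink
% crash loses at most one chunk of episodes instead of the whole run.
chunkEpisodes = 25;    % illustrative chunk size
numChunks = 200;       % illustrative overall budget
chunkOpts = rlTrainingOptions( ...
    MaxEpisodes = chunkEpisodes, ...
    MaxStepsPerEpisode = 500, ...
    Plots = "none");   % suppress the training-progress window for repeated calls

for c = 1:numChunks
    try
        train(agent, env, chunkOpts);           % agents are handles; updated in place
        save("lastGoodAgent.mat", "agent");     % checkpoint after a clean chunk
        % Optional: run stability checks here, or adjust the agent's
        % exploration settings before the next chunk (agent-specific).
    catch err
        warning("Chunk %d failed: %s", c, err.message);
        if isfile("lastGoodAgent.mat")
            s = load("lastGoodAgent.mat", "agent");
            agent = s.agent;                    % fall back to the last stable parameters
        end
    end
end

If you use an off-policy agent whose options class supports it, also consider setting agent.AgentOptions.SaveExperienceBufferWithAgent = true before saving, so the replay buffer is stored along with the agent rather than lost on every restart.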
Kindly refer to the following links for additional information.
Custom training loop: https://www.mathworks.com/matlabcentral/answers/1673039-setting-up-a-pause-resume-reinforcement-learning-training-using-rl-toolbox
Curriculum Learning: https://www.mathworks.com/help/reinforcement-learning/ug/train-ppo-for-lka-using-curriculum-learning.html
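As an example of the in-model error handling mentioned above, one common pattern is to compute the isdone input of the RL Agent block from a guard condition, so the episode terminates cleanly instead of tripping an assertion. A sketch of a MATLAB Function block for this (the state signal x and the bound 1e3 are illustrative; choose the envelope from your model's physics):

function isdone = stopIfUnstable(x)
% MATLAB Function block feeding the RL Agent block's isdone input.
% Ends the episode cleanly when the state leaves a safe envelope,
% instead of letting a downstream assertion halt the simulation.
isdone = any(abs(x) > 1e3) || any(~isfinite(x));
end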
I hope you find this helpful.