RL Water Tank example by MATLAB does not converge

Alp on 8 Nov 2025 at 4:24
Commented: Alp on 12 Nov 2025 at 18:28
I am following the RL water tank control tutorial by MATLAB: https://www.mathworks.com/help/reinforcement-learning/ug/control-water-level-using-ddpg-agent.html (MATLAB R2025b)
However, even though the model is learning at the beginning of training, towards the end the Q0 value explodes and the reward drops from near its maximum to below zero. I need stable, good results with the official DDPG water tank example to use as a baseline in my research, so I would prefer not to modify the network hyperparameters, the reward function, or the stopping criteria.
Is anyone able to reproduce good results using the given RL water tank example? Or is it expected that it is not stable in its default configuration?
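For reference, this is roughly how I run the training. It is only a minimal sketch: env, agent and trainOpts are created exactly by the example script, and the only addition is a fixed random seed so that separate runs are comparable.

% Minimal sketch of how I run the example. env, agent and trainOpts come
% from the unmodified example script; only the seed is fixed here.
rng(0)                                            % make separate runs comparable
doTraining = true;                                % same flag the example script uses
if doTraining
    trainingStats = train(agent,env,trainOpts);   % DDPG training as in the example
else
    load("WaterTankDDPG.mat","agent")             % pretrained agent shipped with the example
end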
Here are my results:
And this is the start of the training, before the Q value explodes:
Thank you.

Accepted Answer

sneha on 12 Nov 2025 at 9:52
Hello,
Yes, this behaviour is normal. The official DDPG water tank example is mainly designed to demonstrate the workflow; long-term stability is not guaranteed. The Q-value explosion and reward drop are caused by stochastic exploration, critic overestimation, and the function-approximation limits of standard DDPG, so results can vary across runs even with the official setup. You can treat the provided pretrained agent (WaterTankDDPG.mat) as the validated baseline for comparison. It is acceptable for the model to become unstable in the default configuration, since consistent stability was not the example's primary goal.
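For example, here is a minimal sketch of using the pretrained agent as the baseline instead of a freshly trained one. It assumes env is the rlwatertank Simulink environment set up as in the example, and MaxSteps = 200 corresponds to the example's Tf/Ts (200 s sample horizon at 1 s sample time).

load("WaterTankDDPG.mat","agent")            % pretrained agent shipped with the example
simOpts = rlSimulationOptions(MaxSteps=200); % 200 steps, i.e. Tf/Ts in the example
experience = sim(env,agent,simOpts);         % closed-loop run of the water tank model
totalReward = sum(experience.Reward.Data)    % scalar score for the baseline run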
You can refer to https://www.mathworks.com/help/reinforcement-learning/ug/ddpg-agents.html to learn more about the DDPG training algorithm and the actor and critic used by DDPG agents.
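If you want to look at the critic estimate that is exploding, you can also query the agent's critic directly. This is only a sketch: the observation and action values below are arbitrary placeholders, and the observation layout follows the example's [integrated error; error; measured height] specification.

critic = getCritic(agent);        % Q-value function used by the DDPG agent
obs = {[0; 0; 10]};               % placeholder observation (3-by-1, as in the example)
act = {0};                        % placeholder flow-rate action
q0  = getValue(critic,obs,act)    % critic's Q estimate for this state-action pair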

More Answers (0)
