Use ResetFcn to delay the agent's behaviour in the environment

I would like to train my RL Agent in an environment which is represented by an FMU block in Simulink.
Unfortunately, whenever a simulation starts I see some brief natural oscillations in the states before the system reaches the steady state that is ideal for training.
I would like to tell my agent to wait for this steady state to be reached every time, before collecting any experience for training.
I know that ResetFcn is called at the beginning of each simulation, but it is usually used to change block parameters before the simulation starts. Is it possible to use it for my specific purpose instead, i.e. to leave a time buffer between the beginning of the simulation and the beginning of my agent's actions?
If this is not possible, are there other suitable ways to overcome this problem?
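For context, my understanding is that ResetFcn is attached to the Simulink environment roughly as in the sketch below (model and block names are placeholders for my actual setup; obsInfo and actInfo are the observation and action specifications):

% Placeholder names for the model and the RL Agent block path
mdl      = 'myModelWithFMU';
agentBlk = [mdl '/RL Agent'];
env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);

% ResetFcn receives a Simulink.SimulationInput object and returns it,
% typically after changing a parameter or initial condition per episode.
env.ResetFcn = @(in) setVariable(in, 'x0', 0.1*randn);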

Accepted Answer

Emmanouil Tzorakoleftherakis
You can place the RL Agent block inside a Triggered Subsystem and set the agent's sample time to -1 (see e.g. here). Then set up the trigger so that the subsystem executes only when it makes sense for your problem, for example after the initial transient has died out.
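A possible sketch of the agent-options side, assuming for illustration a DDPG agent (the Triggered Subsystem itself and the trigger signal, for example a pulse train that is gated off until a chosen settling time has elapsed, are configured in the Simulink model):

% Sketch only: with the RL Agent block placed inside a Triggered Subsystem,
% setting the sample time to -1 lets the agent execute whenever the
% subsystem is triggered instead of at a fixed rate.
agentOpts = rlDDPGAgentOptions('SampleTime', -1);   % DDPG chosen only as an example
agent = rlDDPGAgent(obsInfo, actInfo, agentOpts);   % obsInfo/actInfo from your environment

% In the model, drive the subsystem's trigger port with a signal that only
% starts firing after the initial transient, e.g. a pulse train gated off
% until a settling time you choose has elapsed.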

