Tuning ExperienceHorizon hyperparameter for PPO agent (Reinforcement Learning)
Hello everyone,
I'm trying to train a PPO agent, and I would like to change the value of the ExperienceHorizon hyperparameter (Options for PPO agent - MATLAB - MathWorks Switzerland).
When I try a value other than the default, the agent waits for the end of the episode to update its policy. For example, ExperienceHorizon = 1024 doesn't work for me, despite the episode length being more than 1024 steps. I'm also not using parallel training.
I also get the same issue if I change the MiniBatchSize from its default value.
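For reference, a minimal sketch of the kind of setup in question (the environment and agent construction are omitted, and the specific numbers are illustrative; ExperienceHorizon and MiniBatchSize are properties of rlPPOAgentOptions, while MaxStepsPerEpisode and UseParallel belong to rlTrainingOptions):

```matlab
% Illustrative option settings only; agent/environment creation omitted.
agentOpts = rlPPOAgentOptions( ...
    "ExperienceHorizon", 1024, ...  % non-default value that triggers the issue
    "MiniBatchSize", 128);

trainOpts = rlTrainingOptions( ...
    "MaxEpisodes", 500, ...
    "MaxStepsPerEpisode", 2000, ... % episodes run well beyond 1024 steps
    "UseParallel", false);          % parallel training disabled
```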
Is there anything I've missed about this parameter?
More info on the PPO algorithm: Proximal Policy Optimization (PPO) Agents - MATLAB & Simulink - MathWorks Switzerland
If anyone could help, that would be very nice!
Thanks a lot in advance,
Nicolas
Accepted Answer
Alan
on 1 Aug 2024 (edited 1 Aug 2024)
Hi Nicolas,
I could not figure out how to record the episode or step index at which the agent's policy is updated, so I could not verify the behaviour of the various combinations of options.
From my understanding, the following could cause the policy to be updated later than expected:
- During the training phase, none of the episodes may have reached two ExperienceHorizons' worth of steps before hitting a termination condition, in which case a policy update can combine steps from different episodes.
- The MaxStepsPerEpisode parameter in the training options could be less than ExperienceHorizon, causing the episode to terminate before the horizon is reached. By training options I mean the ones that are passed to the train() function via an argument list, or via an rlTrainingOptions object (https://www.mathworks.com/help/releases/R2023b/reinforcement-learning/ref/rl.option.rltrainingoptions.html).
- The MiniBatchSize parameter defines the size of the chunks that the experience buffer is divided into before running an epoch of training on the policy network. If ExperienceHorizon is less than MiniBatchSize, it could cause issues. So, ensure that ExperienceHorizon is a multiple of MiniBatchSize.
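As a sketch of a configuration consistent with these constraints (the numbers are illustrative, not taken from your setup): pick MiniBatchSize so that it divides ExperienceHorizon evenly, and keep MaxStepsPerEpisode at least as large as ExperienceHorizon so that an update can trigger mid-episode:

```matlab
% Illustrative values only: ExperienceHorizon is an exact multiple of
% MiniBatchSize, and MaxStepsPerEpisode >= ExperienceHorizon.
agentOpts = rlPPOAgentOptions( ...
    "ExperienceHorizon", 1024, ...
    "MiniBatchSize", 128);          % 1024 / 128 = 8 mini-batches per update

trainOpts = rlTrainingOptions( ...
    "MaxStepsPerEpisode", 2048);    % room for two horizons per episode
```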
I hope this helped.
-Alan
More Answers (0)