PPO convergence guarantee in RL toolbox

Hi,
I am testing my environment using the PPO algorithm in the RL Toolbox. I recently read this paper: https://arxiv.org/abs/2012.01399, which lists some assumptions for the convergence guarantee of PPO. Some of them concern the environment itself (like the transition kernel...), and some concern the functions and parameters of the algorithm (like the learning rate alpha, the update function h...).
I am not sure whether the PPO algorithm in the RL Toolbox satisfies the convergence assumptions on the algorithm's functions and parameters, because I did not find any direct mention of convergence on the official MathWorks website. So I wonder how the algorithm is designed with convergence in mind.
Do I need to look into the train() function to see how those parameters and functions are designed?
Thank you

Accepted Answer

Karan Singh on 17 Jun 2024
Hi Haochen,
The Proximal Policy Optimization algorithm in MATLAB's Reinforcement Learning Toolbox is based on the foundational principles from the original PPO papers by Schulman et al. (2017), as referenced in the documentation (https://www.mathworks.com/help/reinforcement-learning/ug/proximal-policy-optimization-agents.html).
It is designed around the core ideas of that paper, which give convergence-friendly behavior under certain conditions, but the documentation does not state a formal convergence guarantee. In practice the success of PPO, like that of many RL algorithms, hinges on several factors, including hyperparameter settings, the environment's complexity, and implementation details.
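To relate the paper's algorithm-side assumptions to the toolbox, note that the relevant quantities are exposed through the agent options rather than inside train(). Below is a minimal sketch (assuming a recent toolbox release that provides rlPPOAgentOptions and rlOptimizerOptions; the values are illustrative, not recommendations):

```matlab
% Where the algorithm-side parameters surface in the toolbox API.
% LearnRate plays the role of the step size (the "alpha" in the paper).
actorOpts  = rlOptimizerOptions('LearnRate', 1e-4);
criticOpts = rlOptimizerOptions('LearnRate', 1e-3);

agentOpts = rlPPOAgentOptions( ...
    'ClipFactor',            0.2, ...   % PPO clipping parameter (Schulman et al. 2017)
    'EntropyLossWeight',     0.01, ...  % entropy regularization
    'ExperienceHorizon',     512, ...
    'MiniBatchSize',         64, ...
    'DiscountFactor',        0.99, ...
    'ActorOptimizerOptions', actorOpts, ...
    'CriticOptimizerOptions', criticOpts);

% agent = rlPPOAgent(obsInfo, actInfo, initOpts, agentOpts);  % obsInfo/actInfo from your env
```

One observation relevant to your question: the toolbox uses a fixed LearnRate rather than a decaying step-size schedule, so Robbins-Monro-type step-size conditions that convergence analyses typically assume are not automatically satisfied by the default configuration.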
Regarding the source code, the detailed internals of the implementation are not fully accessible, so verifying the paper's assumptions directly against the code may not be possible.

