
Is it possible to realize self-supervised RL by adding an auxiliary loss to the critic loss of a PPO agent?

I am trying to realize self-supervised (SS) RL in MATLAB using a PPO agent. SS RL can improve exploration and thereby speed up convergence. In particular, it works as follows:
  1. At step t, in addition to the original critic head that outputs the value via fullyConnectedLayer(1), there is an additional head, parallel to the original one and connected to the main body of the critic, which outputs a prediction of the future state, denoted by ŝ_{t+1}, via fullyConnectedLayer(N), with N being the dimension of the state.
  2. Then, this predicted future state is used to calculate the SS loss by comparing it with the real future state, i.e., L_SS = ||ŝ_{t+1} − s_{t+1}||², where s_{t+1} is the real future state.
  3. Later, this SS loss is averaged over the sampled mini-batch and added to the original critic loss L_critic(θ), i.e., equation 5-b in https://ww2.mathworks.cn/help/reinforcement-learning/ug/proximal-policy-optimization-agents.html, as
L(θ) = L_critic(θ) + λ · L_SS(θ),
which requires adding an auxiliary loss term to the original critic loss.
So, is it possible to realize the above SS RL while avoiding significant modification of the RL Toolbox source code? Thank you!
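The two-head critic described in steps 1–3 could be sketched as below. This is a minimal sketch, not a Toolbox-supported critic: the observation dimension obsDim, the body size, and the layer names ("fc1", "value", "nextStatePred") are all illustrative assumptions.

```matlab
% Two-head critic: a shared body, the original scalar value head, and a
% parallel head that predicts the next state (same dimension as the state).
obsDim = 4;                                   % hypothetical state dimension
body = [
    featureInputLayer(obsDim, Name="obs")
    fullyConnectedLayer(64, Name="fc1")
    reluLayer(Name="relu1")];
lgraph = layerGraph(body);

% Original value head: V(s) via fullyConnectedLayer(1)
lgraph = addLayers(lgraph, fullyConnectedLayer(1, Name="value"));
lgraph = connectLayers(lgraph, "relu1", "value");

% Auxiliary head: predicted next state via fullyConnectedLayer(obsDim)
lgraph = addLayers(lgraph, fullyConnectedLayer(obsDim, Name="nextStatePred"));
lgraph = connectLayers(lgraph, "relu1", "nextStatePred");

criticNet = dlnetwork(lgraph);
```

Note that rlValueFunction expects a single-output network, which is one reason a two-head critic like this points toward a custom training loop rather than the built-in rlPPOAgent training.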

Answers (1)

Ronit
Ronit on 13 Aug 2024 at 8:10
Hi Gavid,
Yes, it is possible to implement self-supervised (SS) RL with a PPO agent in MATLAB and add an auxiliary loss to the critic's loss function. You can achieve this with some customization, without significantly modifying the toolbox source code. Here's an approach:
  1. Extend the Critic Network: Add an additional output layer to the critic network to predict the future state.
  2. Compute the Self-Supervised Loss: Calculate the SS loss based on the predicted future state and the actual future state (as mentioned in point 2 of the question).
  3. Modify the Loss Function: Integrate the SS loss into the original critic loss function (as mentioned in point 3 of the question). To achieve this, you need to customize the training loop. This involves defining a custom loss function and updating the critic network parameters accordingly.
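The three steps above could be sketched with a custom loss function evaluated via dlfeval inside the training loop. This is an illustrative sketch, not Toolbox API: criticSSLoss, lambda, and the output names "value" and "nextStatePred" are assumptions, and criticNet is assumed to be a dlnetwork with those two outputs.

```matlab
% Combined critic loss: PPO value loss plus weighted SS next-state loss.
% obs, targetValues, and nextObs are dlarray mini-batches; lambda weights
% the auxiliary term.
function [loss, gradients] = criticSSLoss(criticNet, obs, targetValues, nextObs, lambda)
    [V, sHat] = forward(criticNet, obs, Outputs=["value","nextStatePred"]);
    valueLoss = mse(V, targetValues);        % original critic (value) loss
    ssLoss    = mse(sHat, nextObs);          % self-supervised auxiliary loss
    loss      = valueLoss + lambda * ssLoss; % combined objective
    gradients = dlgradient(loss, criticNet.Learnables);
end
```

In the custom training loop you would call something like [loss, grad] = dlfeval(@criticSSLoss, criticNet, obsBatch, targetBatch, nextObsBatch, 0.1) and then update the parameters with adamupdate; dlgradient only works inside a function evaluated by dlfeval.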
Please refer to the documentation on designing and using custom loss functions and custom training loops in general. You can also refer to related MATLAB Answers on creating custom loss functions.
Hope this helps!

Release

R2024a
