Exploration in Deep Reinforcement Learning

Bhooshan V on 14 Apr 2022
Edited: Ayush Aniket on 6 May 2025
I am trying to reimplement the REINFORCE algorithm with a custom training loop for a specific problem. As far as I can tell, the given example does not use any explicit exploration technique. How do I implement exploration/exploitation when the policy is a neural network?

Answers (1)

Ayush Aniket on 6 May 2025
Edited: Ayush Aniket on 6 May 2025
Explicit exploration techniques are not directly implemented in the provided example. However, exploration is inherently handled by the policy architecture and its training process, as explained below:
1. Stochastic Policy (Softmax Output) - The actor network ends with a softmaxLayer, producing a categorical probability distribution over actions for each state; the actor is then wrapped into a policy with the rlStochasticActorPolicy function. At each step, the action is sampled from this distribution. This means:
  • Actions with higher probabilities are more likely to be chosen (exploitation).
  • Less probable actions can still be chosen (exploration).
This sampling mechanism is the main source of exploration in REINFORCE and other policy gradient methods; the sketch below illustrates the idea.
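As a rough illustration, here is a minimal sketch of building and sampling such a stochastic policy (the observation/action specs, layer sizes, and variable names are placeholders, not the example's exact code):
% Minimal sketch: a discrete actor whose softmax output defines the sampling distribution
obsInfo  = rlNumericSpec([4 1]);          % assumed 4-element observation
actInfo  = rlFiniteSetSpec([1 2]);        % assumed 2 discrete actions
actorNet = [
    featureInputLayer(4)
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(2)
    softmaxLayer];                        % categorical distribution over the 2 actions
actor  = rlDiscreteCategoricalActor(actorNet, obsInfo, actInfo);
policy = rlStochasticActorPolicy(actor);  % samples actions from that distribution
action = getAction(policy, {rand(4,1)});  % stochastic draw, so exploration is built in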
2. Entropy Regularization - In the custom loss function (actorLossFunction), an entropy loss term is added:
% Policy entropy accumulated over the valid (masked) time steps
entropyLoss = -sum(actionProbabilities.*actionLogProbabilities.*mask, "all");
% Combine the policy-gradient loss with the weighted entropy term and normalize by the number of valid steps
loss = (pgLoss + 1e-4*entropyLoss)/(sum(mask));
This term encourages the policy to maintain uncertainty (higher entropy) in its action distribution, which naturally promotes exploration, especially early in training. As training progresses, the policy becomes more confident (lower entropy) as it learns which actions yield higher rewards (exploitation).
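If you want to tune this behavior, the same two lines can be written with the entropy coefficient pulled out as a named variable (a sketch following the snippet above; 1e-4 is the value used in the example):
entropyLossWeight = 1e-4;   % larger values push the policy toward more exploration
entropyLoss = -sum(actionProbabilities.*actionLogProbabilities.*mask, "all");
loss = (pgLoss + entropyLossWeight*entropyLoss)/(sum(mask));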
3. Switch to Exploitation After Training - During simulation, the policy property is set:
policy.UseMaxLikelihoodAction = true;
This means the policy always selects the most likely (greedy) action—pure exploitation for evaluation.
If you want to further control exploration/exploitation, here are some options:
  • Increase the entropy regularization coefficient (for example, from 1e-4 to a larger value) to promote more exploration.
  • Decrease entropy regularization for more exploitation.
  • Implement ε-greedy action selection - With probability ε, select a random action; with probability 1-ε, sample from the policy (a minimal sketch follows this list).
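A minimal ε-greedy sketch layered on top of the stochastic policy is shown below. Here policy, actInfo, and obs are assumed to already exist as in the example, and the value of epsilon is illustrative:
epsilon = 0.1;                                   % exploration probability (illustrative)
if rand < epsilon
    % Explore: pick a uniformly random action from the discrete action set
    actionSet = actInfo.Elements;
    action = actionSet(randi(numel(actionSet)));
else
    % Exploit: sample an action from the learned policy
    action = getAction(policy, {obs});
    action = action{1};
end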

Release

R2022a
