Multi-Step (D)DQN using Parallelization

David Braun on 5 Sep 2022
Answered: Ayush Modi on 20 Oct 2023
I have noticed that the current implementation of DQN does not allow combining multi-step returns (NumStepsToLookAhead > 1) with parallelization. However, multi-step returns are essential for my application, and I would still like to make use of all of my CPU cores.
I am therefore wondering whether it is possible to implement a custom DQN agent that allows this. My goal is an implementation in which multiple workers generate experience samples, learning is performed centrally, and the updated policy is sent back to the workers regularly.
Is this a reasonable idea? If so, does anybody have an idea how I can implement it without duplicating too much of the default DQN implementation?
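To make the idea concrete, here is a rough sketch of the kind of loop I have in mind. collectEpisode, appendExperiences, updateAgentFromBuffer, initialPolicyParams, and numRounds are placeholder names for things I would still have to write; they are not Reinforcement Learning Toolbox APIs. This is only the synchronous variant, where workers are re-dispatched with fresh parameters every round.
```matlab
% Rough sketch only: synchronous rounds of parallel experience collection
% (one rollout per worker) followed by a central multi-step DQN update.
% collectEpisode, appendExperiences and updateAgentFromBuffer are
% hypothetical helpers, not Reinforcement Learning Toolbox functions.
pool = parpool;                        % Parallel Computing Toolbox
policyParams = initialPolicyParams;    % e.g. the central agent's network weights
nSteps = 3;                            % multi-step return length (NumStepsToLookAhead)
replayBuffer = [];                     % placeholder representation of the buffer

for round = 1:numRounds
    % Dispatch one rollout per worker with the current policy parameters.
    for w = 1:pool.NumWorkers
        futures(w) = parfeval(pool, @collectEpisode, 1, policyParams, nSteps); %#ok<SAGROW>
    end

    % Gather the experience batches as the workers finish.
    for w = 1:pool.NumWorkers
        [~, experiences] = fetchNext(futures);
        replayBuffer = appendExperiences(replayBuffer, experiences);
    end

    % Centralized learning: the n-step targets are computed here, so the
    % workers only ever need the current policy, not the learning machinery.
    policyParams = updateAgentFromBuffer(replayBuffer, policyParams, nSteps);
end
```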
Thank you very much.

Answers (1)

Ayush Modi on 20 Oct 2023
Hi David,
As per my understanding, you would like to generate experience samples at the worker nodes by training the model on local data, and then send the model parameters to a central server to train the central model.
You can achieve this using the concept of federated learning.
Please refer to the MathWorks documentation on federated learning for more information.
You can create a custom DQN/DDQN agent as well.
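For illustration, the parameter-averaging step of such a federated scheme might look roughly like the sketch below. averageWorkerCritics is a placeholder name, workerAgents is assumed to be a cell array of identically structured DQN agents, and this is only a sketch of the aggregation idea, not a complete training procedure.
```matlab
% Rough sketch: federated-style averaging of critic parameters gathered from
% several worker copies of the same DQN agent, using the Reinforcement
% Learning Toolbox functions getCritic/setCritic and
% getLearnableParameters/setLearnableParameters.
function centralAgent = averageWorkerCritics(centralAgent, workerAgents)
    % Collect the learnable parameters (cell arrays) from every worker critic.
    allParams = cellfun(@(a) getLearnableParameters(getCritic(a)), ...
                        workerAgents, 'UniformOutput', false);

    % Average parameter-by-parameter across the workers.
    avgParams = allParams{1};
    for k = 2:numel(allParams)
        for p = 1:numel(avgParams)
            avgParams{p} = avgParams{p} + allParams{k}{p};
        end
    end
    avgParams = cellfun(@(x) x / numel(allParams), avgParams, ...
                        'UniformOutput', false);

    % Push the averaged parameters back into the central agent's critic.
    critic = getCritic(centralAgent);
    critic = setLearnableParameters(critic, avgParams);
    centralAgent = setCritic(centralAgent, critic);
end
```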
I hope this resolves the issue you were facing.
