Which neural network can be used in an RL agent?
My observations and actions are available in the form of matrices. Is it allowed to use a U-Net as the actor and critic in a DDPG network, or do I have to convert these observations and actions to vector format and use a simple feedforward neural network?
3 Comments
Umar
on 9 Jul 2024
Hi Sania,
In the context of DDPG networks, while it is technically feasible to employ a U-Net architecture as both the actor and critic, it is more common and practical to convert the matrix-formatted observations and actions into vectors and use a standard feedforward neural network. This conversion simplifies the network design and aligns with the typical structure of DDPG implementations. By transforming the data into vector format, you can effectively leverage the capabilities of a feedforward neural network for actor-critic reinforcement learning tasks. Please let me know if you have further questions.
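The flattening step Umar describes can be sketched in a few lines. This is a minimal, framework-free illustration (all sizes, weights, and names here are hypothetical, not taken from the thread): a matrix-valued observation is reshaped into a vector and passed through a small feedforward actor, as a DDPG actor network would do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the question):
obs_rows, obs_cols = 4, 5      # matrix-valued observation
act_dim = 3                    # length of the action vector
hidden = 16                    # hidden-layer width

# Weights of a toy feedforward actor: flattened observation -> hidden -> action
W1 = rng.standard_normal((obs_rows * obs_cols, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, act_dim)) * 0.1
b2 = np.zeros(act_dim)

def actor(obs_matrix):
    """Flatten the matrix observation, then run a simple MLP."""
    x = obs_matrix.reshape(-1)       # (4, 5) matrix -> (20,) vector
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)      # tanh bounds the action, as in DDPG

obs = rng.standard_normal((obs_rows, obs_cols))
action = actor(obs)
print(action.shape)  # (3,)
```

The same `reshape(-1)` idea applies to the critic's inputs: concatenate the flattened observation with the action vector before the first fully connected layer.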
Sania Gul
on 9 Jul 2024
Umar
on 9 Jul 2024
No problem, Sania.
Answers (0)