Which neural network can be used in an RL agent?

My observations and actions are available in the form of a matrix. Is it allowed to use a U-Net as the actor and critic in a DDPG network, or do I have to convert these observations and actions to vector format and use a simple feedforward neural network?

3 Comments

Hi Sania,
In the context of DPG networks, while it is technically feasible to employ a U-Net architecture as both the actor and critic, it is more common and practical to convert the matrix-formatted observations and actions into vectors and utilize a standard feedforward neural network. This conversion simplifies the network design and aligns with the typical structure of DDPG implementations and b transforming the data into vector format, you can effectively leverage the capabilities of a feedforward neural network for actor-critic reinforcement learning tasks. Please let me know if you have further questions.
Thank you so much, Umer, for such a quick response. I will let you know if I have any :-)
No problem, Sania.


Answers (0)


Asked: on 9 Jul 2024
Commented: on 9 Jul 2024
