Community Profile


Emmanouil Tzorakoleftherakis


Last seen: Today

MathWorks

250 total contributions since 2018

Emmanouil Tzorakoleftherakis's Badges

  • Personal Best Downloads Level 1
  • Pro
  • Knowledgeable Level 4
  • GitHub Submissions Level 1
  • First Submission
  • 6 Month Streak
  • Revival Level 2
  • First Answer



Answered
Where to update actions in environment?
Reinforcement Learning Toolbox agents expect a static action space, i.e., a fixed number of options at each time step. To create a dy...

4 hours ago | 0

Answered
How to check the weight and bias taken by getLearnableParameters?
Can you provide some more details? What does 'wrong answer' mean? How do you know the weights you are seeing are not correct? Ar...

4 hours ago | 0

Answered
Gradient in RL DDPG Agent
If you put a break point right before 'gradient' is called in this example, you can step in and see the function implementation....

4 hours ago | 0

Answered
Soft Actor Critic deploy mean path only
Hello, Please take a look at this option here which was added in R2021a to allow exactly the behavior you mentioned. Hope this...

5 hours ago | 0

Answered
How to pretrain a stochastic actor network for PPO training?
Hello, Since you already have a dataset, you will have to use Deep Learning Toolbox to get your initial policy. Take a look at ...

5 hours ago | 0

Answered
Failure in training of Reinforcement Learning Onramp
Hello, We are aware of and working to fix this issue. In the meantime, can you take a look at the following answer? https://www....

7 days ago | 0

Answered
DQN Agent with 512 discrete actions not learning
I would initially revisit the critic architecture for 2 reasons: 1) Network seems a little simple for a 3->512 mapping 2) This...

8 days ago | 0

Answered
How does the Q-Learning update the qTable by using the reinforcement learning toolbox?
Can you try critic.Options.L2RegularizationFactor=0; This parameter is nonzero by default and likely the reason for the discre...

10 days ago | 0
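
A minimal sketch of the suggestion above (assuming `qAgent` is an existing Q-learning agent; the variable names are illustrative):

```matlab
% Zero out L2 regularization on the critic so the tabular
% Q-learning update matches the textbook update rule.
% L2RegularizationFactor is nonzero by default, which perturbs
% the learned Q-table values.
critic = getCritic(qAgent);                  % extract the critic
critic.Options.L2RegularizationFactor = 0;   % disable regularization
qAgent = setCritic(qAgent, critic);          % push the change back
```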

Answered
File size of saved reinforcement learning agents
Hello, Is this parameter set to true? If yes, then it makes sense that mat files are growing in size as the buffer is being pop...

14 days ago | 0

| accepted

Answered
Saving Trained RL Agent after Training
Setting the IsDone flag to 1 does not erase the trained agent - it actually makes sense that the sim was not showing anything be...

14 days ago | 0

| accepted

Answered
How to Train Multiple Reinforcement Learning Agents In Basic Grid World? (Multiple Agents)
Training multiple agents simultaneously is currently only supported in Simulink. The predefined Grid World environments in Reinf...

14 days ago | 0

| accepted

Answered
How to create a neural network for Multiple Agent with discrete and continuous action?
If you want to specify the neural network structures yourself, there is nothing specific you need to do - simply create two acto...

17 days ago | 0

| accepted

Answered
Is it possible to apply Reinforcement Learning to classify data?
If you already have a labeled dataset, supervised learning is the way to go. Reinforcement learning is more for cases where data...

17 days ago | 0

| accepted

Answered
Combining two deep neural networks to train simultaneously
Hello, You can do this in Simulink - see the following examples for reference. https://www.mathworks.com/help/reinforcement-l...

20 days ago | 1

| accepted

Answered
DQN learns at first but then worsens.
To confirm that this is an exploration issue, can you try setting the EpsilonMin param to a high value? e.g. 0.99. If after doin...

21 days ago | 0

Answered
How to resume train a trained agent?about Q learning agents.
Hello, To see how to view the table values, take a look at the answer here. Also, you don't have to do anything specific to con...

21 days ago | 0

| accepted

Answered
Reinforcement learning action getting saturated at one range of values
Your scaling layer is not set up correctly. You want to scale to (upper limit-lower limit) and then shift accordingly. scaling...

28 days ago | 0

| accepted
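
A sketch of the scaling-layer setup described above, assuming a tanh output in [-1, 1] and hypothetical action limits `lowerLim`/`upperLim`:

```matlab
lowerLim = -5;  upperLim = 5;   % hypothetical action limits
% Map [-1, 1] -> [lowerLim, upperLim]: scale by half the range,
% then shift by the midpoint of the limits.
sLayer = scalingLayer('Name','actScale', ...
    'Scale', (upperLim - lowerLim)/2, ...
    'Bias',  (upperLim + lowerLim)/2);
```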

Answered
How can I provide constraints to the actions provided by the Reinforcement Learning Agent?
Hard constraints are not typically supported during training in RL. You can specify limits/constraints as you mention above, but...

1 month ago | 0

| accepted

Answered
Exporting data only works as pdf. Axis labels are getting small and unreadable
You cannot save as .fig from the episode manager plot. If you have the training data though (it's good practice to save this dat...

1 month ago | 1

| accepted

Answered
Reinforcement Learning multiple agent validation: Can I have a Simulink model host TWO agents and test them
That should be possible. Did you follow the multi-agent examples? Since the agents are trained already you may want to check the...

1 month ago | 0

| accepted

Answered
Do the actorNet and criticNet share the parameter if the layers have the same name?
No, each network has its own parameters. Shared layers are not supported out of the box, you would have to implement custom trai...

1 month ago | 0

| accepted

Answered
Any RL Toolbox A3C example?
Hello, To get an idea of what an actor/critic architecture may look like, you can use the 'default agent' feature that creates ...

1 month ago | 0

| accepted

Answered
After training my DDPG RL agent and saving it, unexpected simulation output
See answer here

1 month ago | 0

| accepted

Answered
Saved agent always gives constant output no matter how or how much I train it
The problem formulation is not correct. I suspect that even during training, you are seeing a lot of bang bang actions. The bigg...

1 month ago | 1

| accepted

Answered
How can I create a Reinforcement Learning Agent representation based on Recurrent neural network (RNN, LSTM, among others)
Hello, Which release are you using? R2020a and R2020b support LSTM policies for PPO and DQN agents. Starting in R2021a you can ...

1 month ago | 1

| accepted

Answered
Procedure to link state path and action path in a DQL critic reinforcement learning agent?
Hello, Some comments on the points you raise above: 1. There are two ways to create the critic network for DQN as you probabl...

2 months ago | 0

| accepted

Answered
Reinforcement learning DDPG Agent semi active control issue
Hello, This is very open-ended so there could be a lot of ways to improve your setup. My guess is that the issue is very releva...

2 months ago | 1

| accepted

Answered
Save listener Callback in eps format or any high resolution format
Hello, If you are using R2020b, you can use help rlPlotTrainingResults to recreate the Episode manager plot and save it as y...

2 months ago | 0

| accepted

Answered
Input normalization using a reinforcement learning DQN agent
Hello, Normalization through the input layers is not supported for RL training. As a workaround, you can scale the observations...

2 months ago | 1

| accepted

Answered
Export Q-Table from rlAgent
Here is an example:
load('basicGWQAgent.mat','qAgent')
critic = getCritic(qAgent);
tableObj = getModel(critic);
table = table...

2 months ago | 1

| accepted
