Community Profile

Emmanouil Tzorakoleftherakis


Last seen: Today

MathWorks

172 total contributions since 2018

Emmanouil Tzorakoleftherakis's Badges

  • Knowledgeable Level 3
  • 6 Month Streak
  • Revival Level 2
  • First Answer


Answered
Customized Action Selection in RL DQN
Hello, I believe this is not possible yet. A potential workaround (although not state dependent) would be to emulate a pdf by p...

4 days ago | 0

Answered
How to save and use the pre-trained DQN agent in the reinforcement learning tool box
Hello, Take a look at this example, and specifically the code snippet below: if doTraining % Train the agent. tr...

4 days ago | 0
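The save/load pattern that the referenced example follows can be sketched in full as below (file and variable names are illustrative; `agent`, `env`, and `trainOpts` are assumed to have been created earlier):

```matlab
if doTraining
    % Train the agent and save it to disk for later reuse
    trainingStats = train(agent,env,trainOpts);
    save("trainedDQNAgent.mat","agent")
else
    % Load the previously trained agent instead of retraining
    load("trainedDQNAgent.mat","agent")
end

% Simulate the (pre-)trained agent in the environment
simOptions = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOptions);
```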

Answered
How to generate 32 bit DLL file from codegen in reinforcement learning toolbox?
Hello, I am not sure which release you are using, but if you have access to the R2021a prerelease you may want to try the new f...

4 days ago | 0

Answered
Save data in Deep RL in the Simulink environment for all episodes
Hello, You can always select the signals you want to log and view them later in Simulation Data Inspector. Same goes for the re...

4 days ago | 1

| accepted

Answered
How to compute the gradient of deep Actor network in DRL (with regard to all of its parameters)?
In the link you provide above, the gradients are calculated with the "gradient" function that uses automatic differentiation. So...

4 days ago | 0

Answered
MPC: step response starts with unwanted negative swing when using previewing
It appears that the optimization thinks that moving in the opposite direction first is "optimal". You can change that by adding ...

4 days ago | 0

Answered
RL: Continuous action space, but within a desired range
Hello, There are two ways to enforce this: 1) Using the upper and lower limits in rlNumericSpec when you are creating the acti...

12 days ago | 0
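Option 1 from the answer above can be sketched as follows (the dimensions and limit values are illustrative):

```matlab
% Define a 1-D continuous action space bounded to [-2, 2]; actions
% produced for the environment are constrained to this range
actInfo = rlNumericSpec([1 1], ...
    'LowerLimit',-2, ...
    'UpperLimit',2);
```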

Answered
Reinforcement Learning PPO Problem
Hello, Please take a look at how to create the actor and critic networks for continuous PPO here. It seems there is a dimension...

12 days ago | 0

Answered
Warning: An error occurred while drawing the scene: Error in json_scenetree: Could not find node in replaceChild
My suspicion is that the error does not have much to do with the code you are showing but with how you create your environment. ...

12 days ago | 0

Answered
How to change Agent policy option to continuing update the policy in Reinforcement Learning toolbox?
Hello, Thank you for going over the RL ebooks! This point you mention is a general reference on what you can do after training ...

16 days ago | 0

| accepted

Answered
How do I save Episode Manager training data for *plotting* later
Hi Rajesh, As mentioned in the comment above, if you are using R2020b, you can use >> help rlPlotTrainingResults to recreate...

16 days ago | 1

Answered
Realize MADDPG in Matlab
To create agents that share critics I believe you would have to implement that using a custom agent/training loop (see here and ...

16 days ago | 0

Answered
Train PPO Agent to Swing Up and Balance Pendulum
I think the fully connected layer in the actor may not have enough nodes (actorLayerSizes is not used anywhere). Regardless, yo...

16 days ago | 0

Answered
A mix of rlNumericSpec and rlFiniteSetSpec objects - observation for a RL environment
Hi Krupa, I don't think there is an example that shows how to do that in the documentation right now - I will let the doc team ...

16 days ago | 0

| accepted

Answered
problem with simulation trained DRL agent
Hello, Please see this post that goes over a few potential reasons for discrepancies between training results and simulation re...

25 days ago | 1

Answered
Invalid observation type or size. error in simulink varies on quantization interval constraining observation signals in Simulink (Reinforcement Learning Toolbox)
Hello, This is likely due to numerical effects of rounding that happens when quantizing (see doc here). When quantization inter...

29 days ago | 0

| accepted

Answered
Global parameters / data store memory with RL agent block Simulink
Not sure what error you are seeing, but if you only need to use the value of the previous time step, I think the Memory block is...

1 month ago | 0

Answered
Algebraic loop in vehicle dynamics blockset + RL Agent
Please take a look at this question which is similar. You should be able to remove the algebraic loop by following the methods/l...

1 month ago | 0

| accepted

Answered
Is it possible to use the reinforcement learning toolbox in a Simulink/Adams co-simulation?
Hello, You should be able to use Reinforcement Learning Toolbox for cosimulation. It looks like closing the loops with observat...

1 month ago | 1

| accepted

Answered
RL Agent training SimulationInput error
Hello, It looks to me like you are not implementing the localReset function that you assign to the environment. The ACC example...

1 month ago | 0

Answered
The RL Agent block only supports Normal and Accelerator simulation modes.
Hi Mehmet, As I mentioned in the other thread, assuming you want to train your agent in External Mode, I believe this is a cur...

1 month ago | 1

| accepted

Answered
Action Clipping and Scaling in TD3 in Reinforcement Learning
Hello, In general, for DDPG and TD3, it is good practice to include the scalingLayer as the last actor layer to scale/shift the...

1 month ago | 1

| accepted
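The recommended actor output stages for DDPG/TD3 can be sketched as below, assuming a scalar action in [-2, 2] (layer names, sizes, and the observation dimension are illustrative):

```matlab
% tanhLayer bounds the raw network output to [-1,1]; scalingLayer then
% maps it to the desired action range: action = Scale*tanh(x) + Bias
actorNet = [
    featureInputLayer(3,'Name','obs')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(1,'Name','fc2')
    tanhLayer('Name','tanh')
    scalingLayer('Name','actScale','Scale',2,'Bias',0)];
```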

Answered
Collaborative DDPG/Actor-Critic Example
Hello, As you noticed, as of R2020b we support (decentralized) multi-agent RL but only in Simulink. We are looking to expand th...

1 month ago | 1

| accepted

Answered
Variable Sample Time in Reinforcement Learning
Hello, The current format of training in Reinforcement Learning Toolbox assumes you are taking actions at fixed time intervals ...

1 month ago | 0

| accepted

Answered
reinforcement learning from scratch
Hello, Depending on whether your environment will be in MATLAB or Simulink, the following links would be a good starting point:...

1 month ago | 0

Answered
Problem of using DDPG agent in external mode
The root of the error is likely on the Reinforcement Learning Toolbox side, not Polyspace. As the last error line mentions, "The R...

1 month ago | 0

Answered
Training agent using reinforcement learning
Hello, When you train using historical data, it is often a good idea to break your dataset down into smaller pieces. Then, instea...

1 month ago | 0

Answered
Reinforcement Learning Noise Model Mean Attraction Constant
Assuming you are using DDPG, there is some information on the noise model here. I wouldn't worry too much about the mean attract...

2 months ago | 1

| accepted
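For reference, the Ornstein-Uhlenbeck noise parameters mentioned above live on the DDPG agent options object; a minimal sketch (the values are illustrative, and the defaults are usually reasonable):

```matlab
opt = rlDDPGAgentOptions('SampleTime',0.1);
% MeanAttractionConstant controls how strongly the noise is pulled
% back toward its mean; Variance sets the exploration magnitude
opt.NoiseOptions.MeanAttractionConstant = 0.15;
opt.NoiseOptions.Variance = 0.3;
```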

Answered
Hyperparameter optimization and saving the best agents for Reinforcement Learning
Hello, You can use something like this. We do not have any examples with Reinforcement Learning Toolbox that show how to use th...

2 months ago | 0

| accepted

Answered
Reinforcement Learning experience buffer length and parallelisation toolbox
Hello, There is one big experience buffer on the host, the size of which you determine as usual in your agent options. Each wor...

2 months ago | 0

| accepted
