Answered
Reinforcement Learning PPO Problem
Hello, Please take a look at how to create the actor and critic networks for continuous PPO here. It seems there is a dimension...

5 years ago | 0

Answered
Warning: An error occurred while drawing the scene: Error in json_scenetree: Could not find node in replaceChild
My suspicion is that the error does not have much to do with the code you are showing but with how you create your environment. ...

5 years ago | 0

Answered
How to change the Agent policy option to continuously update the policy in Reinforcement Learning Toolbox?
Hello, Thank you for going over the RL ebooks! This point you mention is a general reference on what you can do after training ...

5 years ago | 0

| accepted

Answered
How do I save Episode Manager training data for *plotting* later
Hi Rajesh, As mentioned in the comment above, if you are using R2020b, you can use >> help rlPlotTrainingResults to recreate...

5 years ago | 2

Answered
Realize MADDPG in Matlab
To create agents that share critics I believe you would have to implement that using a custom agent/training loop (see here and ...

5 years ago | 1

Answered
Train PPO Agent to Swing Up and Balance Pendulum
I think the fully connected layer in the actor may not have enough nodes (actorLayerSizes is not used anywhere). Regardless, yo...

5 years ago | 0

Answered
A mix of rlNumericSpec and rlFiniteSetSpec objects - observation for a RL environment
Hi Krupa, I don't think there is an example that shows how to do that in the documentation right now - I will let the doc team ...

5 years ago | 2

| accepted
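
A minimal sketch of the idea from the answer above: Reinforcement Learning Toolbox accepts an array of specification objects for a multi-channel observation space. The channel sizes and discrete values below are hypothetical, purely for illustration.

```matlab
% Multi-channel observation: one continuous channel, one discrete channel.
% Sizes and set values are hypothetical.
obsInfo = [rlNumericSpec([3 1]), rlFiniteSetSpec([1 2 3])];

% A custom environment then returns one element per channel,
% e.g. as a cell array: {continuousVector, discreteValue}.
```

Each network input layer of the agent representation would then be matched to one observation channel.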

Answered
Problem with simulating a trained DRL agent
Hello, Please see this post that goes over a few potential reasons for discrepancies between training results and simulation re...

5 years ago | 1

Answered
"Invalid observation type or size" error in Simulink varies with the quantization interval constraining observation signals (Reinforcement Learning Toolbox)
Hello, This is likely due to numerical effects of rounding that happens when quantizing (see doc here). When quantization inter...

5 years ago | 0

| accepted

Answered
Global parameters / data store memory with RL agent block Simulink
Not sure what error you are seeing, but if you only need to use the value of the previous time step, I think the Memory block is...

5 years ago | 0

Answered
Algebraic loop in vehicle dynamics blockset + RL Agent
Please take a look at this question which is similar. You should be able to remove the algebraic loop by following the methods/l...

5 years ago | 0

| accepted

Answered
Is it possible to use the reinforcement learning toolbox in a Simulink/Adams co-simulation?
Hello, You should be able to use Reinforcement Learning Toolbox for cosimulation. It looks like closing the loops with observat...

5 years ago | 2

| accepted

Answered
RL Agent training SimulationInput error
Hello, It looks to me like you are not implementing the localReset function that you assign to the environment. The ACC example...

5 years ago | 0
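
As a rough sketch of the pattern the answer refers to, the reset function must actually be assigned to the environment's `ResetFcn` property; otherwise it is never called. The model name, block path, and variable name below are hypothetical.

```matlab
% 'mdl', 'agentBlk', 'obsInfo', 'actInfo' are assumed to exist already.
env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
env.ResetFcn = @(in) localResetFcn(in);   % called at the start of each episode

function in = localResetFcn(in)
    % Randomize an initial condition on the Simulink.SimulationInput object.
    % 'x0' is a hypothetical model workspace variable.
    in = setVariable(in, 'x0', randn);
end
```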

Answered
The RL Agent block only supports Normal and Accelerator simulation modes.
Hi Mehmet, As I mentioned in the other thread, assuming you want to train your agent in External Mode, I believe this is a cur...

5 years ago | 1

| accepted

Answered
Action Clipping and Scaling in TD3 in Reinforcement Learning
Hello, In general, for DDPG and TD3, it is good practice to include the scalingLayer as the last actor layer to scale/shift the...

5 years ago | 1

| accepted
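
A minimal sketch of the scaling-layer practice mentioned above, for a DDPG/TD3 actor: a `tanhLayer` bounds the output to [-1, 1], and a `scalingLayer` maps it onto the action range. The observation size and action limits here are hypothetical.

```matlab
% Hypothetical action limits; scalingLayer computes Scale.*x + Bias,
% mapping the tanh output [-1,1] onto [actMin, actMax].
actMin = -2;  actMax = 2;
scale = (actMax - actMin)/2;
bias  = (actMax + actMin)/2;

actorNet = [
    featureInputLayer(4, 'Normalization', 'none', 'Name', 'obs')  % 4 = hypothetical obs size
    fullyConnectedLayer(64, 'Name', 'fc1')
    reluLayer('Name', 'relu1')
    fullyConnectedLayer(1, 'Name', 'fcAct')
    tanhLayer('Name', 'tanh')
    scalingLayer('Name', 'actScale', 'Scale', scale, 'Bias', bias)];
```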

Answered
Collaborative DDPG/Actor-Critic Example
Hello, As you noticed, as of R2020b we support (decentralized) multi-agent RL but only in Simulink. We are looking to expand th...

5 years ago | 1

| accepted

Answered
Variable Sample Time in Reinforcement Learning
Hello, The current format of training in Reinforcement Learning Toolbox assumes you are taking actions at fixed time intervals ...

5 years ago | 0

| accepted

Answered
reinforcement learning from scratch
Hello, Depending on whether your environment will be in MATLAB or Simulink, the following links would be a good starting point:...

5 years ago | 0

Answered
Problem of using DDPG agent in external mode
The root of the error is likely on the Reinforcement Learning Toolbox side, not Polyspace. As the last error line mentions, "The R...

5 years ago | 0

Answered
Training agent using reinforcement learning
Hello, When you train using historical data, it is often a good idea to break down your dataset into smaller pieces. Then, instea...

5 years ago | 0

Answered
Reinforcement Learning Noise Model Mean Attraction Constant
Assuming you are using DDPG, there is some information on the noise model here. I wouldn't worry too much about the mean attract...

5 years ago | 1

| accepted
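
For reference, the Ornstein-Uhlenbeck noise parameters discussed above live under the DDPG agent options; the concrete values below are hypothetical, not recommendations.

```matlab
% All numeric values are hypothetical placeholders.
agentOpts = rlDDPGAgentOptions('SampleTime', 0.1);
agentOpts.NoiseOptions.MeanAttractionConstant = 0.15;  % pull of noise back toward its mean
agentOpts.NoiseOptions.Variance = 0.3;                 % main driver of exploration
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;       % anneal exploration over training
```

As the answer suggests, `Variance` and its decay rate typically matter far more in practice than the mean attraction constant.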

Answered
Hyperparameter optimization and saving the best agents for Reinforcement Learning
Hello, You can use something like this. We do not have any examples with Reinforcement Learning Toolbox that show how to use th...

5 years ago | 0

| accepted
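
A sketch of the agent-saving mechanism mentioned above: `rlTrainingOptions` can save candidate agents to disk whenever a criterion is met, which pairs naturally with a hyperparameter sweep. The episode count, reward threshold, and folder name are hypothetical.

```matlab
% Save a copy of the agent whenever an episode reward exceeds the
% (hypothetical) threshold of 100.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 1000, ...
    'SaveAgentCriteria', 'EpisodeReward', ...
    'SaveAgentValue', 100, ...
    'SaveAgentDirectory', 'savedAgents');

trainingStats = train(agent, env, trainOpts);
```

An outer loop over hyperparameter sets could then point `SaveAgentDirectory` at a different folder per configuration and compare the saved agents afterwards.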

Answered
Reinforcement Learning experience buffer length and parallelisation toolbox
Hello, There is one big experience buffer on the host, the size of which you determine as usual in your agent options. Each wor...

5 years ago | 0

| accepted

Answered
reinforcement learning agent simulation is not same with training agent
Hello, Please see this post that explains why simulation results may differ during training and after training. One thing to c...

5 years ago | 0

Answered
Epsilon-greedy Algorithm in RL DQN
Hello, First off, RL typically solves a complex nonlinear optimization problem. So at the end of the day, you will most certain...

5 years ago | 2

| accepted
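
A minimal sketch of where the epsilon-greedy schedule is configured for DQN; the specific values are hypothetical.

```matlab
% Epsilon-greedy exploration schedule (values hypothetical).
agentOpts = rlDQNAgentOptions;
agentOpts.EpsilonGreedyExploration.Epsilon      = 1.0;    % initial exploration rate
agentOpts.EpsilonGreedyExploration.EpsilonMin   = 0.01;   % floor value
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 1e-3;   % multiplicative decay per step
```

With this schedule, epsilon shrinks each training step until it reaches `EpsilonMin`, shifting the agent from exploration toward exploitation.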

Answered
reinforcement learning, 3D simulink model
There is nothing specific you need to do for a 3D Simulink model. You can follow any other Simulink example from Reinforcement L...

5 years ago | 0

Answered
How to use GA in Reinforcement Learning instead of Gradient descent?
Hello, Evolutionary RL is not provided out of the box as of now. To use it you would have to implement a custom training loop (...

5 years ago | 0

| accepted

Answered
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 689) (By RL toolbox)
Hello, Based on the attached files, it seems like you are creating a PPO agent but you are creating a Q network for a critic. I...

5 years ago | 0

Answered
Custom RL environment creation
Hello, Based on the updated files you sent on this post, you are setting this.IsDone; however, this is a class variable which is...

5 years ago | 0

| accepted

Answered
Confusion in Critic network architecture design in DDPG
Hello, Does this paper use DDPG as well? Any images that show the network architecture? If it's another algorithm, the critic m...

5 years ago | 0

| accepted
