Answered
How do I find the objective/cost function for the example Valet parking using multistage NLMPC? (https://www.mathworks.com/help/mpc/ug/parking-valet-using-nonlinear-model-pred
Hi, The example you mentioned uses MPC in two places: 1) On the outer loop for planning, through the Vehicle Path Planner blo...

1 year ago | 0

Answered
Replace RL type (PPO with DDPG) in a MATLAB example
PPO is a stochastic agent whereas DDPG is deterministic. This means that you cannot just use actors and critics designed for PPO...
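
To make the mismatch concrete, here is a rough sketch (hypothetical layer sizes and observation/action specs) of the two actor types: DDPG needs a single deterministic output path, while PPO's Gaussian actor needs separate mean and standard-deviation paths.

% Deterministic actor (DDPG): one output path producing the action.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1]);
detNet = [featureInputLayer(4)
          fullyConnectedLayer(16)
          reluLayer
          fullyConnectedLayer(1)
          tanhLayer];
ddpgActor = rlContinuousDeterministicActor(detNet, obsInfo, actInfo);

% Stochastic (Gaussian) actor (PPO): mean and standard-deviation paths.
g = layerGraph([featureInputLayer(4, Name="obs")
                fullyConnectedLayer(16, Name="fc")
                reluLayer(Name="relu")]);
g = addLayers(g, fullyConnectedLayer(1, Name="mean"));
g = addLayers(g, [fullyConnectedLayer(1, Name="fcstd")
                  softplusLayer(Name="std")]);   % keep std positive
g = connectLayers(g, "relu", "mean");
g = connectLayers(g, "relu", "fcstd");
ppoActor = rlContinuousGaussianActor(dlnetwork(g), obsInfo, actInfo, ...
    ActionMeanOutputNames="mean", StandardDeviationOutputNames="std");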

1 year ago | 1

| accepted

Answered
NMPC Controller not buildable for Raspberry Pi
Hard to tell without more details, but I have a suspicion that you are defining the state and cost functions as anonym...
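
As a hedged sketch (hypothetical file names), the usual fix is to put the functions in files on the path and reference them by name, which the code generator can resolve:

nlobj.Model.StateFcn = "myStateFcn";             % defined in myStateFcn.m
nlobj.Optimization.CustomCostFcn = "myCostFcn";  % defined in myCostFcn.m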

1 year ago | 0

Answered
Regarding Default Terms in DNN
Which algorithm are you using? You can log loss data by following the guidelines here.

1 year ago | 1

Answered
How to start, pause, log information, and continue a simscape simulation?
If you go for #2, why don't you set it so that you have episodes that are 10 seconds long? When each episode ends, change the i...
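
As a rough sketch of that idea (hypothetical model/block names, an assumed workspace variable x0 for the initial condition, and existing obsInfo/actInfo specs), the reset function can hand each 10-second episode a fresh initial condition:

env = rlSimulinkEnv("myModel", "myModel/RL Agent", obsInfo, actInfo);
env.ResetFcn = @(in) setVariable(in, "x0", 2*rand - 1);  % new IC per episode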

1 year ago | 0

Answered
How to get the cost function result from a model predictive controller?
Please take a look at the doc page of mpcmove. The Info output contains a field called Cost. You can use it to visualize how th...
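
For illustration, a minimal sketch (toy plant, default controller settings) that logs the optimal cost at every step:

plant  = ss(tf(1, [1 1 0]));      % example SISO plant
Ts     = 0.1;
mpcobj = mpc(plant, Ts);          % default horizons and weights
xc     = mpcstate(mpcobj);        % controller state
sysd   = c2d(plant, Ts);          % discrete plant for simulation
x      = zeros(size(sysd.B, 1), 1);
N = 50; r = 1;
cost = zeros(N, 1);
for k = 1:N
    ym = sysd.C * x;                          % plant measurement
    [mv, info] = mpcmove(mpcobj, xc, ym, r);  % solve the QP
    cost(k) = info.Cost;                      % optimal cost this step
    x = sysd.A * x + sysd.B * mv;             % advance the plant
end
plot(cost), xlabel('Step'), ylabel('Optimal cost')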

1 year ago | 0

Answered
The solution obtained with the nlmpcmove function of the mpc toolbox is not "reproducible"?
Hi, For problem 1: I am not sure what's inside that state function but presumably there is some integrator that gives you k+1....

1 year ago | 0

Answered
How to keep action values at a minimum before a disturbance and let the agent choose different action values only after the disturbance?
Please take a look here. As of R2022a you can place the RL policy block inside a triggered subsystem and only enable the subsyst...

1 year ago | 0

Answered
How to set multiple stopping or saving criteria for RL agent?
This is currently not possible, but keep an eye out for future releases - the development team has been working on this functional...

1 year ago | 0

| accepted

Answered
How to run the Simulink model when implementing custom RL training?
The way to do it would be to use runEpisode.
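
As a hedged sketch (assuming an existing environment env and agent agent), a custom loop can drive the model one episode at a time:

setup(env);                            % prepare the env for re-simulation
for ep = 1:100
    out = runEpisode(env, agent, MaxSteps=500, CleanupPostSim=false);
    % out.AgentData holds the experiences logged during this episode
end
cleanup(env);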

1 year ago | 0

| accepted

Answered
How to implement the custom training with DQN agent in Simulink environment?
I would recommend looking at the doc first to see how custom loops/agents are structured. The following links should be helpful:...

1 year ago | 0

| accepted

Answered
Time-varying policy function
Why don't you just train 3 separate policies and pick and choose as needed?

1 year ago | 0

Answered
Reinforcement Learning: sudden very high rewards during training of RL model.
You should first check the 'error' signal that you feed into the reward for those episodes. Could be that the error becomes too bi...

1 year ago | 0

| accepted

Answered
DDPG has two different policies
The comparison plot is not set up correctly. The noisy policy also has a noise state which needs to be propagated after each cal...

1 year ago | 0

Answered
Training is getting stuck halfway.
Hi, The error message seems to be longer than what you pasted. It appears there is an indexing error in the step method. Did no...

1 year ago | 0

Answered
How to pass external time-varying parameters to nonlinear MPC models?
Hello, There are two ways of doing this: 1) With Nonlinear MPC, you can set your time-varying parameters as measured disturban...
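
As a minimal sketch of the measured-disturbance route (hypothetical 2-state model), input 2 is declared an MD so the controller receives, and can preview, its values:

nx = 2; ny = 1;
nlobj = nlmpc(nx, ny, 'MV', 1, 'MD', 2);
nlobj.Ts = 0.1;
nlobj.Model.StateFcn  = @(x,u) [x(2); -u(2)*x(1) + u(1)];  % u(2) is the MD
nlobj.Model.OutputFcn = @(x,u) x(1);
x = [0; 0]; lastmv = 0; ref = 1;
md = 2.5;                                 % current parameter value
mv = nlmpcmove(nlobj, x, lastmv, ref, md);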

1 year ago | 1

| accepted

Answered
Why, when I set UseFastRestart = "on" and start training my reinforcement learning agent, does the MATLAB crash manager come out and MATLAB have to close?
Not easy to answer without the crash log. Can you please contact technical support?

1 year ago | 0

Answered
MPC robotic arm with stepper motor control
The prediction model you provided has direct feedthrough, which is not currently supported by Model Predictive Control Toolbox. W...

1 year ago | 0

Answered
How to include a model (created by me at Simulink) in Matlab script?
Hi, Currently you cannot use a Simulink model as a prediction model for MPC design. This is something we are working towards for ...

1 year ago | 0

Answered
Setting initial conditions in MPC
To get the behavior you mentioned, the initial states of your plant and controller must be the same. If the initial conditions f...
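
A minimal sketch, assuming you step the loop with mpcmove and know the plant's initial state x0:

xc = mpcstate(mpcobj);   % defaults to the controller's nominal state
xc.Plant = x0;           % align the controller with the plant's IC
% then call mpcmove(mpcobj, xc, ym, r) in the loop as usual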

1 year ago | 0

Answered
Model predictive controller (Time domain)?
Why don't you just use a larger sample time, as you say? You can set it to be as long as you need, in seconds.
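
For example (a sketch with an arbitrary value, assuming an existing plant model):

Ts = 60;                  % one control move per minute, in seconds
mpcobj = mpc(plant, Ts);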

1 year ago | 0

| accepted

Answered
Reinforcement learning/Experience buffer/Simulink
Why do you want to create your own buffer? If you are using the built-in DDPG agent, the buffer is created automatically for you...

1 year ago | 0

Answered
Non-linear Model Predictive Control Toolbox: manipulated variable remains constant
Well maybe that's the best the controller can do. I suggest removing the constraint on the manipulated variable temporarily and ...
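
As a quick sketch of that check (assuming an nlmpc object nlobj):

nlobj.MV(1).Min = -Inf;   % lift the bounds temporarily
nlobj.MV(1).Max =  Inf;   % does the MV still stay flat?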

1 year ago | 0

| accepted

Answered
Using NLMPC on vehicle dynamics
The error seems to be in your bus definition. You don't provide that, so take a closer look and see if you set things up properly. A...

1 year ago | 0

| accepted

Answered
How to improve a model predictive controller in order to get a lower cost function for the system?
You basically want to get a more aggressive response if I understand correctly, meaning that your outputs will converge faster t...
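
As a sketch with example values, the typical knobs are the output and move-suppression weights:

mpcobj.Weights.OutputVariables          = 10;    % track outputs harder
mpcobj.Weights.ManipulatedVariablesRate = 0.01;  % allow faster MV moves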

1 year ago | 0

| accepted

Answered
About RL Custom Agent/ LQRCustomAgent example
Actually, exp is being indexed in exactly the same way. It's just that in the first example we are doing it in one line and in the second ...

1 year ago | 1

| accepted

Answered
How to implement LSTM layers in MATLAB's DDPG agent
Hello, You can use LSTM layers directly in both actors and critics and the built-in DDPG agent will handle the rest. Take a loo...
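
For illustration, a sketch (hypothetical sizes) of a recurrent actor; the critic must be recurrent as well:

obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1]);
actorNet = [
    sequenceInputLayer(4)    % recurrent networks need a sequence input
    lstmLayer(32)
    fullyConnectedLayer(1)
    tanhLayer];
actor = rlContinuousDeterministicActor(actorNet, obsInfo, actInfo);
% build a matching recurrent critic, then agent = rlDDPGAgent(actor, critic)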

1 year ago | 0

| accepted

Answered
Error in creating a custom environment in deep reinforcement learning code
The links below provide more info on how to create custom environments in MATLAB. https://www.mathworks.com/help/reinforcement...

1 year ago | 1

| accepted

Answered
Resume training for PPO agent
PPO does not use an experience buffer so you should be fine loading the saved agent to resume training. If you are using advanta...
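
As a sketch (hypothetical file name, assuming an existing environment env):

load("savedAgent.mat", "agent");            % previously trained PPO agent
trainOpts = rlTrainingOptions(MaxEpisodes=1000);
trainingStats = train(agent, env, trainOpts);  % continues from the saved agent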

1 year ago | 1

| accepted
