Options for DDPG agent
Use an rlDDPGAgentOptions object to specify options for deep deterministic policy gradient (DDPG) agents. To create a DDPG agent, use rlDDPGAgent.
For more information, see Deep Deterministic Policy Gradient (DDPG) Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
opt = rlDDPGAgentOptions creates an options object for use as an argument when creating a DDPG agent using all default options. You can modify the object properties using dot notation.
NoiseOptions — Noise model options
Noise model options, specified as an OrnsteinUhlenbeckActionNoise object. For more information on the noise model, see Noise Model.
For an agent with multiple actions, if the actions have different ranges and units, it is likely that each action requires different noise model parameters. If the actions have similar ranges and units, you can set the noise parameters for all actions to the same value.
For example, for an agent with two actions, set the standard deviation of each action to a different value while using the same decay rate for both standard deviations.
opt = rlDDPGAgentOptions;
opt.NoiseOptions.StandardDeviation = [0.1 0.2];
opt.NoiseOptions.StandardDeviationDecayRate = 1e-4;
ActorOptimizerOptions — Actor optimizer options
Actor optimizer options, specified as an rlOptimizerOptions object. It allows you to specify training parameters of the actor approximator such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions and rlOptimizer.
CriticOptimizerOptions — Critic optimizer options
Critic optimizer options, specified as an rlOptimizerOptions object. It allows you to specify training parameters of the critic approximator such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions and rlOptimizer.
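For example, you can give the actor and critic separate optimizer settings. The following is a minimal sketch; the learning rates and gradient threshold are illustrative values, not recommendations.
opt = rlDDPGAgentOptions;
opt.ActorOptimizerOptions = rlOptimizerOptions( ...
    'LearnRate',1e-4,'GradientThreshold',1);    % actor learning rate and gradient clipping
opt.CriticOptimizerOptions = rlOptimizerOptions( ...
    'LearnRate',1e-3,'GradientThreshold',1);    % critic learning rate and gradient clipping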
TargetSmoothFactor — Smoothing factor for target actor and critic updates
1e-3 (default) | positive scalar less than or equal to 1
Smoothing factor for target actor and critic updates, specified as a positive scalar less than or equal to 1. For more information, see Target Update Methods.
TargetUpdateFrequency — Number of steps between target actor and critic updates
1 (default) | positive integer
Number of steps between target actor and critic updates, specified as a positive integer. For more information, see Target Update Methods.
ResetExperienceBufferBeforeTraining — Option for clearing the experience buffer
true (default) | false
Option for clearing the experience buffer before training, specified as a logical value.
SequenceLength — Maximum batch-training trajectory length when using RNN
1 (default) | positive integer
Maximum batch-training trajectory length when using a recurrent neural network, specified as a positive integer. This value must be greater than 1 when using a recurrent neural network and 1 otherwise.
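For example, assuming the actor and critic use recurrent neural networks, a sketch of configuring batch training on longer trajectories (values are illustrative) is:
opt = rlDDPGAgentOptions;
opt.SequenceLength = 20;   % trajectory length for batch training; must be greater than 1 with an RNN
opt.MiniBatchSize = 32;    % illustrative mini-batch size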
MiniBatchSize — Size of random experience mini-batch
64 (default) | positive integer
Size of random experience mini-batch, specified as a positive integer. During each training episode, the agent randomly samples experiences from the experience buffer when computing gradients for updating the critic properties. Large mini-batches reduce the variance when computing gradients but increase the computational effort.
NumStepsToLookAhead — Number of future rewards used to estimate the value of the policy
1 (default) | positive integer
Number of future rewards used to estimate the value of the policy, specified as a positive integer. For more information, see [1], Chapter 7.
Note that if parallel training is enabled (that is, if an rlTrainingOptions option object in which the UseParallel property is set to true is passed to train), then NumStepsToLookAhead must be set to 1; otherwise an error is generated. This guarantees that experiences are stored contiguously.
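For example, assuming serial (non-parallel) training, a sketch of using multi-step returns is:
opt = rlDDPGAgentOptions;
opt.NumStepsToLookAhead = 5;   % use 5-step returns; keep this at 1 when UseParallel is true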
ExperienceBufferLength — Experience buffer size
10000 (default) | positive integer
Experience buffer size, specified as a positive integer. During training, the agent computes updates using a mini-batch of experiences randomly sampled from the buffer.
SampleTime — Sample time of agent
1 (default) | positive scalar | -1
Sample time of agent, specified as a positive scalar or as -1. Setting this parameter to -1 allows for event-based simulations.
Within a Simulink® environment, the RL Agent block in which the agent is specified executes every SampleTime seconds of simulation time. If SampleTime is -1, the block inherits the sample time from its parent subsystem.
Within a MATLAB® environment, the agent is executed every time the environment advances. In this case, SampleTime is the time interval between consecutive elements in the output experience returned by sim or train. If SampleTime is -1, the time interval between consecutive elements in the returned output experience reflects the timing of the event that triggers the agent execution.
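For example, a sketch of both configurations (the 0.1 value is illustrative) is:
opt = rlDDPGAgentOptions;
opt.SampleTime = 0.1;     % agent executes every 0.1 seconds of simulation time
% opt.SampleTime = -1;    % alternatively, execute on events or inherit the sample time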
DiscountFactor — Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
rlDDPGAgent | Deep deterministic policy gradient (DDPG) reinforcement learning agent
Create DDPG Agent Options Object
This example shows how to create a DDPG agent option object.
Create an rlDDPGAgentOptions object that specifies the mini-batch size.
opt = rlDDPGAgentOptions('MiniBatchSize',48)
opt = 
  rlDDPGAgentOptions with properties:

                           NoiseOptions: [1x1 rl.option.OrnsteinUhlenbeckActionNoise]
                  ActorOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                 CriticOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                     TargetSmoothFactor: 1.0000e-03
                  TargetUpdateFrequency: 1
    ResetExperienceBufferBeforeTraining: 1
                         SequenceLength: 1
                          MiniBatchSize: 48
                    NumStepsToLookAhead: 1
                 ExperienceBufferLength: 10000
                             SampleTime: 1
                         DiscountFactor: 0.9900
                             InfoToSave: [1x1 struct]
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;
DDPG agents use an Ornstein-Uhlenbeck action noise model for exploration.
The OrnsteinUhlenbeckActionNoise object has the following numeric value properties.
|Property|Description|
|InitialAction|Initial value of action|
|Mean|Noise mean value|
|MeanAttractionConstant|Constant specifying how quickly the noise model output is attracted to the mean|
|StandardDeviationDecayRate|Decay rate of the standard deviation|
|StandardDeviation|Initial value of noise standard deviation|
|StandardDeviationMin|Minimum standard deviation|
At each sample time step k, the noise value v(k) is updated using the following formula, where Ts is the agent sample time, and the initial value v(1) is defined by the InitialAction parameter.
v(k+1) = v(k) + MeanAttractionConstant.*(Mean - v(k)).*Ts + StandardDeviation(k).*randn(size(Mean)).*sqrt(Ts)
At each sample time step, the standard deviation decays as shown in the following code.
decayedStandardDeviation = StandardDeviation(k).*(1 - StandardDeviationDecayRate);
StandardDeviation(k+1) = max(decayedStandardDeviation,StandardDeviationMin);
You can calculate how many samples it will take for the standard deviation to be halved using this simple formula.
halflife = log(0.5)/log(1-StandardDeviationDecayRate);
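For example, the following sketch (decay rate and initial value chosen for illustration) computes the half-life and checks it by applying the decay update repeatedly:
StandardDeviationDecayRate = 1e-4;                        % illustrative decay rate
halflife = log(0.5)/log(1 - StandardDeviationDecayRate)   % about 6931 steps
sd = 0.3;                                                 % illustrative initial standard deviation
for k = 1:round(halflife)
    sd = sd*(1 - StandardDeviationDecayRate);             % same decay as above, ignoring StandardDeviationMin
end
sd                                                        % about 0.15, half the initial value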
For continuous action signals, it is important to set the noise standard deviation
appropriately to encourage exploration. It is common to set
StandardDeviation*sqrt(Ts) to a value between 1% and 10% of
your action range.
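For example, assuming an action range spanning 4 (actions from -2 to 2) and a sample time of 0.05, a sketch of choosing a standard deviation near 5% of that range is:
Ts = 0.05;                      % illustrative agent sample time
actionRange = 4;                % illustrative action range (from -2 to 2)
opt = rlDDPGAgentOptions('SampleTime',Ts);
opt.NoiseOptions.StandardDeviation = 0.05*actionRange/sqrt(Ts);   % about 0.89, so StandardDeviation*sqrt(Ts) is 5% of the range
opt.NoiseOptions.StandardDeviationDecayRate = 1e-5;               % slow decay preserves exploration longer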
If your agent converges on local optima too quickly, promote agent exploration by increasing the amount of noise; that is, by increasing the standard deviation. Also, to increase exploration, you can reduce the StandardDeviationDecayRate.
[1] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning. Cambridge, Mass.: The MIT Press, 2018.
Version History
Introduced in R2019a
R2021a: Property names defining noise probability distribution in the OrnsteinUhlenbeckActionNoise object have changed
The properties defining the probability distribution of the Ornstein-Uhlenbeck (OU) noise model have been renamed. DDPG agents use OU noise for exploration.
The Variance property has been renamed StandardDeviation.
The VarianceDecayRate property has been renamed StandardDeviationDecayRate.
The VarianceMin property has been renamed StandardDeviationMin.
The default values of these properties remain the same. When an OrnsteinUhlenbeckActionNoise noise object saved from a previous MATLAB release is loaded, the values of Variance, VarianceDecayRate, and VarianceMin are copied into StandardDeviation, StandardDeviationDecayRate, and StandardDeviationMin, respectively.
The Variance, VarianceDecayRate, and VarianceMin properties still work, but they are not recommended. To define the probability distribution of the OU noise model, use the new property names instead.
This table shows how to update your code to use the new property names.
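As an illustrative sketch of the rename (the numeric values are arbitrary), an assignment using the old names maps to the new names as follows:
% Not recommended (old property names):
% opt.NoiseOptions.Variance = 0.3;
% opt.NoiseOptions.VarianceDecayRate = 1e-5;
% opt.NoiseOptions.VarianceMin = 0.01;

% Recommended (new property names):
opt = rlDDPGAgentOptions;
opt.NoiseOptions.StandardDeviation = 0.3;             % replaces Variance
opt.NoiseOptions.StandardDeviationDecayRate = 1e-5;   % replaces VarianceDecayRate
opt.NoiseOptions.StandardDeviationMin = 0.01;         % replaces VarianceMin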
R2020a: Target update method settings for DDPG agents have changed
Target update method settings for DDPG agents have changed. The following changes require updates to your code:
The TargetUpdateMethod option has been removed. Now, DDPG agents determine the target update method based on the TargetUpdateFrequency and TargetSmoothFactor property values.
The default value of TargetUpdateFrequency has changed from 4 to 1.
To use one of the following target update methods, set the TargetUpdateFrequency and TargetSmoothFactor properties as indicated.
|Update Method|TargetUpdateFrequency|TargetSmoothFactor|
|Smoothing|1|Less than 1|
|Periodic|Greater than 1|1|
|Periodic smoothing (new method in R2020a)|Greater than 1|Less than 1|
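For example, a sketch of selecting periodic or periodic smoothing updates (the frequency and factor values are illustrative) is:
opt = rlDDPGAgentOptions;
% Periodic updates: copy the actor and critic parameters to the targets every 4 steps.
opt.TargetUpdateFrequency = 4;
opt.TargetSmoothFactor = 1;
% Periodic smoothing: smooth the targets every 4 steps instead.
% opt.TargetUpdateFrequency = 4;
% opt.TargetSmoothFactor = 1e-2;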
The default target update configuration, which is a smoothing update with a TargetSmoothFactor value of 0.001, remains the same.
This table shows some typical uses of rlDDPGAgentOptions and how to update your code to use the new option configuration.
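For example, a sketch of one such update (the periodic configuration shown is illustrative) is:
% Not recommended (TargetUpdateMethod was removed in R2020a):
% opt = rlDDPGAgentOptions('TargetUpdateMethod',"periodic");

% Recommended: select a periodic update through the remaining properties.
opt = rlDDPGAgentOptions('TargetUpdateFrequency',4,'TargetSmoothFactor',1);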