# rlDDPGAgentOptions

## Description

Use an `rlDDPGAgentOptions` object to specify options for deep deterministic policy gradient (DDPG) agents. To create a DDPG agent, use `rlDDPGAgent`.

For more information, see Deep Deterministic Policy Gradient (DDPG) Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

## Creation

### Description

`opt = rlDDPGAgentOptions` creates an options object for use as an argument when creating a DDPG agent using all default options. You can modify the object properties using dot notation.

`opt = rlDDPGAgentOptions(Name=Value)` creates the options set `opt` and sets its properties using one or more name-value arguments. For example, `rlDDPGAgentOptions(DiscountFactor=0.95)` creates an option set with a discount factor of `0.95`. You can specify multiple name-value arguments.

## Properties

`NoiseOptions` — Noise model options
`OrnsteinUhlenbeckActionNoise` object (default) | `GaussianActionNoise` object

Noise model options, specified as an `OrnsteinUhlenbeckActionNoise` or `GaussianActionNoise` object. For more information on the noise model, see Noise Model.

For an agent with multiple actions, if the actions have different ranges and units, it is likely that each action requires different noise model parameters. If the actions have similar ranges and units, you can set the noise parameters for all actions to the same value.

For example, for an agent with two actions, set the standard deviation of each action to a different value while using the same decay rate for both standard deviations.

```
opt = rlDDPGAgentOptions;
opt.NoiseOptions.StandardDeviation = [0.1 0.2];
opt.NoiseOptions.StandardDeviationDecayRate = 1e-4;
```

To use Gaussian action noise, first create a default `GaussianActionNoise` object. Then, specify any nondefault model properties using dot notation.

```
opt = rlDDPGAgentOptions;
opt.NoiseOptions = rl.option.GaussianActionNoise;
opt.NoiseOptions.StandardDeviation = 0.05;
```

`ActorOptimizerOptions` — Actor optimizer options
`rlOptimizerOptions` object

Actor optimizer options, specified as an `rlOptimizerOptions` object. It allows you to specify training parameters of the actor approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see `rlOptimizerOptions` and `rlOptimizer`.

**Example:** `ActorOptimizerOptions = rlOptimizerOptions(LearnRate=2e-3)`
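As a minimal sketch (the numeric values here are illustrative, not recommendations), you can also tune the actor optimizer after construction using dot notation:

```
% Illustrative values; adjust for your task.
opt = rlDDPGAgentOptions;
opt.ActorOptimizerOptions.LearnRate = 2e-3;      % actor learning rate
opt.ActorOptimizerOptions.GradientThreshold = 1; % clip gradients above this norm
```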

`CriticOptimizerOptions` — Critic optimizer options
`rlOptimizerOptions` object

Critic optimizer options, specified as an `rlOptimizerOptions` object. It allows you to specify training parameters of the critic approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see `rlOptimizerOptions` and `rlOptimizer`.

**Example:** `CriticOptimizerOptions = rlOptimizerOptions(LearnRate=5e-3)`

`BatchDataRegularizerOptions` — Batch data regularizer options
`[]` (default) | `rlBehaviorCloningRegularizerOptions` object

Batch data regularizer options, specified as an `rlBehaviorCloningRegularizerOptions` object. These options are typically used to train the agent offline, from existing data. If you leave this option empty, no regularizer is used.

For more information, see `rlBehaviorCloningRegularizerOptions`.

**Example:** `BatchDataRegularizerOptions = rlBehaviorCloningRegularizerOptions(BehaviorCloningRegularizerWeight=10)`

`TargetSmoothFactor` — Smoothing factor for target actor and critic updates
`1e-3` (default) | positive scalar less than or equal to 1

Smoothing factor for target actor and critic updates, specified as a positive scalar less than or equal to 1. For more information, see Target Update Methods.

**Example:** `TargetSmoothFactor=1e-2`

`TargetUpdateFrequency` — Number of steps between target actor and critic updates
`1` (default) | positive integer

Number of steps between target actor and critic updates, specified as a positive integer. For more information, see Target Update Methods.

**Example:** `TargetUpdateFrequency=5`
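`TargetUpdateFrequency` works together with `TargetSmoothFactor` to select the target update method (see Target Update Methods). As a sketch with illustrative values, the following configures a periodic update every four learning steps:

```
% Illustrative configuration for periodic target updates.
opt = rlDDPGAgentOptions;
opt.TargetUpdateFrequency = 4; % update target networks every 4 steps
opt.TargetSmoothFactor = 1;    % copy parameters fully (no smoothing)
```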

`ResetExperienceBufferBeforeTraining` — Option for clearing the experience buffer
`false` (default) | `true`

Option for clearing the experience buffer before training, specified as a logical value.

**Example:** `ResetExperienceBufferBeforeTraining=true`

`SequenceLength` — Maximum batch-training trajectory length when using RNN
`1` (default) | positive integer

Maximum batch-training trajectory length when using a recurrent neural network, specified as a positive integer. This value must be greater than `1` when using a recurrent neural network and `1` otherwise.

**Example:** `SequenceLength=4`

`MiniBatchSize` — Size of random experience mini-batch
`64` (default) | positive integer

Size of random experience mini-batch, specified as a positive integer. During each training episode, the agent randomly samples experiences from the experience buffer when computing gradients for updating the critic properties. Large mini-batches reduce the variance when computing gradients but increase the computational effort.

**Example:** `MiniBatchSize=128`

`NumStepsToLookAhead` — Number of future rewards used to estimate the value of the policy
`1` (default) | positive integer

Number of future rewards used to estimate the value of the policy, specified as a positive integer. Specifically, if `NumStepsToLookAhead` is equal to *N*, the target value of the policy at a given step is calculated by adding the rewards for the following *N* steps and the discounted estimated value of the state that caused the *N*-th reward. This target is also called the *N*-step return.

**Note**

When using a recurrent neural network for the critic, `NumStepsToLookAhead` must be `1`.

For more information, see [1], Chapter 7.

**Example:** `NumStepsToLookAhead=3`
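As an illustrative sketch with hypothetical values, the *N*-step target adds the sampled rewards to the discounted target critic estimate:

```
% Sketch of an N-step return for N = 3 (hypothetical values).
gamma = 0.99;             % discount factor
rewards = [1.0 0.5 0.2];  % next N sampled rewards
qTargetNext = 4.0;        % hypothetical target critic value after the N-th reward
N = numel(rewards);
G = sum(gamma.^(0:N-1).*rewards) + gamma^N*qTargetNext;
```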

`ExperienceBufferLength` — Experience buffer size
`10000` (default) | positive integer

Experience buffer size, specified as a positive integer. During training, the agent computes updates using a mini-batch of experiences randomly sampled from the buffer.

**Example:** `ExperienceBufferLength=1e6`

`SampleTime` — Sample time of agent
`1` (default) | positive scalar | `-1`

Sample time of agent, specified as a positive scalar or as `-1`. Setting this parameter to `-1` allows for event-based simulations.

Within a Simulink® environment, the RL Agent block in which the agent is specified executes every `SampleTime` seconds of simulation time. If `SampleTime` is `-1`, the block inherits the sample time from its parent subsystem.

Within a MATLAB® environment, the agent is executed every time the environment advances. In this case, `SampleTime` is the time interval between consecutive elements in the output experience returned by `sim` or `train`. If `SampleTime` is `-1`, the time interval between consecutive elements in the returned output experience reflects the timing of the event that triggers the agent execution.

**Example:** `SampleTime=-1`

`DiscountFactor` — Discount factor
`0.99` (default) | positive scalar less than or equal to 1

Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.

**Example:** `DiscountFactor=0.9`

## Object Functions

| Function | Description |
|---|---|
| `rlDDPGAgent` | Deep deterministic policy gradient (DDPG) reinforcement learning agent |

## Examples

### Create DDPG Agent Options Object

Create an `rlDDPGAgentOptions` object that specifies the mini-batch size.

```
opt = rlDDPGAgentOptions(MiniBatchSize=48)
```

```
opt = 
  rlDDPGAgentOptions with properties:

                             SampleTime: 1
                         DiscountFactor: 0.9900
                           NoiseOptions: [1x1 rl.option.OrnsteinUhlenbeckActionNoise]
                 ExperienceBufferLength: 10000
                          MiniBatchSize: 48
                         SequenceLength: 1
                  ActorOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                 CriticOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                    NumStepsToLookAhead: 1
                     TargetSmoothFactor: 1.0000e-03
                  TargetUpdateFrequency: 1
            BatchDataRegularizerOptions: []
    ResetExperienceBufferBeforeTraining: 0
                             InfoToSave: [1x1 struct]
```

You can modify options using dot notation. For example, set the agent sample time to `0.5`.

```
opt.SampleTime = 0.5;
```

## Algorithms

### Noise Model

**Ornstein-Uhlenbeck Action Noise**

An `OrnsteinUhlenbeckActionNoise` object has the following numeric value properties.

| Property | Description | Default Value |
|---|---|---|
| `InitialAction` | Initial value of action | `0` |
| `Mean` | Noise mean value | `0` |
| `MeanAttractionConstant` | Constant specifying how quickly the noise model output is attracted to the mean | `0.15` |
| `StandardDeviationDecayRate` | Decay rate of the standard deviation | `0` |
| `StandardDeviation` | Initial value of noise standard deviation | `0.3` |
| `StandardDeviationMin` | Minimum standard deviation | `0` |

At each sample time step `k`, the noise value `v(k)` is updated using the following formula, where `Ts` is the agent sample time, and the initial value `v(1)` is defined by the `InitialAction` parameter.

```
v(k+1) = v(k) + MeanAttractionConstant.*(Mean - v(k)).*Ts ...
         + StandardDeviation(k).*randn(size(Mean)).*sqrt(Ts)
```

At each sample time step, the standard deviation decays as shown in the following code.

```
decayedStandardDeviation = StandardDeviation(k).*(1 - StandardDeviationDecayRate);
StandardDeviation(k+1) = max(decayedStandardDeviation,StandardDeviationMin);
```

You can calculate how many samples it will take for the standard deviation to be halved using this formula.

```
halflife = log(0.5)/log(1-StandardDeviationDecayRate);
```
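Conversely, here is a sketch for choosing the decay rate from a desired half-life (the target of 1000 agent steps is hypothetical):

```
% Hypothetical target: halve the standard deviation after 1000 steps.
halflife = 1000;
opt = rlDDPGAgentOptions;
opt.NoiseOptions.StandardDeviationDecayRate = 1 - 0.5^(1/halflife);
```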

Note that `StandardDeviation` is conserved between the end of an episode and the start of the next one. Therefore, it keeps on uniformly decreasing over multiple episodes until it reaches `StandardDeviationMin`.

For continuous action signals, it is important to set the noise standard deviation appropriately to encourage exploration. It is common to set `StandardDeviation*sqrt(Ts)` to a value between 1% and 10% of your action range.
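For example, here is a minimal sketch, assuming a hypothetical action range of [-2, 2] and a sample time of 0.02, that targets 5% of the action range:

```
% Hypothetical action range [-2, 2] and sample time 0.02.
Ts = 0.02;
actionRange = 2 - (-2);            % upper action limit minus lower action limit
sigma = 0.05*actionRange/sqrt(Ts); % so that sigma*sqrt(Ts) is 5% of the range
opt = rlDDPGAgentOptions(SampleTime=Ts);
opt.NoiseOptions.StandardDeviation = sigma;
```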

If your agent converges on local optima too quickly, promote agent exploration by increasing the amount of noise; that is, by increasing the standard deviation. Also, to increase exploration, you can reduce the `StandardDeviationDecayRate`.

**Gaussian Action Noise**

A `GaussianActionNoise` object has the following numeric value properties.

| Property | Description | Default Value (`ExplorationModel`) | Default Value (`TargetPolicySmoothModel`) |
|---|---|---|---|
| `Mean` | Noise mean value | `0` | `0` |
| `StandardDeviationDecayRate` | Decay rate of the standard deviation | `0` | `0` |
| `StandardDeviation` | Initial value of noise standard deviation | `sqrt(0.1)` | `sqrt(0.2)` |
| `StandardDeviationMin` | Minimum standard deviation, which must be less than `StandardDeviation` | `0.01` | `0.01` |
| `LowerLimit` | Noise sample lower limit | `-Inf` | `-0.5` |
| `UpperLimit` | Noise sample upper limit | `Inf` | `0.5` |

At each time step `k`, the Gaussian noise `v` is sampled as shown in the following code.

```
w = Mean + randn(ActionSize).*StandardDeviation(k);
v(k+1) = min(max(w,LowerLimit),UpperLimit);
```

At each sample time step, the standard deviation decays as shown in the following code.

```
decayedStandardDeviation = StandardDeviation(k).*(1 - StandardDeviationDecayRate);
StandardDeviation(k+1) = max(decayedStandardDeviation,StandardDeviationMin);
```

Note that `StandardDeviation` is conserved between the end of an episode and the start of the next one. Therefore, it keeps on uniformly decreasing over multiple episodes until it reaches `StandardDeviationMin`.

## References

[1] Sutton, Richard S., and Andrew G. Barto. *Reinforcement Learning: An Introduction*. Second edition. Adaptive Computation and Machine Learning. Cambridge, MA: The MIT Press, 2018.

## Version History

**Introduced in R2019a**

### R2022a: The default value of the `ResetExperienceBufferBeforeTraining` property has changed

The default value of the `ResetExperienceBufferBeforeTraining` property has changed from `true` to `false`.

When creating a new DDPG agent, if you want to clear the experience buffer before training, you must specify `ResetExperienceBufferBeforeTraining` as `true`. For example, before training, set the property using dot notation.

```
agent.AgentOptions.ResetExperienceBufferBeforeTraining = true;
```

Alternatively, you can set the property to `true` in an `rlDDPGAgentOptions` object and use this object to create the DDPG agent.

### R2021a: Property names defining noise probability distribution in the `OrnsteinUhlenbeckActionNoise` object have changed

The properties defining the probability distribution of the Ornstein-Uhlenbeck (OU) noise model have been renamed. DDPG agents use OU noise for exploration.

- The `Variance` property has been renamed `StandardDeviation`.
- The `VarianceDecayRate` property has been renamed `StandardDeviationDecayRate`.
- The `VarianceMin` property has been renamed `StandardDeviationMin`.

The default values of these properties remain the same. When an `OrnsteinUhlenbeckActionNoise` noise object saved from a previous MATLAB release is loaded, the values of `Variance`, `VarianceDecayRate`, and `VarianceMin` are copied into the `StandardDeviation`, `StandardDeviationDecayRate`, and `StandardDeviationMin` properties, respectively.

The `Variance`, `VarianceDecayRate`, and `VarianceMin` properties still work, but they are not recommended. To define the probability distribution of the OU noise model, use the new property names instead.

**Update Code**

This table shows how to update your code to use the new property names for an `rlDDPGAgentOptions` object `ddpgopt`.

| Not Recommended | Recommended |
|---|---|
| `ddpgopt.NoiseOptions.Variance = 0.5;` | `ddpgopt.NoiseOptions.StandardDeviation = 0.5;` |
| `ddpgopt.NoiseOptions.VarianceDecayRate = 0.1;` | `ddpgopt.NoiseOptions.StandardDeviationDecayRate = 0.1;` |
| `ddpgopt.NoiseOptions.VarianceMin = 0;` | `ddpgopt.NoiseOptions.StandardDeviationMin = 0;` |

### R2020a: Target update method settings for DDPG agents have changed

Target update method settings for DDPG agents have changed. The following changes require updates to your code:

- The `TargetUpdateMethod` option has been removed. Now, DDPG agents determine the target update method based on the `TargetUpdateFrequency` and `TargetSmoothFactor` option values.
- The default value of `TargetUpdateFrequency` has changed from `4` to `1`.

To use one of the following target update methods, set the `TargetUpdateFrequency` and `TargetSmoothFactor` properties as indicated.

| Update Method | `TargetUpdateFrequency` | `TargetSmoothFactor` |
|---|---|---|
| Smoothing | `1` | Less than `1` |
| Periodic | Greater than `1` | `1` |
| Periodic smoothing (new method in R2020a) | Greater than `1` | Less than `1` |

The default target update configuration, which is a smoothing update with a `TargetSmoothFactor` value of `0.001`, remains the same.

**Update Code**

This table shows some typical uses of `rlDDPGAgentOptions` and how to update your code to use the new option configuration.

| Not Recommended | Recommended |
|---|---|
| `opt = rlDDPGAgentOptions('TargetUpdateMethod',"smoothing");` | `opt = rlDDPGAgentOptions;` |
| `opt = rlDDPGAgentOptions('TargetUpdateMethod',"periodic");` | `opt = rlDDPGAgentOptions; opt.TargetUpdateFrequency = 4; opt.TargetSmoothFactor = 1;` |
| `opt = rlDDPGAgentOptions; opt.TargetUpdateMethod = "periodic"; opt.TargetUpdateFrequency = 5;` | `opt = rlDDPGAgentOptions; opt.TargetUpdateFrequency = 5; opt.TargetSmoothFactor = 1;` |
