rlOptimizerOptions
Description
Use an rlOptimizerOptions
object to specify an optimization
options set for actors and critics.
Creation
Description
optOpts = rlOptimizerOptions creates a
default optimizer option set to use as a CriticOptimizerOptions
or ActorOptimizerOptions
property of an agent option object, or as the
last argument of rlOptimizer
to create an optimizer object. You can
modify the object properties using dot notation.
optOpts = rlOptimizerOptions(Name=Value)
creates an options set with the specified properties using one or more name-value
arguments.
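For example, a minimal sketch (the property value and choice of agent option object are illustrative) that creates a default option set, modifies it with dot notation, and uses it as the CriticOptimizerOptions property of an agent option object:
optOpts = rlOptimizerOptions;                  % default optimizer option set
optOpts.LearnRate = 0.005;                     % modify a property using dot notation (illustrative value)
agentOpts = rlACAgentOptions;                  % agent option object (AC agent chosen for illustration)
agentOpts.CriticOptimizerOptions = optOpts;    % use as the CriticOptimizerOptions property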
Properties
LearnRate
— Learning rate used in training the actor or critic function approximator
0.01
(default) | positive scalar
Learning rate used in training the actor or critic function approximator, specified as a positive scalar. If the learning rate is too low, then training takes a long time. If the learning rate is too high, then training might reach a suboptimal result or diverge.
Example: LearnRate=0.025
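For instance, you might create separate option sets with different learning rates for the critic and the actor (the values below are illustrative):
criticOpts = rlOptimizerOptions(LearnRate=0.01);    % higher learning rate for the critic
actorOpts  = rlOptimizerOptions(LearnRate=0.001);   % lower learning rate for the actor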
GradientThreshold
— Gradient threshold value for the training of the actor or critic function approximator
Inf
(default) | positive scalar
Gradient threshold value used in training the actor or critic function approximator,
specified as Inf
or a positive scalar. If the gradient exceeds this
value, the gradient is clipped as specified by the
GradientThresholdMethod
option. Clipping the gradient limits how
much the network parameters can change in a training iteration.
Example: GradientThreshold=1
GradientThresholdMethod
— Gradient threshold method used in training the actor or critic function approximator
"l2norm"
(default) | "global-l2norm"
| "absolute-value"
Gradient threshold method used in training the actor or critic function approximator. This is the specific method used to clip gradient values that exceed the gradient threshold, specified as one of the following values.
- "l2norm" — If the L2 norm of the vector Glyr containing the gradient components related to the weights or biases of a layer is larger than GradientThreshold, then this option scales Glyr by a factor of GradientThreshold/L, where L is the L2 norm of Glyr. When you use this option, the L2 norm of Glyr in the returned gradient cannot exceed GradientThreshold. For example, a fully connected layer has two parameter arrays, Weights and Bias. The threshold is applied to the L2 norm of the gradient components related to Weights and to Bias separately.
- "global-l2norm" — If the L2 norm of the gradient G (with respect to all learnable network parameters) is larger than GradientThreshold, then this option scales G by a factor of GradientThreshold/L, where L is the L2 norm of G. When you use this option, the L2 norm of the returned gradient cannot exceed GradientThreshold.
- "absolute-value" — If the absolute value of an individual (scalar) partial derivative in the gradient G (with respect to all learnable network parameters) is larger than GradientThreshold, then this option scales the partial derivative so that the corresponding component in the returned gradient has magnitude equal to GradientThreshold and the same sign as the original partial derivative. When you use this option, the absolute value of any component of the returned gradient cannot exceed GradientThreshold.
For more information, see Gradient Clipping in the
Algorithms section of trainingOptions
in Deep Learning Toolbox™.
Example: GradientThresholdMethod="absolute-value"
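As an illustrative sketch in plain MATLAB (not a toolbox call; the threshold and gradient values are assumed), this is the rescaling that the "l2norm" method applies to the gradient of one parameter array:
gradientThreshold = 1;                   % assumed threshold value
Glyr = [3 -4];                           % example gradient components for one parameter array (L2 norm = 5)
L = norm(Glyr);
if L > gradientThreshold
    Glyr = Glyr*(gradientThreshold/L);   % rescaled so that norm(Glyr) equals gradientThreshold
end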
L2RegularizationFactor
— Factor for L2 regularization used in training the actor or critic function approximator
0.0001 (default) | nonnegative scalar
Factor for L2 regularization (weight
decay) used in training the actor or critic function approximator, specified as a
nonnegative scalar. For more information, see L2 Regularization in the Algorithms section of trainingOptions
in Deep Learning Toolbox.
To avoid overfitting when using a representation with many parameters, consider
increasing the L2RegularizationFactor
option.
Example: L2RegularizationFactor=0.0005
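For example, for a function approximator with many learnable parameters you might raise this factor (the value is illustrative):
criticOpts = rlOptimizerOptions(L2RegularizationFactor=1e-3);   % stronger weight decay than the default 1e-4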
Algorithm
— Algorithm used for training actor or critic function approximator
"adam"
(default) | "sgdm"
| "rmsprop"
Algorithm used for training the actor or critic function approximator, specified as one of the following values.
- "adam" — Use the Adam (adaptive moment estimation) algorithm. You can specify the decay rates of the gradient and squared gradient moving averages using the GradientDecayFactor and SquaredGradientDecayFactor fields of the OptimizerParameters option.
- "sgdm" — Use the stochastic gradient descent with momentum (SGDM) algorithm. You can specify the momentum value using the Momentum field of the OptimizerParameters option.
- "rmsprop" — Use the RMSProp algorithm. You can specify the decay rate of the squared gradient moving average using the SquaredGradientDecayFactor field of the OptimizerParameters option.
For more information about these algorithms, see the Algorithms section
of trainingOptions
in Deep Learning Toolbox.
Example: Algorithm="sgdm"
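For instance, a short sketch (the momentum value is illustrative) that selects SGDM and sets its momentum through the OptimizerParameters option:
actorOpts = rlOptimizerOptions(Algorithm="sgdm");
actorOpts.OptimizerParameters.Momentum = 0.9;   % momentum used by the SGDM update (illustrative value)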
OptimizerParameters
— Parameters for the training algorithm used for training the actor or critic function approximator
OptimizerParameters
object
Parameters for the training algorithm used for training the actor or critic function
approximator, specified as an OptimizerParameters
object with the
following parameters.
Parameter | Description
---|---
Momentum | Contribution of previous step, specified as a scalar from 0 to 1. A value of 0 means no contribution from the previous step. A value of 1 means maximal contribution. This parameter applies only when Algorithm is "sgdm".
Epsilon | Denominator offset, specified as a positive scalar. The optimizer adds this offset to the denominator in the network parameter updates to avoid division by zero. This parameter applies only when Algorithm is "adam" or "rmsprop".
GradientDecayFactor | Decay rate of gradient moving average, specified as a positive scalar from 0 to 1. This parameter applies only when Algorithm is "adam".
SquaredGradientDecayFactor | Decay rate of squared gradient moving average, specified as a positive scalar from 0 to 1. This parameter applies only when Algorithm is "adam" or "rmsprop".
When a particular property of OptimizerParameters is not applicable to the optimizer type specified in Algorithm, that property is set to "Not applicable".
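For example (a sketch; the exact property display is not reproduced here), after selecting SGDM the parameters that SGDM does not use report "Not applicable":
opts = rlOptimizerOptions(Algorithm="sgdm");
opts.OptimizerParameters    % parameters not used by SGDM display as "Not applicable"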
To change property values, create an rlOptimizerOptions object and use dot notation to access and change the properties of OptimizerParameters.
optOpts = rlOptimizerOptions;
optOpts.OptimizerParameters.GradientDecayFactor = 0.95;
Object Functions
rlQAgentOptions | Options for Q-learning agent |
rlSARSAAgentOptions | Options for SARSA agent |
rlDQNAgentOptions | Options for DQN agent |
rlPGAgentOptions | Options for PG agent |
rlDDPGAgentOptions | Options for DDPG agent |
rlTD3AgentOptions | Options for TD3 agent |
rlACAgentOptions | Options for AC agent |
rlPPOAgentOptions | Options for PPO agent |
rlTRPOAgentOptions | Options for TRPO agent |
rlSACAgentOptions | Options for SAC agent |
rlOptimizer | Creates an optimizer object for actors and critics |
Examples
Create Optimizer Options Object
Use rlOptimizerOptions to create a default optimizer option object to use for the training of a critic function approximator.
myCriticOpts = rlOptimizerOptions
myCriticOpts = 
  rlOptimizerOptions with properties:

                  LearnRate: 0.0100
          GradientThreshold: Inf
    GradientThresholdMethod: "l2norm"
     L2RegularizationFactor: 1.0000e-04
                  Algorithm: "adam"
        OptimizerParameters: [1x1 rl.option.OptimizerParameters]
Using dot notation, change the training algorithm to stochastic gradient descent with momentum and set the value of the momentum parameter to 0.6.
myCriticOpts.Algorithm = "sgdm";
myCriticOpts.OptimizerParameters.Momentum = 0.6;
Create an AC agent option object, and set its CriticOptimizerOptions property to myCriticOpts.
myAgentOpt = rlACAgentOptions;
myAgentOpt.CriticOptimizerOptions = myCriticOpts;
You can now use myAgentOpt as the last input argument to rlACAgent when creating your AC agent.
Create Optimizer Options Object Specifying Property Values
Use rlOptimizerOptions to create an optimizer option object to use for the training of an actor function approximator. Specify a learning rate of 0.2 and set the GradientThresholdMethod to "absolute-value".
myActorOpts = rlOptimizerOptions(LearnRate=0.2, ...
    GradientThresholdMethod="absolute-value")
myActorOpts = 
  rlOptimizerOptions with properties:

                  LearnRate: 0.2000
          GradientThreshold: Inf
    GradientThresholdMethod: "absolute-value"
     L2RegularizationFactor: 1.0000e-04
                  Algorithm: "adam"
        OptimizerParameters: [1x1 rl.option.OptimizerParameters]
Using dot notation, change the GradientThreshold to 10.
myActorOpts.GradientThreshold = 10;
Create an AC agent option object and set its ActorOptimizerOptions property to myActorOpts.
myAgentOpt = rlACAgentOptions( ...
ActorOptimizerOptions=myActorOpts);
You can now use myAgentOpt as the last input argument to rlACAgent when creating your AC agent.
Version History
Introduced in R2022a