Options for PG agent
Use an rlPGAgentOptions object to specify options for policy
gradient (PG) agents. To create a PG agent, use rlPGAgent.
For more information on PG agents, see Policy Gradient Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
opt = rlPGAgentOptions
creates an rlPGAgentOptions object for use as an argument when creating a PG
agent using all default settings. You can modify the object properties using dot
notation.
UseBaseline — Use baseline for learning
true (default) | false
Option to use a baseline for learning, specified as a logical value. When
true, you must specify a critic
network as the baseline function approximator.
In general, for simpler problems with smaller actor networks, PG agents work better without a baseline.
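For example, the following minimal sketch (assuming that an actor representation actor and a critic representation critic already exist) passes the critic as the baseline when creating the agent:

% Minimal sketch; actor and critic are assumed to already exist.
opt = rlPGAgentOptions('UseBaseline',true);
agent = rlPGAgent(actor,critic,opt); % the critic serves as the baseline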
UseDeterministicExploitation — Use action with maximum likelihood
false (default) | true
Option to return the action with maximum likelihood for simulation and policy generation,
specified as a logical value. When UseDeterministicExploitation is set to
true, the action with maximum likelihood is always used in
sim and generatePolicyFunction, which causes the agent to behave
deterministically. When UseDeterministicExploitation is set to false, the
agent samples actions from probability distributions, which causes the agent to behave
stochastically.
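For example, the following minimal sketch (assuming a trained agent agent and an environment env already exist) makes the agent act greedily during simulation:

% Minimal sketch; agent and env are assumed to already exist.
agent.AgentOptions.UseDeterministicExploitation = true;
experience = sim(env,agent); % the most likely action is always selected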
SampleTime — Sample time of agent
1 (default) | positive scalar
Sample time of agent, specified as a positive scalar.
Within a Simulink® environment, the agent gets executed every
SampleTime seconds of simulation time.
Within a MATLAB® environment, the agent gets executed every time the environment advances. However,
SampleTime is the time interval between consecutive elements in the output experience returned by
sim or train.
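For example, the following minimal sketch creates an options object for an agent that executes every 0.1 seconds of Simulink simulation time:

opt = rlPGAgentOptions('SampleTime',0.1);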
DiscountFactor — Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
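As an illustration (not part of the agent API), a reward received n steps in the future contributes to the return with weight DiscountFactor^n:

% Illustration only: discounted return for a hypothetical reward sequence.
discount = 0.99;
rewards = [1 1 1 1 1]; % hypothetical rewards for five consecutive steps
G = sum(discount.^(0:numel(rewards)-1).*rewards) % discounted return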
EntropyLossWeight — Entropy loss weight
0 (default) | scalar value between 0 and 1
Entropy loss weight, specified as a scalar value between 0 and
1. A higher loss weight value promotes agent exploration by applying a penalty for being too certain about which action to take. Doing so can help the agent move out of local optima.
For episode step t, the entropy loss function, which is added to the loss function for actor updates, is:

$$H_t = E\sum_{k=1}^{M} \mu_k(S_t|\theta_\mu)\ln\mu_k(S_t|\theta_\mu)$$

Here:
E is the entropy loss weight.
M is the number of possible actions.
$\mu_k(S_t|\theta_\mu)$ is the probability of taking action $A_k$ when in state $S_t$ following the current policy.
When gradients are computed during training, an additional gradient component is computed for minimizing this loss function.
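As an illustration (not part of the agent API), the entropy loss for one step can be evaluated directly for a hypothetical action probability vector:

% Illustration only: entropy loss for one episode step.
E = 0.01;           % entropy loss weight
mu = [0.7 0.2 0.1]; % hypothetical probabilities of the M possible actions
H = E*sum(mu.*log(mu)) % more negative when the policy is less certain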
rlPGAgent — Policy gradient reinforcement learning agent
This example shows how to create and modify a PG agent options object.
Create a PG agent options object, specifying the discount factor.
opt = rlPGAgentOptions('DiscountFactor',0.9)
opt = 
  rlPGAgentOptions with properties:

                     UseBaseline: 1
               EntropyLossWeight: 0
    UseDeterministicExploitation: 0
                      SampleTime: 1
                  DiscountFactor: 0.9000
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;