
Train Biped Robot to Walk Using Evolution Strategy

This example shows how to train a biped robot to walk using an evolution strategy with a twin-delayed deep deterministic policy gradient (TD3) reinforcement learning (RL) agent. The robot in this example is modeled in Simscape™ Multibody™.

For a related example, see Train Biped Robot to Walk Using Reinforcement Learning Agents. For more information on these agents, see Twin-Delayed Deep Deterministic (TD3) Policy Gradient Agents.

For this example, the agent is trained using the evolution strategy reinforcement learning (ES-RL) algorithm. This algorithm [4] combines the cross-entropy method (CEM) with off-policy RL algorithms such as SAC, DDPG, or TD3. CEM-RL is built on the framework of evolutionary reinforcement learning (ERL) [5], in which a standard evolutionary algorithm selects and evolves a population of actors and generates experiences in the process. These experiences are then added to a replay buffer that is used to train a single gradient-based actor, which is itself considered part of the population.

The ES-RL algorithm proceeds as follows:

  1. A population of actor networks is initialized with random weights. In addition to the population, one additional actor network is initialized alongside a critic network.

  2. The population of actors is then evaluated in an episode of interaction with the environment.

  3. The additional actor and the critic are updated using the replay buffer populated during the evaluation of the population actors.

  4. The fitness of each actor in the population is computed from its interaction with the environment. The average return over the episode is used as the fitness value.

  5. A selection operator selects surviving actors in the population based on their relative fitness scores.

  6. The surviving elite set of actors is used to generate the next population of actors.
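The steps above can be sketched in a compact, language-agnostic form. The following Python sketch is illustrative only: the toy quadratic fitness function and all names are assumptions, not toolbox APIs, and the gradient-based actor and critic update of step 3 is omitted.

```python
import numpy as np

def es_loop(fitness_fn, dim, pop_size=25, elite_frac=0.5, generations=50, seed=0):
    """Toy CEM-style evolution loop: sample actors, evaluate, select elites, refit."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)   # mean of the sampling distribution over actor parameters
    std = np.ones(dim)     # toy value; the example below sets InitialStandardDeviation = 0.25
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(generations):
        # Steps 1-2 and 4: sample a population and use return as fitness
        population = mean + std * rng.standard_normal((pop_size, dim))
        fitness = np.array([fitness_fn(p) for p in population])
        # Step 5: keep the elite set with the highest fitness
        elites = population[np.argsort(fitness)[-n_elite:]]
        # Step 6: refit the sampling distribution to generate the next population
        mean = elites.mean(axis=0)
        std = elites.std(axis=0) + 0.05  # small floor keeps exploration alive
    return mean

# Toy fitness: maximize -(p - 2)^2 per parameter, so the optimum is at p = 2.
best = es_loop(lambda p: -np.sum((p - 2.0) ** 2), dim=3)
```

In the full ES-RL algorithm, the fitness function is the episodic return from a simulation of the environment rather than a closed-form expression.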

Biped Robot Model

The reinforcement learning environment for this example is a biped robot. The training goal is to make the robot walk in a straight line using minimal control effort.

Load the parameters of the model into the MATLAB® workspace.

robotParametersRL

Open the Simulink model.

mdl = "rlWalkingBipedRobot";
open_system(mdl)

The robot is modeled using Simscape Multibody.

For this model:

  • In the neutral 0 radians position, both legs are straight and the ankles are flat.

  • The foot contact is modeled using the Spatial Contact Force (Simscape Multibody) block.

The agent can control the three individual joints (ankle, knee, and hip) on both legs of the robot by applying joint torques bounded between -3 and 3 N·m. The computed action signals are normalized between -1 and 1.
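As a quick illustration (the helper function below is hypothetical, not a toolbox API), a normalized action in [-1, 1] maps linearly onto the torque range of -3 to 3 N·m:

```python
def action_to_torque(a, max_torque=3.0):
    """Map a normalized action in [-1, 1] to a joint torque in [-max_torque, max_torque] N*m."""
    a = max(-1.0, min(1.0, a))  # clip to the valid normalized range
    return a * max_torque

# A normalized action of 0.5 commands half of the maximum torque.
half = action_to_torque(0.5)
clipped = action_to_torque(-2.0)  # out-of-range actions are clipped first
```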

The environment provides the following 29 observations to the agent.

  • Y (lateral) and Z (vertical) translations of the torso center of mass. The translation in the Z direction is normalized to a similar range as the other observations.

  • X (forward), Y (lateral), and Z (vertical) translation velocities.

  • Yaw, pitch, and roll angles of the torso.

  • Yaw, pitch, and roll angular velocities of the torso.

  • Angular positions and velocities of the three joints (ankle, knee, hip) on both legs.

  • Action values from the previous time step.

The episode terminates if either of the following conditions occurs.

  • The robot torso center of mass is less than 0.1 m in the Z direction (the robot falls) or more than 1 m in either Y direction (the robot moves too far to the side).

  • The absolute value of the roll, pitch, or yaw is greater than 0.7854 rad.
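The two termination conditions can be expressed as a simple predicate. The helper below is hypothetical, shown in Python for illustration only:

```python
def episode_terminated(z, y, roll, pitch, yaw):
    """Return True if the walking episode should end."""
    fallen = z < 0.1                  # torso center of mass dropped below 0.1 m
    off_track = abs(y) > 1.0          # strayed more than 1 m to either side
    tipped = max(abs(roll), abs(pitch), abs(yaw)) > 0.7854  # about 45 degrees
    return fallen or off_track or tipped

ended = episode_terminated(z=0.05, y=0.0, roll=0.0, pitch=0.0, yaw=0.0)  # robot fell
walking = episode_terminated(z=0.3, y=0.2, roll=0.1, pitch=0.0, yaw=0.0)
```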

The following reward function r_t, which is provided at every time step, is inspired by [2].

r_t = v_x - 3y^2 - 50ẑ^2 + 25(T_s/T_f) - 0.02 Σ_i (u_{t-1}^i)^2

Here:

  • v_x is the translation velocity in the X direction (forward, toward the goal) of the robot.

  • y is the lateral translation displacement of the robot from the target straight-line trajectory.

  • ẑ is the normalized vertical translation displacement of the robot center of mass.

  • u_{t-1}^i is the torque of joint i from the previous time step.

  • T_s is the sample time of the environment.

  • T_f is the final simulation time of the environment.

This reward function encourages the agent to move forward by providing a positive reward for positive forward velocity. It also encourages the agent to avoid episode termination by providing a constant reward (25 T_s/T_f) at every time step. The remaining terms are penalties for substantial lateral and vertical translations and for excess control effort.
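A direct transcription of the reward might look as follows (a hypothetical helper; the T_s and T_f defaults below are illustrative values, not taken from the example):

```python
def step_reward(vx, y, z_hat, u_prev, Ts=0.025, Tf=10.0):
    """Per-step reward: forward progress minus lateral, vertical, and effort
    penalties, plus a survival bonus of 25*Ts/Tf for not terminating."""
    effort = sum(u * u for u in u_prev)  # sum over the six previous joint torques
    return vx - 3 * y**2 - 50 * z_hat**2 + 25 * Ts / Tf - 0.02 * effort

# Walking forward at 0.5 m/s, perfectly on the line, with small torques:
r = step_reward(vx=0.5, y=0.0, z_hat=0.0, u_prev=[0.1] * 6)
```

Note how larger torques in u_prev reduce the reward, which is what discourages excess control effort.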

Create Environment Interface

Create the observation specification.

numObs = 29;
obsInfo = rlNumericSpec([numObs 1]);
obsInfo.Name = "observations";

Create the action specification.

numAct = 6;
actInfo = rlNumericSpec([numAct 1],LowerLimit=-1,UpperLimit=1);
actInfo.Name = "foot_torque";

Create the environment interface for the walking robot model.

blk = mdl + "/RL Agent";
env = rlSimulinkEnv(mdl,blk,obsInfo,actInfo);
env.ResetFcn = @(in) walkerResetFcn(in, ...
    upper_leg_length/100, ...
    lower_leg_length/100, ...
    h/100);

Create RL Agent for Training

This example trains a TD3 agent using an evolutionary-strategy-based gradient-free optimization technique to learn biped locomotion. Create the TD3 agent.

agent = createTD3Agent(numObs,obsInfo,numAct,actInfo,Ts);

The createTD3Agent helper function performs the following actions.

  • Create the actor and critic networks.

  • Specify training options for actor and critic.

  • Create actor and critic using the networks and options defined.

  • Configure agent-specific options.

  • Create the agent.

TD3 Agent

The TD3 algorithm is an extension of DDPG with improvements that make it more robust by preventing overestimation of Q values [3].

  • Two critic networks — TD3 agents learn two critic networks independently and use the minimum value function estimate to update the actor (policy). Doing so avoids overestimation of Q values through the maximum operator in the critic update.

  • Addition of target policy noise — Adding clipped noise to the target action smooths the value estimate over similar actions. Doing so prevents the policy from exploiting incorrect sharp peaks in a noisy value estimate.

  • Delayed policy and target updates — For a TD3 agent, delaying the actor network update allows more time for the Q function to reduce its error (get closer to the target) before the policy update. Doing so reduces variance in the value estimates and results in a higher quality policy update.
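The first two ideas come together in the TD3 critic target computation. This NumPy sketch uses scalar toy values and illustrative names; it is not the toolbox implementation. The delayed update simply means the actor optimizer runs once every few critic updates.

```python
import numpy as np

def td3_target(r, q1_next, q2_next, gamma=0.99, done=False):
    """TD3 critic target: bootstrap from the MINIMUM of the two target critics,
    which curbs Q-value overestimation."""
    q_min = min(q1_next, q2_next)
    return r + (0.0 if done else gamma * q_min)

def smoothed_target_action(a, rng, sigma=0.2, c=0.5, a_low=-1.0, a_high=1.0):
    """Target-policy smoothing: add clipped Gaussian noise, then clip to bounds."""
    noise = np.clip(sigma * rng.standard_normal(), -c, c)
    return float(np.clip(a + noise, a_low, a_high))

rng = np.random.default_rng(0)
target = td3_target(r=1.0, q1_next=10.0, q2_next=8.0)  # bootstraps from 8.0, not 10.0
a_smooth = smoothed_target_action(0.9, rng)            # stays within [-1, 1]
```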

The structure of the actor and critic networks used for this agent is the same as the one used for DDPG agents. For details on creating the TD3 agent, see the createTD3Agent helper function. For information on configuring TD3 agent options, see rlTD3AgentOptions.

Specify Evolution Strategy Training Options and Train the Agent

Set ES-RL training options as follows:

  • Set PopulationSize, the number of actors that are evaluated in each generation, to 25.

  • Set PercentageEliteSize, the size of the surviving elite population from which next generation actors are generated, to 50% of the total population.

  • Set MaxGenerations, the maximum number of generations for the population to evolve, to 2000.

  • Set MaxStepsPerEpisode, the maximum number of simulation steps per episode for each actor, to floor(Tf/Ts).

  • Set TrainEpochs, the number of training epochs for the gradient-based agent, to 50.

  • Display the training progress in the Episode Manager dialog box by setting Plots to "training-progress" and disable the command line display by setting Verbose to false (0).

  • Terminate the training when the agent reaches an episode reward of 250.

For more information and additional options, see rlEvolutionStrategyTrainingOptions.

maxGenerations = 2000;
maxSteps = floor(Tf/Ts);
trainOpts = rlEvolutionStrategyTrainingOptions(...
    "MaxGenerations", maxGenerations, ...
    "MaxStepsPerEpisode", maxSteps, ...
    "ScoreAveragingWindowLength", 10, ...
    "Plots", "training-progress", ...
    "StopTrainingCriteria", "EpisodeReward", ...
    "StopTrainingValue", 250, ...
    "PopulationSize", 25, ...
    "PercentageEliteSize", 50, ...
    "ReturnedPolicy", "BestPolicy", ...
    "Verbose", 0, ...
    "SaveAgentCriteria", "none");

trainOpts.TrainEpochs = 50;
trainOpts.EvaluationsPerIndividual = 1;

trainOpts.PopulationUpdateOptions.UpdateMethod = "WeightedMixing";
trainOpts.PopulationUpdateOptions.InitialStandardDeviation = 0.25;
trainOpts.PopulationUpdateOptions.InitialStandardDeviationBias = 0.25;
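A weighted-mixing population update can be sketched as a fitness-weighted average of the elite parameters. The rank-proportional weighting below is an illustrative assumption, not necessarily the scheme the toolbox uses; see rlEvolutionStrategyTrainingOptions for the actual behavior.

```python
import numpy as np

def weighted_mixing_update(elites, elite_fitness):
    """New distribution mean: elites averaged with weights increasing in fitness rank."""
    order = np.argsort(elite_fitness)             # worst elite first
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(elites) + 1)  # 1 = worst, n = best
    w = ranks / ranks.sum()
    return (w[:, None] * elites).sum(axis=0)

# Three elite parameter vectors with increasing fitness:
elites = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
new_mean = weighted_mixing_update(elites, elite_fitness=np.array([1.0, 2.0, 3.0]))
# Higher-fitness elites pull the new mean toward themselves.
```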

Train the agent using the trainWithEvolutionStrategy function. This process is computationally intensive and takes several hours to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.

doTraining = false;
if doTraining    
    % Train the agent.
    trainingStats = trainWithEvolutionStrategy(agent,env,trainOpts);
else
    % Load a pretrained agent.
    load("rlWalkingBipedRobotESTD3.mat","saved_agent")
end

In this example, training stopped when the agent reached an episode reward of 250. The steady increase in the reward over generations suggests that the agent could converge toward the true discounted long-term reward with a longer training period.

Simulate Trained Agents

Fix the random seed for reproducibility.

rng(0)

To validate the performance of the trained agent, simulate it within the biped robot environment. For more information on agent simulation, see rlSimulationOptions and sim.

simOptions = rlSimulationOptions(MaxSteps=maxSteps);
experience = sim(env,saved_agent,simOptions);

The figure shows the simulated biped robot while walking along a line.

References

[1] Lillicrap, Timothy P., Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. "Continuous Control with Deep Reinforcement Learning." Preprint, submitted July 5, 2019. https://arxiv.org/abs/1509.02971.

[2] Heess, Nicolas, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, et al. "Emergence of Locomotion Behaviours in Rich Environments." Preprint, submitted July 10, 2017. https://arxiv.org/abs/1707.02286.

[3] Fujimoto, Scott, Herke van Hoof, and David Meger. "Addressing Function Approximation Error in Actor-Critic Methods." Preprint, submitted October 22, 2018. https://arxiv.org/abs/1802.09477.

[4] Pourchot, Aloïs, and Olivier Sigaud. "CEM-RL: Combining evolutionary and gradient-based methods for policy search." Preprint, submitted February 11, 2019. https://arxiv.org/abs/1810.01222.

[5] Khadka, Shauharda, and Kagan Tumer. "Evolution-guided policy gradient in reinforcement learning." Advances in Neural Information Processing Systems 31 (2018). https://arxiv.org/abs/1805.07917.
