RL Agent training SimulationInput error

Manel Vilella Vega on 16 Dec 2020
Commented: Akimzhan on 11 Apr 2022
I'm having trouble with a modification of the TrainDDPGAgentForACCExample.mlx example. I have adapted it to my case; the main difference is the size of the action and observation spaces.
I'm getting the following error messages:
Warning: Evaluation of function '@(in_)localInnerSimFcn(simData,in_,[])' failed with the
following errors. Using empty SimulationOutput object as return value.
Warning: There was an error evaluating the reset function. See the help for the ResetFcn
property for instructions on creating a valid reset function.
Warning: Undefined function 'localResetFcn' for input arguments of type
'Simulink.SimulationInput'.
Warning: Error occurred while executing the listener callback for event EpisodeFinished
defined for class rl.env.SimulinkEnvWithAgent:
Dot indexing is not supported for variables of this type.
Error in rl.train.TrainingManager/update (line 129)
this.TotalEpisodeStepCount = this.TotalEpisodeStepCount + epinfo.StepsTaken;
Error in
rl.train.TrainingManager>@(src,ed)update(this,ed.Data.EpisodeInfo,ed.Data.EpisodeCount,ed.Data.WorkerID)
(line 277)
@(src,ed)
update(this,ed.Data.EpisodeInfo,ed.Data.EpisodeCount,ed.Data.WorkerID));
Error in rl.env.AbstractEnv/notifyEpisodeFinished (line 320)
notify(this,'EpisodeFinished',ed);
Error in rl.env.SimulinkEnvWithAgent/executeSimsWrapper/nestedSimFinishedBC (line 333)
notifyEpisodeFinished(this,...
Error in rl.env.SimulinkEnvWithAgent>@(src,ed)nestedSimFinishedBC(ed) (line 341)
simlist(1) = event.listener(this.SimMgr,'SimulationFinished' ,@(src,ed)
nestedSimFinishedBC(ed));
Error in Simulink.SimulationManager/handleSimulationOutputAvailable
Error in
Simulink.SimulationManager>@(varargin)obj.handleSimulationOutputAvailable(varargin{:})
Error in MultiSim.internal.SimulationRunnerSerial/executeImplSingle
Error in MultiSim.internal.SimulationRunnerSerial/executeImpl
Error in Simulink.SimulationManager/executeSims
Error in Simulink.SimulationManagerEngine/executeSims (line 50)
out = obj.SimulationManager.executeSims(fh);
Error in rl.env.SimulinkEnvWithAgent/executeSimsWrapper (line 358)
executeSims(this.SimEngine,simfh,in);
Error in rl.env.SimulinkEnvWithAgent/simWrapper (line 408)
simouts = executeSimsWrapper(this,policy,in,simfh,simouts,opts);
Error in rl.env.SimulinkEnvWithAgent/simWithPolicy (line 593)
simouts = simWrapper(env,policy,simData,in,opts);
Error in rl.train.seriesTrain (line 16)
[~,simInfo] = simWithPolicy(env,agent,rlSimulationOptions(...
Error in rl.train.TrainingManager/train (line 251)
rl.train.seriesTrain(this);
Error in rl.train.TrainingManager/run (line 155)
train(this);
Error in rl.agent.AbstractAgent/train (line 54)
TrainingStatistics = run(trainMgr);
Error in Dades_manel (line 114)
trainingStats = train(agent,env,trainingOpts);
> In rl.env/AbstractEnv/notifyEpisodeFinished (line 320)
In rl.env.SimulinkEnvWithAgent.executeSimsWrapper/nestedSimFinishedBC (line 333)
In rl.env.SimulinkEnvWithAgent>@(src,ed)nestedSimFinishedBC(ed) (line 341)
In Simulink/SimulationManager/handleSimulationOutputAvailable
In Simulink.SimulationManager>@(varargin)obj.handleSimulationOutputAvailable(varargin{:})
In MultiSim.internal/SimulationRunnerSerial/executeImplSingle
In MultiSim.internal/SimulationRunnerSerial/executeImpl
In Simulink/SimulationManager/executeSims
In Simulink/SimulationManagerEngine/executeSims (line 50)
In rl.env/SimulinkEnvWithAgent/executeSimsWrapper (line 358)
In rl.env/SimulinkEnvWithAgent/simWrapper (line 408)
In rl.env/SimulinkEnvWithAgent/simWithPolicy (line 593)
In rl.train.seriesTrain (line 16)
In rl.train/TrainingManager/train (line 251)
In rl.train/TrainingManager/run (line 155)
In rl.agent.AbstractAgent/train (line 54)
In Dades_manel (line 114)
Error using rl.train.seriesTrain (line 16)
An error occurred while simulating "Mod_SVPWM_SiC" with the agent
"rl.util.PolicyInstance.get()".
Error in rl.train.TrainingManager/train (line 251)
rl.train.seriesTrain(this);
Error in rl.train.TrainingManager/run (line 155)
train(this);
Error in rl.agent.AbstractAgent/train (line 54)
TrainingStatistics = run(trainMgr);
Error in Dades_manel (line 114)
trainingStats = train(agent,env,trainingOpts);
Caused by:
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 817)
There was an error evaluating the reset function. See the help for the ResetFcn
property for instructions on creating a valid reset function.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 817)
Undefined function 'localResetFcn' for input arguments of type
'Simulink.SimulationInput'.
My code is:
Ts = 1e-8;
Tf = 0.02;
mdl = 'Model_test1';
open_system(mdl)
agentblk = [mdl '/RL Agent'];
%% Environment interface
% create the observation info
observationInfo = rlNumericSpec([9 1]);
observationInfo.Name = 'observations';
% action Info
actionInfo = rlNumericSpec([3 1],'LowerLimit',0,'UpperLimit',1);
actionInfo.Name = 'acceleration';
% define environment
env = rlSimulinkEnv(mdl,agentblk,observationInfo,actionInfo);
% randomize initial positions of lead car
env.ResetFcn = @(in)localResetFcn(in);
rng('default')
%% DDPG agent
L = 48; % number of neurons
statePath = [
    imageInputLayer([9 1 1],'Normalization','none','Name','observation')
    fullyConnectedLayer(L,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(L,'Name','fc2')
    additionLayer(2,'Name','add')
    reluLayer('Name','relu2')
    fullyConnectedLayer(L,'Name','fc3')
    reluLayer('Name','relu3')
    fullyConnectedLayer(1,'Name','fc4')];
actionPath = [
    imageInputLayer([3 1 1],'Normalization','none','Name','action')
    fullyConnectedLayer(L,'Name','fc5')];
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork,actionPath);
criticNetwork = connectLayers(criticNetwork,'fc5','add/in2');
%% Critic representation
%plot(criticNetwork)
criticOptions = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1,'L2RegularizationFactor',1e-4);
critic = rlRepresentation(criticNetwork,observationInfo,actionInfo,...
    'Observation',{'observation'},'Action',{'action'},criticOptions);
%% Create actor
actorNetwork = [
    imageInputLayer([9 1 1],'Normalization','none','Name','observation')
    fullyConnectedLayer(L,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(L,'Name','fc2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(L,'Name','fc3')
    reluLayer('Name','relu3')
    fullyConnectedLayer(3,'Name','fc4')
    tanhLayer('Name','tanh1')
    scalingLayer('Name','ActorScaling1','Scale',2.5,'Bias',-0.5)];
actorOptions = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',1,'L2RegularizationFactor',1e-4);
actor = rlRepresentation(actorNetwork,observationInfo,actionInfo,...
    'Observation',{'observation'},'Action',{'ActorScaling1'},actorOptions);
%% Create agent
agentOptions = rlDDPGAgentOptions(...
    'SampleTime',Ts,...
    'TargetSmoothFactor',1e-3,...
    'ExperienceBufferLength',1e6,...
    'DiscountFactor',0.99,...
    'MiniBatchSize',64);
agentOptions.NoiseOptions.Variance = 0.6;
agentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent = rlDDPGAgent(actor,critic,agentOptions);
%% Train agent
maxepisodes = 5000;
maxsteps = ceil(Tf/Ts);
trainingOpts = rlTrainingOptions(...
    'MaxEpisodes',maxepisodes,...
    'MaxStepsPerEpisode',maxsteps,...
    'Verbose',false,...
    'Plots','training-progress',...
    'StopTrainingCriteria','EpisodeReward',...
    'StopTrainingValue',221);
doTraining = true;
if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainingOpts);
    % save('agent_new.mat','agent')
else
    % Load pretrained agent for the example.
    load('agent_old.mat','agent')
end
I can't upload the Simulink file because it contains sensitive data.
Any idea what the error could be?
  1 Comment
black_cat on 6 Apr 2021
Hey, how did you fix the error message that says an "error occurred while executing the listener callback for event EpisodeFinished defined for class rl.env.SimulinkEnvWithAgent ..."?


Answers (1)

Emmanouil Tzorakoleftherakis
Hello,
It looks to me like you never implemented the localResetFcn function that you assign to the environment's ResetFcn. The ACC example shows, at the very bottom, how to implement the reset function. I would start with that.
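For reference, here is a minimal sketch of what such a reset function might look like, modeled on the pattern in the ACC example. The variable name 'x0' and its range are hypothetical placeholders; substitute whatever workspace variables your own model reads at the start of an episode.
function in = localResetFcn(in)
    % 'in' is a Simulink.SimulationInput object; setVariable returns a
    % modified copy with the variable set for this simulation only.
    x0 = 40 + randi(60,1,1);      % hypothetical random initial value
    in = setVariable(in,'x0',x0);
end
Also note that the warning "Undefined function 'localResetFcn' for input arguments of type 'Simulink.SimulationInput'" means MATLAB cannot find the function at all. Either save it as localResetFcn.m somewhere on the path, or define it as a local function at the end of your training script (Dades_manel in your trace).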
  2 Comments
张 冠宇 on 15 Nov 2021
May I ask what the reset function can modify: only Constant blocks, or other parts of the model as well?


Release

R2019b
