MATLAB Answers

Is the error stated below a result of using a different version of MATLAB than the one the code in question was created with?

Sammy Rossberg on 9 Jul 2020
Answered: Yahya Madhi on 27 Sep 2020 at 13:44
To preface this question: I tried to follow the steps in the video below and use an RL algorithm to simulate a walking robot in MATLAB R2020a.
After downloading the resources linked in the video's description ( https://www.mathworks.com/matlabcentral/fileexchange/64227-matlab-and-simulink-robotics-arena-walking-robot?s_eid=PSM_15028 ), I tried to run the "createDDPGNetworks" file as he does at 7:07 in his video, and I got the error:
Unrecognized function or variable 'numObs'.
Error in createDDPGNetworks (line 12)
imageInputLayer([numObs 1 1],'Normalization','none','Name', 'observation')
While others have used his model with plenty of success, I and a few others have gotten this same error. I was wondering whether this is a result of his using MATLAB R2019a while I am using MATLAB R2020a.

Answers (2)

Cam Salzberger on 13 Jul 2020
Hello Sammy,
If you look at the workspace before Sebastian runs the script, it already has many variables defined. The createDDPGNetworks script makes use of some of those variables when setting up its neural networks. If you check out createWalkingAgent2D (or 3D; I'm not sure which he was using), you can see that numObs is defined there.
-Cam
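In other words, createDDPGNetworks is a script, not a function, so it reads numObs (and similar variables) straight from the base workspace. A minimal sketch of the idea below uses placeholder values; the real values come from createWalkingAgent2D in the File Exchange submission.

```matlab
% Sketch only: define the workspace variables createDDPGNetworks expects
% before running it, just as createWalkingAgent2D does.
% The numbers here are placeholders, not the values from the actual example.
numObs = 31;   % number of observation signals (placeholder)
numAct = 6;    % number of action signals (placeholder)

% Now the script can resolve numObs/numAct when it builds the layers, e.g.
% imageInputLayer([numObs 1 1],'Normalization','none','Name','observation')
createDDPGNetworks;
```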

  2 Comments

Sammy Rossberg on 14 Jul 2020
Hello Cam,
Since I asked this question, I have defined a few variables and solved a few errors, including the one above.
Unfortunately, now when I try running either createWalkingAgent2D or 3D, I get errors referring back to createDDPGNetworks. The error I get can be seen below:
Error using rlRepresentation (line 70)
rlRepresentation will be removed in a future release. Unable to automatically convert rlRepresentation to new representation
object. Use the new representation objects rlValueRepresentation, rlQValueRepresentation, rlDeterministicActorRepresentation, or
rlStochasticActorRepresentation instead.
Error in createDDPGNetworks (line 51)
critic = rlRepresentation(criticNetwork,criticOptions, ...
Error in createWalkingAgent2D (line 31)
createDDPGNetworks;
I went through the MATLAB page that explains these representations, and from what I understood I should be able to replace “rlRepresentation” with something like “rlValueRepresentation” while still leaving “rlRepresentationOptions” where it appears. However, when I do that, I still get errors, specifically:
Error using rlValueRepresentation (line 43)
Too many input arguments.
Error in createDDPGNetworks (line 51)
critic = rlValueRepresentation(criticNetwork,criticOptions, ...
Hopefully this can be easily solved and there aren’t more errors to follow. Thank you for your help so far.
- Sammy
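The "Too many input arguments" error is consistent with rlValueRepresentation being the wrong replacement here: a DDPG critic scores (observation, action) pairs, so the Q-value form is the likely substitute. A hedged sketch, assuming observationInfo and actionInfo already exist in the workspace (the walking-robot scripts create them) and that the network's input layers are named 'observation' and 'action':

```matlab
% Sketch only -- a DDPG critic takes both observations and actions,
% so the new-style object is rlQValueRepresentation, not
% rlValueRepresentation (which accepts only observation inputs).
critic = rlQValueRepresentation(criticNetwork, observationInfo, actionInfo, ...
    'Observation', {'observation'}, 'Action', {'action'}, criticOptions);
```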



Yahya Madhi on 27 Sep 2020 at 13:44
Hello Sammy
Not sure if you have resolved the issue, but if you have not, follow these instructions:
  1. Run the script startupWalkingRobot.m
  2. Open the Simulink model named walkingRobotRL2D.slx
  3. Run the script robotParametersRL.m
  4. In the script createDDPGNetworks.m edit line 53 to critic = rlQValueRepresentation(criticNetwork,observationInfo,actionInfo,'Observation',{'observation'},'Action',{'action'},criticOptions);
  5. Also in the script named createDDPGNetworks.m, edit line 85 to actor = rlDeterministicActorRepresentation(actorNetwork,observationInfo,actionInfo,'Observation',{'observation'},'Action',{'ActorTanh1'},actorOptions);
  6. Thereafter, open the script titled createWalkingAgent2D.m and select the appropriate Speedup Options for your system (lines 6-8). Once changed, run this script.
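The edits in steps 4 and 5 can be written out as follows (a sketch using the identifiers from the steps above; observationInfo and actionInfo are assumed to have been created by the earlier scripts):

```matlab
% createDDPGNetworks.m, line 53 -- the DDPG critic maps an
% (observation, action) pair to a scalar Q-value:
critic = rlQValueRepresentation(criticNetwork, observationInfo, actionInfo, ...
    'Observation', {'observation'}, 'Action', {'action'}, criticOptions);

% createDDPGNetworks.m, line 85 -- the deterministic actor maps an
% observation to an action; 'ActorTanh1' is the actor network's output layer:
actor = rlDeterministicActorRepresentation(actorNetwork, observationInfo, actionInfo, ...
    'Observation', {'observation'}, 'Action', {'ActorTanh1'}, actorOptions);
```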
Hope this helps.
