Video length is 56:38

Overview of MathWorks ADAS and Automated Driving Tools for Student Competitions

Overview

This webinar will provide students with an introduction to ADAS and automated driving concepts along with an overview of MathWorks tools that can help students get started with scenario generation, algorithm development, and sensor simulation for Automated Driving applications.

Highlights

The following applications will be discussed:

  • Scene and scenario design
  • Sensor simulation
  • Localization
  • Tracking and Fusion
  • Planning and Controls
  • Deploy, integrate, test

About the Presenter

Akshra Narasimhan Ramakrishnan is the technical lead for the SAE AutoDrive and FSAE US competitions at MathWorks. In addition to these, she also supports other worldwide ADAS and automotive competitions. Akshra received her Master’s degree from The Ohio State University where she was the Connected and Automated Vehicles Team Lead of the EcoCAR Mobility Challenge.

Recorded: 27 Apr 2023

Hey, everyone. My name is Akshra. I'm a student competitions technical lead at MathWorks, and I help support our ADAS competitions. This is going to be a presentation on the ADAS tools that we provide for student competitions.

All right, so for today's agenda, we are going to start with scene and scenario design, talk a little bit about the sensor simulation tools available, talk about algorithm design for perception, fusion, localization, planning, and controls, and then go a little more in detail on some of the localization material we have. Next, we'll talk about deploying, integrating, and testing everything that we saw, and I will be doing two demos: one will be getting started with ROS in Simulink, and the next will be an automated parking valet demo using ROS in Simulink.

OK, so MathWorks provides tools and dedicated software that allow you to design and simulate scenes and scenarios, sensors, and algorithms for detection, localization, tracking and fusion, planning, decision, and controls. So now let's take a look at some of the examples that we ship in these areas.

Let's look into the environments part of this presentation, which is scenes and scenarios. We have multiple ways for you to design virtual driving scenarios, and you might be familiar with most of these. First is the cuboid world, which is a 2D representation of the scenes and scenarios that you would like to create. Next is the 3D representation using Unreal Engine, and of course RoadRunner, which is used to design custom scenes for controls, fusion, planning, perception, et cetera. Cuboid is mostly a scenario authoring tool for lower fidelity scenarios, and then you have Unreal Engine and RoadRunner for higher fidelity scenarios.

And here you can see specific features for all of these. For sensing, we have radar, vision, and lane detections in the cuboid world. The same goes for Unreal, where we also have additional sensors such as fisheye cameras. And then with RoadRunner, you can import HD maps and export to OpenDRIVE or to other third-party simulators. And a little bit about the Driving Scenario Designer app itself: you may have already seen this app, but the idea is that it helps you graphically build up scenarios, as well as export those scenarios into the MATLAB workspace and Simulink.

This is a video of the app in action. As you can see, you can add multiple actors and ego vehicles, give speed and other information for all those actors, and, of course, add your radars, cameras, and LiDARs to those actors in the scene. These are some examples using the cuboid world: you can add guardrails and barriers, and you can even specify reverse motions. Like I mentioned in the previous slide, you can import OpenStreetMap data and export to OpenDRIVE, OpenSCENARIO, and other third-party software.
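To give a flavor of the programmatic side of this, here is a minimal cuboid scenario sketch using the drivingScenario API from the Automated Driving Toolbox; the road geometry, waypoints, and speed are illustrative values, not from the webinar.

    % Minimal cuboid scenario: one two-lane road, one ego vehicle (illustrative values)
    scenario = drivingScenario;
    road(scenario, [0 0; 80 0], 'Lanes', lanespec(2));   % 80 m straight two-lane road
    egoVehicle = vehicle(scenario, 'ClassID', 1);        % ClassID 1 = car
    trajectory(egoVehicle, [2 -2 0; 75 -2 0], 15);       % follow the right lane at 15 m/s
    plot(scenario)
    while advance(scenario)                              % step the scenario until done
        pause(0.01)
    end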

Next is the Unreal Engine part, which is mostly used to simulate in a 3D environment. You can choose from a set of pre-built scenes, which are all listed here: straight road, curved road, parking lot, et cetera. These example scenes are available out of the box in the Automated Driving Toolbox. It is also possible to customize these scenes, and I've listed some steps that you can follow to import or customize scenes.

And you can define virtual vehicles in a scenario. You can control movements of the vehicles. You can give X, Y, and Yaw coordinates. You can also change weather, change information about the roads, et cetera.

And then this is a flow chart of how you would co-simulate with Unreal and Simulink. You send information back and forth about vehicle position and orientation, along with scene information, between Simulink and Unreal. You use Simulink to determine the positions of your objects, control your vehicle, and configure the 3D environment, and then Unreal Engine maintains the object positions and lets you query the actual 3D environment itself.

So you start with configuring scenes in the simulation environment, then place and move your vehicles in the scene, set up sensors on those vehicles, and simulate the sensor outputs based on the environment around the vehicles. Then, of course, you obtain ground truth and evaluate algorithm performance.

And then this is a slide on the block execution order during co-simulation. As you can see, blocks with priority minus 1 go first, then 0, and then 1. There is a video resource there for you, a series on how to use Unreal Engine with Simulink. It is a pretty good series, so please check it out; the link is in the chat.

So you start with the vehicle blocks, and here I have the ground-following option enabled, which means the vehicles you select will follow the ground elevation of the scene. These blocks are used to initialize and send X, Y, and Yaw data to the Simulation 3D Scene Configuration block, which executes second. That block receives and sends vehicle data to the sensor blocks, which receive the vehicle data you can then use to locate and visualize vehicles. This is a good slide to remember, because you will want to know the priority of everything you are executing when you start building a scene using Unreal and Simulink.

OK, and then a few examples of what is new in Unreal Engine, some enhancements. We've made it easier to integrate custom meshes, so if you have specific meshes for your vehicle, you can integrate those in Unreal Engine. You can, of course, import your own vehicle and then work with it. In the second example, you can control vehicle headlights and taillights. And then, like I mentioned, you can control weather and sun position as well.

Next is RoadRunner, one of our newer offerings. You would mainly use this to design 3D scenes for automated driving algorithms. The core of this platform is RoadRunner itself, which is an intuitive tool for designing scenes. You can design scenes from scratch, or you can import data from a variety of formats, including OpenDRIVE and HD maps such as HERE or TomTom, or your own custom format. You can pull these HD maps into RoadRunner, and, like I mentioned, you can also export to the third-party software listed here.

OK, this is an example of the types of workflows that you can do using RoadRunner, MATLAB, and Simulink. You can generate scenarios from recorded data, you can generate scenario variations using the MATLAB programmatic API, and you can automate testing of the scenarios you have created in RoadRunner using Simulink Test.

And then another workflow is the actor design workflow. This is just how you can integrate between different planning and control methods. If you have sensors in Unreal Engine, you can use them in this co-simulation workflow as well. If you have external C/C++ code or Python, you can integrate that with these as well.

This is just an example of how you can easily integrate vehicle dynamics in Simulink, and you can see this example running here. We have path and speed actions, a trajectory follower, vehicle positions, and the actor process, with reference information and controls passing between these and the vehicle dynamics. The idea is that this automatically generates a trajectory follower that integrates RoadRunner Scenario with Simulink.

OK, now that we've talked about environments, scenes, and scenarios, let's talk a little bit about the vehicles, sensors, and dynamics before we talk about the algorithms. First, sensors. One of the main reasons we simulate in Unreal Engine is to see the effect of these sensors as they would behave in the real world. We've been shipping a variety of sensor models since 2017, mostly in the Automated Driving Toolbox, the Radar Toolbox, and the Navigation Toolbox.

So some of these sensors, for example radar, LiDAR, vision, and lane detection, work in both environments. Some, like the fisheye camera, only work in Unreal, and similarly some, like radar tracks and other radar-specific signals, only work in the cuboid world. The sensors I talked about, the radars and the fisheye cameras, are all used to detect objects, but we also have positional sensors, like wheel encoders, GPS, IMUs, and inertial navigation systems. These can be used in both environments, since they are used to detect positions.
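As a rough sketch of how these sensor models are used programmatically, here is a vision detection generator attached to a cuboid-scenario ego vehicle; it assumes the scenario and egoVehicle variables from a setup like the one sketched earlier, and the mounting values are illustrative.

    % Synthetic camera-based object detections in the cuboid world (illustrative values)
    sensor = visionDetectionGenerator('SensorIndex', 1, ...
        'SensorLocation', [3.4 0], ...                   % mounting point on the ego (m)
        'MaxRange', 100);
    targets = targetPoses(egoVehicle);                   % ground-truth poses of other actors
    [detections, numDets, isValidTime] = sensor(targets, scenario.SimulationTime);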

Here are a few sensor-based examples. The first example goes over how you can synthesize monocular camera sensor data, and the second one talks about synthesizing LiDAR data. In this video, we have the Simulink models with the sensors attached to them, the Unreal Engine output at the bottom, the camera output from Unreal on the top, and the segmented outputs from the camera sensors.

This is the video for the LiDAR example that I talked about. You can input various mounting positions for the LiDAR sensor. Again, we have the Unreal output on the left and the LiDAR point cloud output on the right.

And then once you've added sensors to your vehicles, especially when it comes to cameras, you will, of course, need a set of camera data that has already been labeled. So we've made it a lot easier for you to try automatic labeling of this data. Here are a few examples specific to automatic labeling. We have the Ground Truth Labeler app; please check that out.

And the third example here talks about automated ground truth labeling of camera and LiDAR data. All you would do is label the features in one frame, and then the Ground Truth Labeler app takes it from there, identifying those features in the entire video and labeling them for you.

These are the sensor calibration apps, which are very important when you're starting to build an automated driving system. You want your sensors to be calibrated to your specific configuration. So we have the Camera Calibrator app and the LiDAR Camera Calibrator app, which make this work much easier for you. You can use these apps to estimate intrinsics, extrinsics, et cetera, and the LiDAR Camera Calibrator app will help you interactively estimate the transformation between LiDAR and camera.
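For reference, here is a hedged sketch of the programmatic equivalent of the Camera Calibrator app, using Computer Vision Toolbox functions; the image folder name and checkerboard square size are placeholders.

    % Estimate camera intrinsics from checkerboard images (folder name is a placeholder)
    images = imageDatastore('calibrationImages');
    [imagePoints, boardSize] = detectCheckerboardPoints(images.Files);
    squareSize = 25;                                     % checkerboard square size in mm
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);
    I = readimage(images, 1);
    cameraParams = estimateCameraParameters(imagePoints, worldPoints, ...
        'ImageSize', [size(I,1) size(I,2)]);
    showReprojectionErrors(cameraParams)                 % sanity-check the calibration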

And then, talking about logged sensor simulation data: this example goes over how you can visualize data that you've logged from an Unreal Engine sensor simulation. You can log this data, perform offline analysis whenever you want, play it back, look at all of the edge-case scenarios that you might have recorded, and visualize it as well.

As you can see, we have the camera outputs, the LiDAR output, and a cuboid world output with a sensor attached to it. The cursor, if you can see it, is moving between different frames, and we also have a GPS output at the bottom. Since this is all made from logged, recorded data, you're able to move between different frames whenever you want to.

And then next, this is just a slide on how you can communicate with the 3D simulation environment, that is, how to send and receive data between Simulink and Unreal Engine. This specific example is running a double lane change maneuver. You can use the Simulation 3D Message Set block to send data to Unreal Engine; for this example, what we're sending is the traffic light color. And then, of course, you can use the Simulation 3D Message Get block to retrieve data from Unreal Engine; the data we're retrieving is the number of cones that have been hit.

So we're setting the traffic light color. Once it's green, the car starts moving.

We're getting the number of cones hit.

This next example might be especially useful for Formula Student teams: it goes over how to design and train a YOLOv2 network for cone detection. The link will be in the chat.

So we're generating ground truth data, converting this ground truth data into training data, designing the YOLOv2 network, and then, of course, training the network to detect cones. This is just a step-by-step approach on how to do this. We use the Video Labeler app, which helps you label all the objects in a video, and give it the MAT file of the recorded data.

We label all the different types of cones, and then we can export this data to the workspace.

You have the labeled data in a timetable, and then, of course, you have the label definitions, as well. You have all the different positions of the various colors of cones.

This is the YOLOv2 network itself. And then we're training the network.
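For context, the training step in a workflow like this typically comes down to a few calls. This is a hedged sketch, assuming gTruth is the labeled cone data exported from the labeler and lgraph is the YOLOv2 network from the previous step; the training options are illustrative.

    % Train a YOLOv2 cone detector from exported ground truth (illustrative options)
    trainingData = objectDetectorTrainingData(gTruth);   % image files + bounding boxes
    options = trainingOptions('sgdm', ...
        'InitialLearnRate', 1e-3, ...
        'MaxEpochs', 20, ...
        'MiniBatchSize', 16);
    detector = trainYOLOv2ObjectDetector(trainingData, lgraph, options);
    [bboxes, scores, labels] = detect(detector, frame);  % run on a test frame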

As you can see, the network is able to detect yellow and blue cones in the scene.

And then we have an entire video series dedicated to perception and detecting cones. This is the Making Vehicles and Robots See video series; the link is in the chat. There are a few chapters in this video series that go through basic operations on images, image segmentation and analysis, and feature matching and tracking. We also talk a little bit about the basics of point cloud processing using MATLAB live scripts, and then, of course, there are exercises, one of which is shown here, which is detecting cones from a video.

The top image is a segmented image of just the cones, the bottom image shows you how feature matching and feature tracking are done, and the middle image shows the results of one of these exercises. I would highly recommend checking this out, especially if you are a beginner to perception. This series goes into great detail about how to detect objects from a camera video.

Now that we've talked about environments and vehicles, let's talk about algorithms. This slide goes over how you can generate an object-level fused track list from camera and LiDAR. It uses a JPDA tracker; we have many trackers available on the MathWorks website. This is a pretty good example, especially if you have camera and LiDAR data and would like to know how to fuse them to get object-level data of vehicles, as shown in the scene here.
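To give a flavor of what such a tracker looks like in code, here is a minimal JPDA tracker sketch from the Sensor Fusion and Tracking Toolbox; the detections and simTime inputs are assumed to come from your camera and LiDAR processing, and the thresholds are illustrative.

    % Fuse object detections into tracks with a JPDA tracker (illustrative thresholds)
    tracker = trackerJPDA('FilterInitializationFcn', @initcvekf, ...
        'ConfirmationThreshold', [4 5], ...              % confirm after 4 hits out of 5
        'DeletionThreshold', [5 5]);                     % delete after 5 straight misses
    tracks = tracker(detections, simTime);               % detections: objectDetection array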

And then most of you might know what Stateflow is, but if you don't, it is just a graphical way to model state transitions, flowcharts, and supervisory logic. You can do fault management, you can also generate code, and we have a Pattern Wizard app that can generate common flow charts if you just input the pattern you want.

And then this is just a slide on the planning and controls algorithms that we have on the MathWorks website. We cover various levels of automation: automated driving ranges from ADAS features like adaptive cruise control and emergency braking to higher-level functions like lane changes and parking. The parking lot example is what we will see at the end of the session today, using ROS and Simulink.

The first example is just an automated parking example using an MPC, but for a truck. The second is if you're playing around with reinforcement learning; it goes over a few applications that you can build using reinforcement learning. And the third is the dynamic occupancy grid planning and navigation example, which you can find in the Navigation Toolbox; it covers motion planning in urban environments where there's a lot of clutter.

And this is just a slide on the planning algorithms that we have, the benefits of each of those algorithms, and what they are used for. The Navigation Toolbox has a number of path and motion planning algorithms. The link has a better description of these planners, but I just wanted to show you everything we have so that you can make an informed design decision.
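To make one of those planners concrete, here is a hedged Hybrid A* sketch using the Navigation Toolbox; the map size, turning radius, and start and goal poses are illustrative.

    % Plan a drivable path on an occupancy map with Hybrid A* (illustrative values)
    map = binaryOccupancyMap(50, 50, 1);                 % 50 m x 50 m map, 1 cell per meter
    validator = validatorOccupancyMap;                   % SE(2) state validator
    validator.Map = map;
    planner = plannerHybridAStar(validator, 'MinTurningRadius', 4);
    start = [5 5 0];                                     % [x y theta]
    goal  = [45 35 pi/2];
    path  = plan(planner, start, goal);
    show(planner)                                        % visualize the search and the path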

This is just an example that goes over a planning and controls algorithm for a highway lane change maneuver. We have scenes and scenarios, the planning and controls algorithms getting information from those scenes and scenarios, and, of course, vehicle dynamics feeding the ego vehicle information into the planning and controls module. In the output to the right, you can see our ego vehicle actually performing a lane change when it detects the lead vehicle slowing down. And we have the same example using RoadRunner Scenario as well.

And then this is an adaptive MPC example. An adaptive MPC provides a new plant model at each control interval. So here we are defining an area on the road, the red dotted box around the ego vehicle, from some constraints we have applied: the ego must not enter a region where it detects an object. We continue to define these specific constraints at every step, depending on the ego position, and the ego vehicle follows the reference velocity while staying within that constraint to avoid all obstacles.
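For a rough sense of the adaptive part, here is a sketch of the Model Predictive Control Toolbox loop where a fresh plant model is passed to the controller at every step; linearizeEgoModel and stepEgo are hypothetical helpers standing in for your own linearization and plant code.

    % Adaptive MPC loop: the plant model is updated at every control interval
    Ts = 0.1;
    mpcobj = mpc(initialPlant, Ts);          % initialPlant: an initial LTI model of the ego
    xmpc = mpcstate(mpcobj);                 % controller state
    nominal = struct('U', 0, 'Y', 0, 'X', 0, 'DX', 0);
    for k = 1:N
        plant = linearizeEgoModel(x);        % hypothetical: re-linearized LTI plant model
        u = mpcmoveAdaptive(mpcobj, xmpc, plant, nominal, ym, ref);
        [x, ym] = stepEgo(x, u);             % hypothetical: advance the real plant
    end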

And then this is a path planning example using Delaunay triangulation. This example shows you how to plan a path through cones on a racetrack. It is part of the MPC Toolbox, and it is basically analogous to the first-lap portion of the Formula Student Driverless competitions. The specific path planning algorithm we're talking about here is Delaunay triangulation. These figures go over what the algorithm actually does, and the example walks you through how to implement the algorithm for this specific track and, of course, plot the results.
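The core of the Delaunay idea fits in a few lines. This is a hedged sketch, assuming blueCones and yellowCones are known [x y] positions of the left and right boundary cones: triangulate all cones, keep only edges that join a blue cone to a yellow cone, and use their midpoints as rough centerline waypoints.

    % Rough centerline from cone positions via Delaunay triangulation
    cones = [blueCones; yellowCones];                    % [x y] positions, assumed known
    isBlue = [true(size(blueCones,1),1); false(size(yellowCones,1),1)];
    dt = delaunayTriangulation(cones);
    e = edges(dt);                                       % all triangulation edges
    crossing = isBlue(e(:,1)) ~= isBlue(e(:,2));         % edges that span the track
    midpts = (cones(e(crossing,1),:) + cones(e(crossing,2),:)) / 2;
    plot(midpts(:,1), midpts(:,2), 'go')                 % candidate path waypoints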

Since we've talked about planning and controls, let's talk a little bit about detection and localization algorithms. The main focus here is the LiDAR Toolbox, which helps you learn how to design SLAM algorithms, design detection algorithms for lanes and semantic segmentation, and process and use point cloud data.

And these are some SLAM-based algorithms; SLAM stands for simultaneous localization and mapping. The first example goes over developing a SLAM algorithm using Unreal Engine. The second one shows how you can use a stereo camera to develop a SLAM algorithm, and the third uses a LiDAR to develop a SLAM algorithm in a 3D environment.
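For a flavor of the LiDAR case, here is a minimal 2D SLAM sketch using the lidarSLAM object from the Navigation Toolbox; scans is an assumed cell array of lidarScan objects, and the resolution, range, and threshold are illustrative.

    % Build a map and pose graph from 2D lidar scans (illustrative parameters)
    mapResolution = 20;                      % grid cells per meter
    maxLidarRange = 8;                       % meters
    slamAlg = lidarSLAM(mapResolution, maxLidarRange);
    slamAlg.LoopClosureThreshold = 200;      % score needed to accept a loop closure
    for i = 1:numel(scans)
        addScan(slamAlg, scans{i});          % incrementally register each scan
    end
    show(slamAlg)                            % plot the optimized poses and map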

Now that we've talked about environments, vehicles, and algorithms, let's talk a little bit about how you can deploy, integrate, and test all of the algorithms you've built. We have some examples that show you how to generate code for lane detection, controls, and planning. We also provide examples on software-in-the-loop simulation: how you can change a model-in-the-loop simulation to a software-in-the-loop simulation and get metrics for those simulations, like code coverage and execution time, which might help you make better-informed design decisions. You can figure out which part of your code is slowing things down, pinpoint that area, and make changes to it, and you can figure out where you can make your code faster so that you can have faster iterations of the same code.

This is just a slide on the CAN resources that we have using the Vehicle Network Toolbox. The link in the chat contains a webinar that goes over CAN basics, talks about the Vehicle Network Toolbox, shows you how to read and process CAN data in MATLAB, and shows how you can connect to physical networks in MATLAB and Simulink.
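For a flavor of what that webinar covers, here is a hedged sketch of reading live CAN traffic with the Vehicle Network Toolbox; the vendor and device names depend entirely on your hardware.

    % Read live CAN messages (vendor/device names depend on your hardware)
    ch = canChannel('Vector', 'CANcaseXL 1', 1);   % vendor, device, channel number
    start(ch);
    pause(5);                                      % let some traffic accumulate
    msgs = receive(ch, Inf);                       % pull everything received so far
    stop(ch);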

For the next few minutes, I will be going over the two demos that we have, which are specific to ROS. For ROS, we'll look at the examples I've listed here. We have ROS data replay; you can send and receive messages live, which I'll show you how to do; and you can generate standalone ROS nodes from Simulink, for both ROS 1 and ROS 2. We also have a number of ROS-related automotive application examples, one of which is the parking valet using ROS. I will show you the models that we currently have for that and how you can use ROS in an actual automated driving system.

And then, a little bit before we go to the demo section: we have a few resources in the Racing Lounge, where we have all of our emails. If you have any questions, please feel free to contact us through email, and we will respond as soon as we can. We also have the Racing Lounge Facebook group, where we post every Tuesday and Thursday on upcoming webinars and events or other relevant examples for student competitions.

And then we have the Student Tutorials and Videos page, which has tutorials and videos from Formula Student teams and other competitions, as well as tutorials and videos for specific application areas. This is where you would find the Making Vehicles and Robots See series that I talked about, and a few other video series as well. We have a Physical Modeling and Code Generation video series, and a Building a Race Car, or Race Car Development, video series, which might be useful.

And then there is the software offer page. If you would like to request software for your team, please fill out the software request form at that link and we will get back to you shortly. Then we have the Student Lounge blog; some of you might have already seen it, but that is where we frequently post technical blogs. We've done interviews with Formula Student teams and other student competition teams, we have Where Are They Now posts that check in on former student competition team members and how they're currently using MATLAB and Simulink in their jobs, and we also post a lot of technical blogs. The Delaunay triangulation example that I talked about is covered in detail in a blog post in the Student Lounge.

And that is it for the presentation. So let's go into the live demo session of the day.

Please feel free to follow along; I will go over this slowly so that you can, if you would like. But if not, you can always look at the recording later. These examples are all available on the MathWorks website, so you can go through them at your leisure later.

The first example is just a ROS and Simulink example. Again, if you have any questions about the presentation or the demo, please put them in the Q&A section of the webinar and I will get to them shortly.

Before we begin with this example, there are a couple of things I would like to mention. The first is that, at least for this example, if you're using ROS in 2023a, or from 2022b onward, you need Python 3.8 or 3.9, so make sure that you download one of those specific versions of Python. You can use pyenv to check what version you have; I have 3.8, which is what is linked, and you want to make sure that the version linked in your executable and library is also 3.8 or 3.9.
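For reference, here is a quick way to check and set this from the MATLAB command line; the executable path below is just an illustration for a Windows install.

    pe = pyenv                                     % shows Version, Executable, Library
    pyenv('Version', 'C:\Python38\python.exe');    % takes effect in a fresh MATLAB session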

And a good way to ensure that the ROS Toolbox in MATLAB is able to see which version of Python you have is to go to the Preferences panel in the Environment section, go to ROS Toolbox, and open the ROS Toolbox Preferences. You're going to see the ROS system requirements, but if you browse, make sure to link the version-specific Python executable. Then, if you click Recreate Python Environment, it will link that specific Python version to your MATLAB and ROS versions, and you should be good to go for anything else ROS-related that you're doing in this MATLAB and Simulink version.

And then let's actually go into the ROS part of this topic. The first thing to remember is that every ROS network has a ROS master, and what this master does is coordinate all parts of the ROS network. For this example, we are going to use MATLAB to create a ROS master on our local system, and Simulink will automatically detect this and use the local ROS master.

So, to do that, you start with rosinit, which launches a ROS core. This screen will pop up; make sure to remember what your ROS IP is, because we will need it later. We now have a global node: if you look at rosnode list, we have the MATLAB global node that was just created above.

So what we are going to do is create a publisher and subscriber in Simulink, and then just send some X, Y information, and see if we can view that. Let's open Simulink. Blank model.

Let's start with creating a publisher.

Search for 'publish', and make sure you select the blocks from the ROS Toolbox; those are the last two here. We're just going to use ROS, not ROS 2, today.

And to make sure that your Simulink model is able to connect to ROS, click Configure ROS Network. Here, you want to make sure that the hostname or IP address is the same as the one from before, which it currently is not, so I'll set it to 172.20.212.154.

Then you want to click Test and make sure that you get the message that says "Successfully reached ROS master at". The first time you open this screen, it's always going to have a default hostname or IP address, so make sure that you change it; otherwise, Simulink will not be able to connect to the MATLAB ROS master.

Once you do that, the next thing to do is select the topic source. If you already have something that is sending messages, you can pick from that, but I'm just going to specify my own and call it location. The ROS topic is where ROS messages will be published, so this location topic will carry all of the ROS messages that we publish today.

So when you add a subscriber, you want to subscribe to this specific topic if you want to see these messages. You can also have multiple topics on the same network; you just need to make sure that you are subscribing to whichever topic you want access to at that time.

Now that we have a publisher, we need to create the ROS message itself, and that is the Blank Message block. What this will do is create a point message. This is just X, Y, Z, so I don't have to change anything else, but if you want to create some other message type, like inertia messages, acceleration messages, or waypoints, you can select those here. I'm just doing a geometry point message. Select that there.

We want to actually create the signal itself, which is just going to be two sine waves, so let's add a Sine Wave block from the Sources library. I'm going to make a circle, so I'll change the first one's phase to minus pi divided by 2, and the second one can stay the same.

Then, in the Bus Assignment block, you want to give the actual blank message itself as the bus. If you go there, since our geometry point messages are of the form X, Y, Z, that is what shows up in the left part of the Bus Assignment block. What we want is just the X and Y, so you can select those, and you will have two input ports here. I'm going to feed in the two sine waves with the different phases, and then connect back to the publisher. Let me save that. I'm going to make the stop time infinite so that we can see it continuously transmitting messages.

So if we go to MATLAB now and I run rosnode list, which lists all of the ROS nodes, we have "untitled"; that one is the Simulink model we're running. If we look at rostopic list, which lists all of the ROS topics, we have location, and then rosout and tf, which are just default topics that come with every ROS network. We just want to make sure that the topic we are publishing messages to is actually being published, and it is: location.

Now that we've published messages, let's see how to subscribe. It is the same format, but just vice versa. Note that once you stop the model itself, the nodes and publishers are automatically deleted. So if I run rostopic list now, I don't have location anymore, because nothing is publishing to it. You don't have to manually go and delete everything; Simulink will do it all for you.

Next, add a subscriber, which is just the Subscribe block, and make sure that the topic you are subscribing to is the one actually being published on, so location. Then, like the Bus Assignment block we had before, you need a Bus Selector so that we can select the information we want to see from this topic. If you give it the message, it will show you that it has X, Y, and Z; what we want is just X and Y.

And then I am just going to use simple Display blocks to display this so that we can see the actual information itself, and an XY Graph so that you can see the circle. Connect one to X and the other to Y.

And then there is this IsNew output. I will come back to it a little later, but for now we are just going to terminate it. Let me set the stop time to 10 seconds, and we want to make sure that we change the solver settings. To do that, let's go to Modeling, Model Settings, Solver, and I'm going to make it fixed-step discrete.

And then the step size is 0.01, just to make it easier to see. Now let's run.

So if you look at the display blocks, you see the information changing. Let's make sure everything is good.

Yep, you can see it change.

So you see a circle here as it comes around again, a circle actually forming, and our starting point is (0,0), so we have a line from (0,0) there.

So I told you I would talk about what this IsNew output is. Let's say you would like to react only when you receive new messages; this IsNew block output ensures that the rest of your system will react only when you get new messages.

What this means is, say you have the same data being transmitted for 5 seconds, but then new data comes in at the sixth second. What this would do is take one sample from that 5-second interval, since it's all the same data, and then wait until the new data comes in at the sixth second to take the second sample.

And then I'll show you the difference this IsNew output makes in the plot. If we want to include it, we need what is called an enabled subsystem, which means it will trigger only when a certain condition is enabled, and the condition you want enabled here is that the data is new. So put whatever logic you want in your subsystem, click the three dots here, and then Create Enabled Subsystem.

What this does is add the notch you see on the top; it just means that is the trigger for the subsystem to be executed, and you want to connect the IsNew output to it. Let's see what difference this makes to our circle. If you look closely at the outputs, you will only see new information in there.

And then visually, you now see that there is no (0,0) line, because there were multiple (0,0) samples being transmitted, and since that information wasn't new, we're just not looking at it. So you have a nice and clean circle without a (0,0) line in it. That is just an overview of how to use ROS, Simulink, and MATLAB together. And you can do the same process in MATLAB as well: instead of blocks, you would just have MATLAB code for the publisher, the blank message, the subscriber, and all the other blocks that we used here.
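To make that concrete, here is a minimal MATLAB sketch of the same publish/subscribe round trip with ROS Toolbox functions; the /location topic and the two phase-shifted sine waves match the Simulink demo above.

    % MATLAB equivalent of the Simulink publisher/subscriber demo
    rosinit                                        % start a local ROS master and global node
    pub = rospublisher('/location', 'geometry_msgs/Point');
    sub = rossubscriber('/location');
    msg = rosmessage(pub);
    for t = 0:0.01:2*pi
        msg.X = sin(t - pi/2);                     % same two phase-shifted sine waves
        msg.Y = sin(t);
        send(pub, msg);
    end
    latest = receive(sub, 10)                      % wait up to 10 s for a message
    rosshutdown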

And then let's go to our second demo, which is the automated parking valet. This will be pretty quick, because the idea is just to show you what a ROS-enabled automated driving system actually looks like in the real world.

We're using all of the planning and perception blocks from the Automated Parking Valet example in Simulink, so if you just Google "automated parking valet", you should be able to find the same files I'm showing you here. I will just go into detail on some of these things.

So when you are looking to work on an autonomous vehicle application, this is mostly the workflow that you will follow, assuming it has all of the planning, controls, localization, and perception aspects. The vehicle blocks will usually contain all of the vehicle dynamics information about the specific ego vehicle you're using, perception and localization give you scene and scenario information, and planning and controls is the actual algorithm that controls the ego vehicle based on the information you're getting from the scene and scenario.

This example, though, is only going to focus on the planning, controls, and vehicle aspects, because we already have pre-processed localization data that we're just going to feed into it. But the main thing that I want you to focus on here is this specific diagram.

So we have a behavioral planner ROS node, a path planner ROS node, a controller ROS node, and a vehicle simulation ROS node. If you look at the information being transmitted, you have all of these ROS topics being published and received; there are multiple ROS topics here, and the vehicle simulation node sends information back to the behavioral planner node, which closes the loop on this entire process.

We have our behavioral planner node, which basically determines the next waypoint the vehicle should go to, the speed of the vehicle, and a few other pieces of information needed for the planner. The planner and controller nodes actually calculate the path and the directions the vehicle needs to move in to satisfy certain conditions. Then we have the vehicle node, which gives you the current vehicle position, velocity, and steering.

And then, for the actual goal of this example, let me run it here so that you understand what I'm talking about. First, again, you want to start with rosinit. Then just this line: it loads in the pre-defined localization data file that I mentioned before. The master is already initialized because we initialized it in the previous example.

Let's wait for it to open the Simulink model.

So these are all the nodes we talked about, in the actual file itself.

So if I go into the nodes: we have the vehicle sim node, which is just taking in acceleration, deceleration, velocity, and steering vehicle commands, and giving out position, velocity, and steering. Then we have the behavioral planner node. If you remember this notch, it is an enabled subsystem, which we just built in the previous example. So it's taking in position and velocity in the trigger-enabled subsystem, along with the vehicle direction.

Then we have the path planner node. It's taking in the next goal the vehicle should end up at, plus some planner configuration, and it's giving out velocity profiles and path outputs. Then we have the controller node, which is taking in directions and the current position of the vehicle, and giving out steering angle, velocity, and acceleration and deceleration commands. And if you run this example, what you're going to see is a scope showing reference velocity versus current vehicle velocity, and the actual steering angle of the vehicle itself.

So these are all of the nodes in here. These are all referenced models, which is why they're opening in a separate window, but if you look at the publisher and subscriber blocks here, they look pretty much like what we had in the previous example. We have Publish blocks here; again, each is publishing to a specific topic.

And if you look at all of these topic names, like reachgoal and Pose2D, they are all the same. This reachgoal topic is the same one we saw in the images in the previous section. We have a subscriber block, whatever algorithm you're running, and then a publisher block. If you open this one, it's the same structure: a subscriber block, your actual algorithm itself, and then your publisher block. And the idea of reachgoal is just that the blocks know when the vehicle has reached its goal, which is to park in a parking spot.

And then once that is done, we're just going to stop the simulation. But this is where ROS comes in handy: when you're publishing to all of these topics, all you need is a subscriber that subscribes to whatever topic you want, which is reachgoal here, and then does something with that information.

Again, you have the controller model: same thing, a subscriber block, the controller, which is just the algorithm block, and then a publish block. Again, the same reachgoal. And we have the vehicle model, which is subscriber, vehicle model, and publish. By now you will have gathered the general pattern of where you include ROS publisher and subscriber blocks when you have multiple algorithms: you make a separate referenced model for each ROS node that you want. Ideally, you want one node for every algorithm you're running, with publisher and subscriber blocks sending and receiving information for that specific algorithm, and then you can subscribe to any topic and do whatever you want with that information. But that is the general idea of how you would use ROS in an automotive application example.

Let's see if it's running.

It might take a little bit.

Let's actually shut down and restart the node. What you would do if you want to shut down a ROS node that you have already started or initialized is rosshutdown; that just shuts down the node we had. Then we can go again and see if it runs.

Initializing a new ROS node.

So we can see that the blue part is the reference path, the lines are the parking spots, and the black cars are the cars already in the parking spots. If you select a spot, the idea is that the planning and controls algorithms will determine the best way there, and once the goal is reached, the simulation will stop, which is what the reachgoal topic does.

Hopefully the presentation was a little helpful. The recording of the presentation will be posted in a few days, and if you have any questions, please feel free to put them in the chat, or email us later after you've watched the recording, or, again, visit our Facebook page, and we should have links posted for you there as well.

If there are no other questions, I think that should be it. Please check out the recording once it's available. Again, check out all the resources for the Formula Student teams that I mentioned throughout the presentation. Hopefully you have a great competition, and I will see you all in the next ADAS webinar. Thank you, everyone, for attending, and I hope you have a great rest of the day.