Use simultaneous localization and mapping (SLAM) algorithms to build maps of the environment surrounding the ego vehicle from visual or lidar data. Use visual-inertial odometry to estimate the pose (position and orientation) of the vehicle from data gathered by onboard sensors such as inertial measurement units (IMUs).
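As a minimal sketch of the pose-estimation piece, the following fuses accelerometer and gyroscope readings into an orientation estimate with imufilter (Sensor Fusion and Tracking Toolbox); the sensor arrays here are synthetic placeholders, not real logs.

% Fuse synthetic IMU readings into an orientation estimate.
Fs = 100;                               % IMU sample rate (Hz)
N = 200;                                % number of samples
accel = repmat([0 0 9.81],N,1);         % stationary vehicle: gravity along +Z (m/s^2)
gyro = zeros(N,3);                      % no angular motion (rad/s)
fuse = imufilter(SampleRate=Fs);        % accelerometer + gyroscope fusion filter
orientation = fuse(accel,gyro);         % N-by-1 quaternion array
eulerd(orientation(end),"ZYX","frame")  % final orientation as Euler angles (deg)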
Rotations, Orientations, and Quaternions for Automated Driving
Quaternions are four-part hypercomplex numbers that are used to describe three-dimensional rotations and orientations. Learn how to use them for automated driving applications.
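As a quick illustration (the quaternion class ships with several MathWorks toolboxes, including Automated Driving Toolbox), this constructs a 90-degree yaw rotation, applies it to a point, and composes two rotations:

% Build a quaternion from yaw-pitch-roll angles in degrees.
q = quaternion([90 0 0],"eulerd","ZYX","frame");
rotateframe(q,[1 0 0])                  % express a point in the rotated frame
q2 = q*q;                               % quaternion multiplication composes rotations
eulerd(q2,"ZYX","frame")                % two 90-degree yaws give a 180-degree yaw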
Understand the visual simultaneous localization and mapping (vSLAM) workflow.
Monocular Visual Simultaneous Localization and Mapping
Visual simultaneous localization and mapping (vSLAM) is the process of calculating the position and orientation of a camera with respect to its surroundings while simultaneously mapping the environment.
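A minimal sketch of that workflow, assuming Computer Vision Toolbox R2023a or later for the monovslam object; the intrinsics and image folder below are placeholders for your own calibration and data:

% Run monocular vSLAM over a folder of image frames.
intrinsics = cameraIntrinsics([535 539],[320 240],[480 640]); % placeholder calibration
vslam = monovslam(intrinsics);
imds = imageDatastore("frames/");       % hypothetical image folder
while hasdata(imds)
    addFrame(vslam,read(imds));         % track features, create key frames and map points
end
while ~isDone(vslam)                    % let background mapping threads finish
    pause(0.1);
end
xyzPoints = mapPoints(vslam);           % reconstructed 3-D map points
camPoses = poses(vslam);                % estimated key-frame camera poses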
Process 3-D lidar sensor data to progressively build a map, with assistance from inertial measurement unit (IMU) readings.
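A hedged sketch of that idea: register consecutive scans with NDT, seeding each registration with an IMU-derived orientation guess. Here clouds (a cell array of pointCloud objects) and imuRot (3-by-3 rotation matrices) are placeholders for real data:

% Incrementally register scans and merge them into one map cloud.
gridStep = 2;                                        % NDT voxel size (m)
mapCloud = clouds{1};
accumTform = rigidtform3d;                           % identity world pose
for k = 2:numel(clouds)
    initGuess = rigidtform3d(imuRot{k},[0 0 0]);     % IMU orientation as initial guess
    tform = pcregisterndt(clouds{k},clouds{k-1},gridStep, ...
        InitialTransform=initGuess);
    accumTform = rigidtform3d(accumTform.A*tform.A); % chain relative poses
    mapCloud = pcmerge(mapCloud,pctransform(clouds{k},accumTform),0.5);
end
pcshow(mapCloud)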
Build a Map from Lidar Data Using SLAM
Process lidar data to build a map and estimate a vehicle trajectory using simultaneous localization and mapping.
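The drift-correction core of such a pipeline is a pose graph. As a small 2-D sketch (poseGraph and optimizePoseGraph, Navigation Toolbox; the full example works in 3-D), odometry edges chain consecutive poses around a square and one loop-closure edge pulls the ends back together:

% Build and optimize a toy pose graph with one loop closure.
pg = poseGraph;
for k = 1:4
    addRelativePose(pg,[1 0 pi/2]);            % odometry: 1 m forward, 90-degree turn
end
addRelativePose(pg,[0 0 0],[1 0 0 1 0 1],5,1); % loop closure: node 5 matches node 1
pgOpt = optimizePoseGraph(pg);                 % redistributes accumulated drift
show(pgOpt)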
Build Occupancy Map from 3-D Lidar Data Using SLAM
Build a 2-D occupancy map from 3-D lidar data using a simultaneous localization and mapping (SLAM) algorithm.
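The map-insertion step, sketched in 2-D with placeholder data (occupancyMap, lidarScan, and insertRay from Navigation Toolbox); in the example, the ranges and poses come from the SLAM solution instead:

% Insert one synthetic scan into a 2-D occupancy map.
map = occupancyMap(20,20,10);              % 20 m x 20 m grid, 10 cells per meter
angles = linspace(-pi/2,pi/2,181);
ranges = 8*ones(1,181);                    % synthetic returns 8 m away
scan = lidarScan(ranges,angles);
vehiclePose = [10 10 0];                   % [x y theta] in map frame
insertRay(map,vehiclePose,scan,12);        % 12 m maximum lidar range
show(map)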
Understand the point cloud registration and mapping workflow.
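At the core of that workflow is pairwise registration. A self-contained sketch with pcregistericp (Computer Vision Toolbox), using the teapot.ply sample cloud that ships with MATLAB:

% Recover a known rigid motion between two clouds with ICP.
ptFixed = pcread("teapot.ply");
tformTrue = rigidtform3d([0 0 30],[5 5 10]);   % 30-degree yaw plus a translation
ptMoving = pctransform(ptFixed,tformTrue);
tformEst = pcregistericp(ptMoving,ptFixed);    % estimate the moving-to-fixed transform
ptAligned = pctransform(ptMoving,tformEst);
pcshowpair(ptFixed,ptAligned)                  % overlap indicates a good fit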
Build Map and Localize Using Segment Matching (Lidar Toolbox)
This example shows how to build a map with lidar data and localize the position of a vehicle on the map using SegMatch [1] (Lidar Toolbox), a place recognition algorithm based on segment matching.
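A hedged sketch of the segment-description step behind SegMatch, using pcsegdist and extractEigenFeatures from Lidar Toolbox; the input scan is a placeholder, and the full example additionally builds the map and localizes against it with the pcmapsegmatch object:

% Segment a scan and describe each segment for matching.
ptCloud = pcread("scan.pcd");                  % hypothetical lidar scan
labels = pcsegdist(ptCloud,0.5);               % cluster points closer than 0.5 m
[features,segments] = extractEigenFeatures(ptCloud,labels);
% Matching these per-segment eigenvalue features across scans yields
% place-recognition candidates for localization.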