Detection and Tracking

Camera sensor configuration, visual perception, lidar processing, tracking and sensor fusion

Automated Driving Toolbox™ perception algorithms use camera images and lidar scans to detect and track objects of interest and to locate them in a driving scenario. These algorithms are well suited to ADAS and autonomous driving applications, such as automatic braking and steering.

Categories

  • Camera Sensor Configuration
    Monocular camera sensor calibration, image-to-vehicle coordinate system transforms, bird’s-eye-view image transforms (see the camera configuration sketch after this list)
  • Visual Perception
    Lane boundary, pedestrian, vehicle, and other object detections using machine learning and deep learning (see the vehicle detection sketch after this list)
  • Lidar Processing
    Velodyne® file import, segmentation, downsampling, transformations, visualization, 3-D point cloud registration, and lane detection in lidar data (see the lidar clustering sketch after this list)
  • Tracking and Sensor Fusion
    Object tracking and multisensor fusion, bird’s-eye plot of detections and object tracks (see the tracking sketch after this list)
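
The following sketch illustrates a minimal camera sensor configuration of the kind covered by the Camera Sensor Configuration category. The intrinsic parameters, mounting height, image point, and view limits are placeholder values chosen for illustration, not calibration results.

    % Approximate camera intrinsics (placeholder values, not a real calibration).
    focalLength    = [800, 800];       % [fx, fy] in pixels
    principalPoint = [640, 360];       % [cx, cy] in pixels
    imageSize      = [720, 1280];      % [rows, columns]
    intrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);

    % Configure a monocular camera mounted 1.5 m above the road surface.
    sensor = monoCamera(intrinsics, 1.5);

    % Map an image point (pixels) to vehicle coordinates (meters on the road).
    vehiclePoint = imageToVehicle(sensor, [600, 400]);

    % Define a bird's-eye-view transform covering 3-30 m ahead and +/-6 m laterally.
    outView  = [3, 30, -6, 6];                     % [xmin, xmax, ymin, ymax] in meters
    birdsEye = birdsEyeView(sensor, outView, [300, NaN]);
    % birdsEyeImage = transformImage(birdsEye, I); % I is a camera frame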
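
As one example from the Visual Perception category, a pretrained detector can be applied to a single camera frame. The video file name below is a placeholder, and the score threshold is arbitrary.

    % Load a pretrained ACF vehicle detector shipped with the toolbox.
    detector = vehicleDetectorACF();

    % Read one frame from a recorded driving video (placeholder file name).
    reader = VideoReader('drivingVideo.mp4');
    I = readFrame(reader);

    % Detect vehicles and keep only the higher-scoring detections.
    [bboxes, scores] = detect(detector, I);
    keep = scores > 20;                 % arbitrary threshold for illustration
    annotated = insertObjectAnnotation(I, 'rectangle', bboxes(keep, :), scores(keep));
    imshow(annotated)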
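
A Lidar Processing workflow often starts by importing Velodyne data, downsampling, and clustering points, along the lines of the sketch below. The PCAP file name, device model, grid step, and cluster distance are assumptions made for illustration.

    % Read one frame from a Velodyne PCAP recording (placeholder file name and model).
    veloReader = velodyneFileReader('lidarData.pcap', 'HDL32E');
    ptCloud = readFrame(veloReader, 1);

    % Downsample to reduce the point count before segmentation.
    ptCloudDown = pcdownsample(ptCloud, 'gridAverage', 0.2);   % 0.2 m grid

    % Cluster the remaining points by Euclidean distance.
    [labels, numClusters] = pcsegdist(ptCloudDown, 0.5);       % 0.5 m minimum separation

    % Color the points by cluster label.
    pcshow(ptCloudDown.Location, labels)
    title(sprintf('%d clusters', numClusters))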
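
Finally, the Tracking and Sensor Fusion category combines detections over time. The sketch below feeds synthetic position measurements to a multi-object tracker and shows the latest detections on a bird's-eye plot; the measurement values, time steps, and plot limits are made up for illustration.

    % Configure a multi-object tracker; initcvkf initializes a constant-velocity
    % Kalman filter from each new detection.
    tracker = multiObjectTracker('FilterInitializationFcn', @initcvkf);

    % Feed two synthetic position measurements (meters, vehicle coordinates) over
    % two time steps; real detections would come from camera, radar, or lidar.
    for t = [0, 0.1]
        detections = {objectDetection(t, [20 + 10*t; -3]); ...
                      objectDetection(t, [35 + 12*t;  2])};
        confirmedTracks = updateTracks(tracker, detections, t);
    end

    % Show the latest detections on a bird's-eye plot of the area ahead of the ego vehicle.
    bep = birdsEyePlot('XLim', [0 60], 'YLim', [-20 20]);
    detPlotter = detectionPlotter(bep, 'DisplayName', 'detections');
    plotDetection(detPlotter, [21, -3; 36.2, 2]);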