Building a Scalable Digital Engineering Framework for Autonomous Ground Vehicle Validation
Automating Scenario Generation for Reliable Autonomous Vehicle Performance
“Using MATLAB tools, we automated 128 real-time test scenarios with full traceability, reducing execution time roughly sixfold and cutting validation time from days to hours.”
Key Outcomes
- Automated execution of 128 real-time test scenarios, cutting validation time from one to two days to roughly 7 hours
- Achieved full traceability across requirements, architecture, and test results
- Developed a scalable framework adaptable to new vehicle platforms and operational design domains
Every year, natural disasters such as hurricanes and earthquakes make it difficult to reach remote areas to deliver supplies or assess damage. Autonomous off-road vehicles offer one solution to this problem, but these complex cyber-physical systems pose significant engineering challenges: they must handle rough terrain and harsh environmental conditions, and designers must choose among a wide variety of algorithmic options for perception, planning, and control. As a result, development of these vehicles often relies on inconsistent, ad hoc testing methods.
To address this, researchers from the Automation, Robotics and Mechatronics Laboratory (ARMLab) at the Clemson University International Center for Automotive Research (CU-ICAR), in collaboration with the Virtual Prototyping of Autonomy-Enabled Ground Systems (VIPR-GS) Research Center and the U.S. Army DEVCOM Ground Vehicle Systems Center (GVSC), developed a modular digital engineering framework to verify and validate autonomous ground vehicles in off-road environments.
The team integrated digital twin simulations in the AutoDRIVE Ecosystem with model-based systems engineering and Model-Based Design workflows. System Composer™ was used to specify and analyze the system architecture with bidirectional traceability across requirements, design, and testing. A custom WebSocket-based API interfaced AutoDRIVE with MATLAB® and Simulink®, enabling real-time data exchange and automated scenario generation.
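The sketch below is a minimal illustration of what the MATLAB side of such a simulator bridge could look like. Because the team's interface was a custom WebSocket-based API (and base MATLAB does not ship a WebSocket client), the sketch substitutes a plain TCP connection carrying newline-delimited JSON to show the same request/response pattern; the host, port, commands, and message fields are assumptions for illustration, not the framework's actual protocol.

```matlab
% Minimal sketch of a MATLAB-side bridge for exchanging simulation data.
% The actual framework used a custom WebSocket-based API; this illustration
% uses tcpclient with newline-delimited JSON. The endpoint and message
% fields below are hypothetical.

sim = tcpclient("localhost", 4567, "Timeout", 5);   % hypothetical endpoint
configureTerminator(sim, "LF");                     % newline-delimited messages

% Send a scenario request (terrain and weather values are placeholders).
request = struct('command', 'load_scenario', ...
                 'terrain', 'rocky_trail', ...
                 'weather', 'fog');
writeline(sim, jsonencode(request));

% Read back one state message and decode it for use in MATLAB/Simulink.
stateJson = readline(sim);
state = jsondecode(stateJson);   % struct with fields defined by the API
disp(state);

clear sim   % close the connection
```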
Deep learning models for object detection were coupled with planning and control systems to enable perception-enhanced control. Variant Manager and Test Manager were used to automate 128 unique test scenarios, each simulating a different combination of environmental conditions and system configuration.
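As a rough illustration of this kind of batch automation, the sketch below selects one system configuration through variant control variables in the base workspace and then runs a Test Manager test file programmatically. The file name, variable names, and report path are hypothetical; the team's framework swept 128 such combinations.

```matlab
% Minimal sketch of batch scenario execution with Simulink Test Manager.
% Variable names, file names, and the report path are hypothetical.

% Select a system configuration via variant control variables
% (the standard Variant Manager pattern); the framework repeated this
% for each scenario/configuration combination.
assignin("base", "PERCEPTION_VARIANT", 2);   % e.g., deep learning detector
assignin("base", "PLANNER_VARIANT", 1);      % e.g., baseline planner

% Load a test file authored in Test Manager and run its test cases.
sltest.testmanager.load("agv_validation_suite.mldatx");
results = sltest.testmanager.run;

% Generate a results report for traceability back to requirements.
sltest.testmanager.report(results, "agv_validation_report.pdf");
```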
The result was a scalable, extensible framework capable of executing comprehensive validation in roughly 7 hours, a 70% to 85% reduction from the previous one to two days, all triggered with a single click. The team is now exploring hardware-in-the-loop testing and high-performance computing integration to further expand the framework's capabilities.
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. OPSEC10147.