Lockheed Martin Builds Discrete-Event Models to Predict F-35 Fleet Performance
Build a discrete-event model of the fleet with Simulink and SimEvents, use MATLAB Parallel Server to accelerate thousands of simulations, and interpolate the results with Deep Learning Toolbox
- Simulation setup time reduced from months to hours
- Development effort lessened
- Simulation time cut by months
The Lockheed Martin F-35 Lightning II Sustainment program reduces life-cycle costs and increases the mission readiness of the F-35 fleet by minimizing downtime, supporting pilot training, and ensuring the availability of parts while avoiding unnecessary stockpiling. To achieve these goals, the program depends on accurate predictions of fleet performance, including projections of how long the aircraft will be grounded for service.
Lockheed Martin engineers used Simulink®, SimEvents®, Deep Learning Toolbox™, and MATLAB Parallel Server™ to model fleet performance and make predictions based on tens of thousands of simulations on a 256-worker computing cluster.
“With Simulink and SimEvents we created a model that incorporates data from the entire F-35 program and simulates thousands of aircraft operating every day, each with thousands of parts, at hundreds of locations over a span of many years,” says Justin Beales, project engineer at Lockheed Martin. “Accelerating thousands of Monte Carlo simulations on our cluster and then interpolating the results with Deep Learning Toolbox will save us years of processing time.”
Simulating F-35 fleet performance is extremely challenging due to the complexity of the aircraft and of the global logistics system needed to support it. Lockheed Martin initially sought to generate predictions using existing tools, but those tools added complexity to the problem rather than reducing it.
The Lockheed Martin team wanted to develop a detailed, easily configured model that they could use to rapidly simulate thousands of parameter combinations and scenarios. They needed to apply advanced techniques to generate and analyze the results, including Design of Experiments, machine learning, and other statistical and probabilistic methods.
Lockheed Martin engineers developed a sophisticated Simulink model of the F-35 fleet and simulated the model using the SimEvents discrete-event simulation engine.
They built the core of the model using SimEvents to create entities, and implemented the system logic using Attribute Function blocks with MATLAB® code. The model incorporated part and aircraft performance data, as well as data on aircraft modifications, anomalous maintenance events, part availability, and aircraft activity.
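An Attribute Function block in SimEvents wraps a MATLAB function that reads and updates the attributes an entity carries. The sketch below illustrates the pattern with hypothetical attribute names and failure logic; it is not Lockheed Martin's actual model code.

```matlab
function hoursDown = updateMaintenance(partAge, failureRate)
%#codegen
% Illustrative Attribute Function body: each entity (a part on an
% aircraft) carries its age and a failure rate as attributes. When a
% random failure occurs, assign a repair duration that holds the
% aircraft in the maintenance queue.
if rand() < failureRate * partAge
    hoursDown = 24 + 72*rand();   % unscheduled maintenance event
else
    hoursDown = 0;                % aircraft remains mission-ready
end
end
```

In this pattern the model's system logic stays in plain MATLAB functions, while SimEvents handles entity generation, queuing, and event scheduling.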
They verified the model using test cases and Department of Defense verification, validation, and accreditation guidelines.
The engineers ran Monte Carlo simulations with thousands of trials featuring both random events and parameter variations based on Design of Experiments. To generate results faster, the team used Parallel Computing Toolbox™ and MATLAB Parallel Server to run multiple simulations in parallel on a 256-worker cluster.
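A typical way to set up such a parallel Monte Carlo campaign is an array of `Simulink.SimulationInput` objects handed to `parsim`, which distributes the runs across the parallel pool or cluster. The model name and variables below are placeholders, not the program's actual parameters.

```matlab
% Sketch: parallel Monte Carlo trials with Design of Experiments factors.
numTrials = 5000;
in(numTrials) = Simulink.SimulationInput('fleetModel');   % preallocate
for k = 1:numTrials
    in(k) = Simulink.SimulationInput('fleetModel');
    in(k) = in(k).setVariable('rngSeed', k);              % random events
    in(k) = in(k).setVariable('sparesLevel', doeTable.Spares(k)); % DOE factor
end
out = parsim(in, 'ShowProgress', 'on');  % runs on the pool/cluster workers
```

With MATLAB Parallel Server, the same `parsim` call scales from local desktop cores to the 256-worker cluster without changes to the model.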
Using Deep Learning Toolbox, they trained a neural network on the simulation results, enabling them to interpolate between simulated scenarios without running additional simulations.
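One common form of this surrogate-modeling step is a shallow feedforward network fit to (scenario parameters, performance metric) pairs from the completed runs. The sketch below uses `fitnet` from Deep Learning Toolbox; the variable names and network size are illustrative assumptions.

```matlab
% Sketch: train a surrogate network on simulation results, then use it
% to interpolate metrics for scenarios that were never simulated.
X = designMatrix';              % columns = simulated parameter sets
Y = availability';              % metric computed from each simulation
net = fitnet(10);               % feedforward net, 10 hidden neurons
net = train(net, X, Y);         % fit to the Monte Carlo results
predicted = net(newScenarios'); % estimates without further simulation
```

Each query of the trained network replaces a full discrete-event run, which is the source of the CPU-time savings the team describes.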
During simulations, Simulink recorded and stored all events that occurred. To postprocess this data, the team developed MATLAB scripts to calculate performance metrics, generate annotated MATLAB plots, and create Microsoft® Excel files for use by other analysts.
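A postprocessing script of the kind described might pull logged signals from the simulation output, compute a metric, plot it, and export a spreadsheet. Signal names and the metric below are hypothetical stand-ins for the team's actual outputs.

```matlab
% Sketch: compute a readiness metric from logged simulation data and
% export it for analysts working outside MATLAB.
logs = out(1).logsout;                            % logged simulation data
downtime = logs.getElement('downtime').Values;    % timeseries of downtime
readiness = 1 - sum(downtime.Data)/simHours;      % fraction of hours available
plot(downtime.Time, downtime.Data);
title(sprintf('Fleet readiness: %.1f%%', 100*readiness));
T = table(downtime.Time, downtime.Data, ...
    'VariableNames', {'Time','DowntimeHours'});
writetable(T, 'fleet_metrics.xlsx');              % Excel file for analysts
```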
Lockheed Martin is already using the model to predict fleet performance in support of the F-35 Sustainment program. The team is currently exploring ways to apply the model to other programs.
Simulation setup time reduced from months to hours. “It would have taken months to set up the data inputs to the old system,” says Beales. “In contrast, we can set up and run our Simulink and SimEvents model with a new data set in a day.”
Development effort lessened. “Simulink and SimEvents greatly expanded our fleet performance prediction capabilities while minimizing development effort,” says Beales.
Simulation time cut by months. “By running our simulations in parallel on a cluster instead of on our 12-core desktop computers, we completed them more than 20 times faster,” says Beales. “Plus, the interpolation that we perform with Deep Learning Toolbox greatly reduced the number of simulations we needed to perform, saving additional CPU time.”