Code Generation for Deep Learning Simulink Model That Performs Lane and Vehicle Detection
This example shows how to generate C++ code from a Simulink® model that performs lane and vehicle detection by using convolutional neural networks (CNNs). The example takes the frames of a traffic video as an input, outputs two lane boundaries that correspond to the left and right lanes of the ego vehicle, and detects vehicles in the frame. This example uses a pretrained lane detection network and a pretrained vehicle detection network from the Object Detection Using YOLO v2 Deep Learning example of the Computer Vision Toolbox™. For more information, see Object Detection Using YOLO v2 Deep Learning (Computer Vision Toolbox).
This example illustrates the following concepts:
Model the lane detection application in Simulink. First, preprocess the traffic video frames by resizing them to 227-by-227-by-3 and multiplying by a constant factor of 255. Then, process the frames by using the pretrained network loaded in the Predict block from Deep Learning Toolbox™. Finally, if the left and right lane boundaries are detected, obtain the parabolic coefficients that model the trajectories of the lane boundaries.
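The preprocessing step above can be sketched outside Simulink as follows. This is an illustrative Python/NumPy equivalent (nearest-neighbor resize on a hypothetical frame), not the Simulink implementation, which uses image-processing blocks:

```python
import numpy as np

def preprocess_frame(frame, out_size=(227, 227), scale=255.0):
    """Resize a frame to out_size with nearest-neighbor sampling,
    then multiply by a constant gain, as the model does."""
    h, w = frame.shape[:2]
    rows = np.arange(out_size[0]) * h // out_size[0]
    cols = np.arange(out_size[1]) * w // out_size[1]
    resized = frame[rows[:, None], cols, :]    # nearest-neighbor resize
    return resized.astype(np.float32) * scale  # constant factor of 255

# A dummy 480x640 RGB frame with values in [0, 1]
frame = np.random.rand(480, 640, 3)
out = preprocess_frame(frame)
print(out.shape)  # (227, 227, 3)
```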
Model the vehicle detection application in Simulink by processing the traffic video using a pretrained YOLO v2 detector. This network detects vehicles in the video and outputs the coordinates of the bounding boxes for these vehicles along with their confidence scores.
Configure the model for code generation.
Prerequisites
Intel Math Kernel Library for Deep Neural Networks (MKL-DNN). Refer to MKLDNN CPU Support for the list of processors that support the MKL-DNN library.
Deep Learning Toolbox™ for using the DAGNetwork object.
Computer Vision Toolbox™ for video I/O operations.
Algorithmic Workflow
The block diagram for the algorithmic workflow of the Simulink model follows.
Get Pretrained Lane and Vehicle Detection Networks
This example uses the trainedLaneNet and yolov2ResNet50VehicleExample MAT files containing the pretrained networks. The files are approximately 143 MB and 98 MB in size, respectively. Download the files.
lanenetFile = matlab.internal.examples.downloadSupportFile('gpucoder/cnn_models/lane_detection','trainedLaneNet.mat');
vehiclenetFile = matlab.internal.examples.downloadSupportFile('vision/data','yolov2ResNet50VehicleExample.mat');
Download Test Traffic Video
To test the model, the example uses the Caltech lanes data set. The file is approximately 16 MB in size. Download this file.
mediaFile = matlab.internal.examples.downloadSupportFile('gpucoder/media','caltech_washington1.avi');
Lane and Vehicle Detection Simulink Model
The following diagram shows the Simulink model for performing lane and vehicle detection on the traffic video. When the model runs, the Video Viewer block displays the traffic video with lane and vehicle annotations.
model='laneAndVehicleDetection';
open_system(model);
Set the file paths of the downloaded network model in the Predict and Detector blocks of the Simulink model.
set_param('laneAndVehicleDetection/Lane Detection','NetworkFilePath',lanenetFile)
set_param('laneAndVehicleDetection/Vehicle Detector','DetectorFilePath',vehiclenetFile)
Set the location of the test video that you load to the Simulink model.
set_param('laneAndVehicleDetection/Traffic Video','inputFileName',mediaFile)
Lane Detection
The Predict block loads the pretrained lane detection network from the trainedLaneNet.mat file. This network takes an image as an input and outputs two lane boundaries that correspond to the left and right lanes of the ego vehicle. Each lane boundary is represented by the parabolic equation:

y = ax^2 + bx + c

Here, y is the lateral offset and x is the longitudinal distance from the vehicle. The network outputs the three parameters a, b, and c per lane. The LaneDetectionCoordinates MATLAB Function block defines a function lane_detection_coordinates that takes the output from the Predict block and outputs three parameters: laneFound, ltPts, and rtPts. The block uses thresholding to determine whether both the left and right lane boundaries are found. If both are found, laneFound is set to true, and the trajectories of the boundaries are calculated and stored in ltPts and rtPts.
type lane_detection_coordinates
function [laneFound,ltPts,rtPts] = lane_detection_coordinates(laneNetOut)

% Copyright 2020 The MathWorks, Inc.

persistent laneCoeffMeans;
if isempty(laneCoeffMeans)
    laneCoeffMeans = [-0.0002 0.0002 1.4740 -0.0002 0.0045 -1.3787];
end

persistent laneCoeffStds;
if isempty(laneCoeffStds)
    laneCoeffStds = [0.0030 0.0766 0.6313 0.0026 0.0736 0.9846];
end

params = laneNetOut .* laneCoeffStds + laneCoeffMeans;

isRightLaneFound = abs(params(6)) > 0.5; % c should be more than 0.5 for it to be a right lane
isLeftLaneFound  = abs(params(3)) > 0.5;

persistent vehicleXPoints;
if isempty(vehicleXPoints)
    vehicleXPoints = 3:30; % meters, ahead of the sensor
end

ltPts = coder.nullcopy(zeros(28,2,'single'));
rtPts = coder.nullcopy(zeros(28,2,'single'));

if isRightLaneFound && isLeftLaneFound
    rtBoundary = params(4:6);
    rt_y = computeBoundaryModel(rtBoundary, vehicleXPoints);
    ltBoundary = params(1:3);
    lt_y = computeBoundaryModel(ltBoundary, vehicleXPoints);

    % Visualize lane boundaries of the ego vehicle
    tform = get_tformToImage;
    % map vehicle to image coordinates
    ltPts = tform.transformPointsInverse([vehicleXPoints', lt_y']);
    rtPts = tform.transformPointsInverse([vehicleXPoints', rt_y']);
    laneFound = true;
else
    laneFound = false;
end

end
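The core of this function, denormalizing the six network outputs and evaluating the parabolic boundary model, can be sketched in Python. The coefficient means and standard deviations are the constants from the MATLAB Function block above; the computeBoundaryModel call is replaced here by a direct polynomial evaluation, and the vehicle-to-image transform is omitted:

```python
import numpy as np

# Normalization constants from the lane_detection_coordinates block
LANE_COEFF_MEANS = np.array([-0.0002, 0.0002, 1.4740, -0.0002, 0.0045, -1.3787])
LANE_COEFF_STDS  = np.array([0.0030, 0.0766, 0.6313, 0.0026, 0.0736, 0.9846])

def lane_boundaries(lane_net_out, x=np.arange(3, 31)):
    """Denormalize the network output and, if both lanes clear the
    |c| > 0.5 threshold, evaluate y = a*x^2 + b*x + c per boundary."""
    params = lane_net_out * LANE_COEFF_STDS + LANE_COEFF_MEANS
    left, right = params[0:3], params[3:6]
    if abs(left[2]) > 0.5 and abs(right[2]) > 0.5:
        lt_y = left[0] * x**2 + left[1] * x + left[2]
        rt_y = right[0] * x**2 + right[1] * x + right[2]
        return True, lt_y, rt_y
    return False, None, None

# A normalized output of all zeros maps back to the means, so both
# |c| values exceed the 0.5 threshold and the lanes are "found"
found, lt_y, rt_y = lane_boundaries(np.zeros(6))
print(found, lt_y.shape)  # True (28,)
```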
Vehicle Detection
A YOLO v2 object detection network is composed of two subnetworks: a feature extraction network followed by a detection network. This pretrained network uses ResNet-50 for feature extraction. The detection subnetwork is a small CNN compared to the feature extraction network and is composed of a few convolutional layers and layers specific to YOLO v2. The Simulink model performs vehicle detection by using the Object Detector block. This block takes an image as input and outputs the bounding box coordinates along with the confidence scores for the vehicles in the image.
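Downstream of the detector, detections can be filtered by confidence before annotation. A minimal Python sketch, assuming bounding boxes as [x, y, w, h] rows and a hypothetical 0.5 score threshold (the threshold is not part of this model):

```python
import numpy as np

def filter_detections(bboxes, scores, threshold=0.5):
    """Keep only detections whose confidence score exceeds the threshold."""
    keep = scores > threshold
    return bboxes[keep], scores[keep]

# Hypothetical detector output: three boxes with confidence scores
bboxes = np.array([[10, 20, 50, 40], [200, 80, 60, 45], [5, 5, 30, 30]])
scores = np.array([0.9, 0.3, 0.7])
kept_boxes, kept_scores = filter_detections(bboxes, scores)
print(len(kept_boxes))  # 2
```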
Annotation of Vehicle Bounding Boxes and Lane Trajectory in Traffic Video
The LaneVehicleAnnotation MATLAB Function block defines a function lane_vehicle_annotation, which annotates the vehicle bounding boxes with their confidence scores. Also, if laneFound is true, then the left and right lane boundaries stored in ltPts and rtPts are annotated in the traffic video.
type lane_vehicle_annotation
function In = lane_vehicle_annotation(laneFound, ltPts, rtPts, bboxes, scores, In)

% Copyright 2020 The MathWorks, Inc.

if ~isempty(bboxes)
    In = insertObjectAnnotation(In, 'rectangle', bboxes, scores);
end

pts = coder.nullcopy(zeros(28, 4, 'single'));

if laneFound
    prevpt = [ltPts(1,1) ltPts(1,2)];
    for k = 2:1:28
        pts(k,1:4) = [prevpt ltPts(k,1) ltPts(k,2)];
        prevpt = [ltPts(k,1) ltPts(k,2)];
    end
    In = insertShape(In, 'Line', pts, 'LineWidth', 2);

    prevpt = [rtPts(1,1) rtPts(1,2)];
    for k = 2:1:28
        pts(k,1:4) = [prevpt rtPts(k,1) rtPts(k,2)];
        prevpt = [rtPts(k,1) rtPts(k,2)];
    end
    In = insertShape(In, 'Line', pts, 'LineWidth', 2);

    In = insertMarker(In, ltPts);
    In = insertMarker(In, rtPts);
end

end
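The loops in this function turn the 28 boundary points into line segments, one [x1 y1 x2 y2] row per pair of consecutive points. The same idea in Python, with hypothetical point data:

```python
import numpy as np

def points_to_segments(pts):
    """Pair each boundary point with the next one, producing one
    [x1, y1, x2, y2] row per segment (n points -> n - 1 segments)."""
    pts = np.asarray(pts, dtype=np.float32)
    return np.hstack([pts[:-1], pts[1:]])

# Hypothetical lane boundary points in image coordinates
lane_pts = [[0, 0], [10, 5], [20, 12], [30, 21]]
segments = points_to_segments(lane_pts)
print(segments.shape)  # (3, 4)
```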
Run Simulation
Open the Configuration Parameters dialog box. On the Simulation Target pane, in the Deep Learning group, set the Target library to MKL-DNN.
set_param(model,'SimDLTargetLibrary','MKL-DNN');
On the Interface pane, in the Deep Learning group, set the Target library to MKL-DNN.
set_param(model, 'DLTargetLibrary','MKL-DNN');
To verify the lane and vehicle detection algorithms and display the lane trajectories, vehicle bounding boxes, and scores for the traffic video loaded in the Simulink model, run the simulation.
set_param('laneAndVehicleDetection', 'SimulationMode', 'Normal');
sim('laneAndVehicleDetection');
Generate and Build Simulink Model
In the Code Generation pane, set the Language to C++.
set_param(model,'TargetLang','C++');
Generate and build the Simulink model by using the slbuild command. The code generator places the files in the laneAndVehicleDetection_grt_rtw build subfolder under your current working folder.
currentDir = pwd;
status = evalc("slbuild('laneAndVehicleDetection')");
Generated C++ Code
The subfolder named laneAndVehicleDetection_grt_rtw contains the generated C++ code corresponding to the different blocks in the Simulink model and the specific operations being performed in those blocks. For example, the file trainedLaneNet0_0.h contains the C++ class with the attributes and member functions that represent the pretrained lane detection network.
hfile = fullfile(currentDir, 'laneAndVehicleDetection_grt_rtw',...
    'trainedLaneNet0_0.h');
coder.example.extractLines(hfile,'#ifndef RTW_HEADER_trainedLaneNet0_0_h_',...
    '#endif')
#ifndef RTW_HEADER_trainedLaneNet0_0_h_
#define RTW_HEADER_trainedLaneNet0_0_h_
#include "MWOnednnTargetNetworkImpl.hpp"
#include "MWTensorBase.hpp"
#include "MWTensor.hpp"
#include "MWCNNLayer.hpp"
#include "MWInputLayer.hpp"
#include "MWElementwiseAffineLayer.hpp"
#include "MWFusedConvActivationLayer.hpp"
#include "MWNormLayer.hpp"
#include "MWMaxPoolingLayer.hpp"
#include "MWFCLayer.hpp"
#include "MWReLULayer.hpp"
#include "MWOutputLayer.hpp"
#include "MWConvLayer.hpp"
#include "MWYoloExtractionLayer.hpp"
#include "MWSigmoidLayer.hpp"
#include "MWExponentialLayer.hpp"
#include "MWYoloSoftmaxLayer.hpp"
#include "MWConcatenationLayer.hpp"
#include "MWActivationFunctionType.hpp"
#include "MWRNNParameterTypes.hpp"
#include "MWTargetTypes.hpp"
#include "shared_layers_export_macros.hpp"
#include "MWOnednnUtils.hpp"
#include "MWOnednnCustomLayerBase.hpp"
#include "MWOnednnCommonHeaders.hpp"
#include "rtwtypes.h"

class trainedLaneNet0_0
{
 public:
  boolean_T isInitialized;
  boolean_T matlabCodegenIsDeleted;
  trainedLaneNet0_0();
  void setSize();
  void resetState();
  void setup();
  void predict();
  void cleanup();
  real32_T *getLayerOutput(int32_T layerIndex, int32_T portIndex);
  int32_T getLayerOutputSize(int32_T layerIndex, int32_T portIndex);
  real32_T *getInputDataPointer(int32_T index);
  real32_T *getInputDataPointer();
  real32_T *getOutputDataPointer(int32_T index);
  real32_T *getOutputDataPointer();
  int32_T getBatchSize();
  int32_T getOutputSequenceLength(int32_T layerIndex, int32_T portIndex);
  ~trainedLaneNet0_0();
 private:
  int32_T numLayers;
  MWTensorBase *inputTensors;
  MWTensorBase *outputTensors;
  MWCNNLayer *layers[18];
  MWOnednnTarget::MWTargetNetworkImpl *targetImpl;
  void allocate();
  void postsetup();
  void deallocate();
};
Similarly, the file yolov2ResNet50VehicleExample0_0.h contains the C++ class that represents the pretrained YOLO v2 detection network.
hfile = fullfile(currentDir, 'laneAndVehicleDetection_grt_rtw',...
    'yolov2ResNet50VehicleExample0_0.h');
coder.example.extractLines(hfile,'#ifndef RTW_HEADER_yolov2ResNet50VehicleExample0_0_h_',...
    '#endif')
#ifndef RTW_HEADER_yolov2ResNet50VehicleExample0_0_h_
#define RTW_HEADER_yolov2ResNet50VehicleExample0_0_h_
#include "MWOnednnTargetNetworkImpl.hpp"
#include "MWTensorBase.hpp"
#include "MWTensor.hpp"
#include "MWCNNLayer.hpp"
#include "MWInputLayer.hpp"
#include "MWElementwiseAffineLayer.hpp"
#include "MWFusedConvActivationLayer.hpp"
#include "MWNormLayer.hpp"
#include "MWMaxPoolingLayer.hpp"
#include "MWFCLayer.hpp"
#include "MWReLULayer.hpp"
#include "MWOutputLayer.hpp"
#include "MWConvLayer.hpp"
#include "MWYoloExtractionLayer.hpp"
#include "MWSigmoidLayer.hpp"
#include "MWExponentialLayer.hpp"
#include "MWYoloSoftmaxLayer.hpp"
#include "MWConcatenationLayer.hpp"
#include "MWActivationFunctionType.hpp"
#include "MWRNNParameterTypes.hpp"
#include "MWTargetTypes.hpp"
#include "shared_layers_export_macros.hpp"
#include "MWOnednnUtils.hpp"
#include "MWOnednnCustomLayerBase.hpp"
#include "MWOnednnCommonHeaders.hpp"
#include "rtwtypes.h"

class yolov2ResNet50VehicleExample0_0
{
 public:
  boolean_T isInitialized;
  boolean_T matlabCodegenIsDeleted;
  yolov2ResNet50VehicleExample0_0();
  void setSize();
  void resetState();
  void setup();
  void predict();
  void activations(int32_T layerIdx);
  void cleanup();
  real32_T *getLayerOutput(int32_T layerIndex, int32_T portIndex);
  int32_T getLayerOutputSize(int32_T layerIndex, int32_T portIndex);
  real32_T *getInputDataPointer(int32_T index);
  real32_T *getInputDataPointer();
  real32_T *getOutputDataPointer(int32_T index);
  real32_T *getOutputDataPointer();
  int32_T getBatchSize();
  int32_T getOutputSequenceLength(int32_T layerIndex, int32_T portIndex);
  ~yolov2ResNet50VehicleExample0_0();
 private:
  int32_T numLayers;
  MWTensorBase *inputTensors;
  MWTensorBase *outputTensors;
  MWCNNLayer *layers[57];
  MWOnednnTarget::MWTargetNetworkImpl *targetImpl;
  void allocate();
  void postsetup();
  void deallocate();
};
Note: If the System target file parameter is set to grt.tlc, you must select the Generate code only model configuration parameter. If you set the System target file to ert.tlc, you can clear the Generate code only parameter, but to create an executable file, you must generate an example main program.
Cleanup
Close the Simulink model.
save_system('laneAndVehicleDetection');
close_system('laneAndVehicleDetection');