Main Content

trackingSensorConfiguration

Represent sensor configuration for tracking

Description

The trackingSensorConfiguration object creates the configuration for a sensor used with a trackerPHD System object™ or a trackerGridRFS System object. You can use the trackingSensorConfiguration object to specify sensor parameters such as clutter density, sensor limits, and sensor resolution. You can also specify how a tracker perceives the detections from the sensor using properties such as FilterInitializationFcn, SensorTransformFcn, and SensorTransformParameters. See Create a Tracking Sensor Configuration for more details.

When used with a trackerPHD System object, the trackingSensorConfiguration object enables the tracker to perform four main operations:

  • Evaluate the probability of detection at points in state-space.

  • Compute the expected number of detections from a target.

  • Initiate components in the probability hypothesis density.

  • Obtain the clutter density of the sensor.

When used with a trackerGridRFS System object, the trackingSensorConfiguration object enables the tracker to project sensor data onto a 2-D grid. The tracker uses the SensorTransformParameters property to calculate the location and orientation of the sensor in the tracking coordinate frame, and uses the SensorLimits property to calculate the field of view and the maximum range of the sensor. The SensorTransformFcn and FilterInitializationFcn properties are not relevant for the trackerGridRFS System object.
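For instance, a minimal sketch of pairing a configuration with a trackerGridRFS object might look like this (the choice of a lidar sensor here is illustrative):

```matlab
% Create a configuration from a lidar sensor. The tracker reads the sensor
% pose from SensorTransformParameters and the field of view and maximum
% range from SensorLimits.
sensor = monostaticLidarSensor(1);
config = trackingSensorConfiguration(sensor);
config.IsValidTime = true;   % the sensor reports detections at every step

% Use the configuration with a grid-based tracker.
tracker = trackerGridRFS(SensorConfigurations={config});
```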

Creation

Description

config = trackingSensorConfiguration(sensorIndex) returns a trackingSensorConfiguration object with a specified sensor index, sensorIndex, and default property values.

config = trackingSensorConfiguration(sensor) returns a trackingSensorConfiguration object based on a sensor object.

config = trackingSensorConfiguration(sensor,platformPose) additionally specifies the pose of the sensor mounting platform. In this case, the SensorTransformParameters property includes the coordinate transform information from the scenario frame to the sensor platform frame.

config = trackingSensorConfiguration(sensorBlock) returns a trackingSensorConfiguration object based on a sensor block in Simulink.

config = trackingSensorConfiguration(sensorBlock,platformPose) specifies the pose of the sensor mounting platform. In this case, the SensorTransformParameters property includes the coordinate transform information from the scenario frame to the sensor platform frame.

configs = trackingSensorConfiguration(platform) returns the sensor configurations for all sensors mounted on a platform in a tracking scenario.

configs = trackingSensorConfiguration(scenario) returns the sensor configurations for all sensors defined in a tracking scenario. The object uses the poses of the platforms in the scenario to define the SensorTransformParameters property of each returned configuration.

___ = trackingSensorConfiguration(___,Name,Value) sets properties using one or more name-value arguments. Use this syntax with any of the previous syntaxes.

Inputs

Unique sensor index, specified as a positive integer.

Sensor object, specified as one of these objects:

Platform pose information, specified as a structure. The structure has these fields.

Field Name     Description
Position       Position of the platform with respect to the scenario frame, specified as a three-element vector.
Velocity       Velocity of the platform with respect to the scenario frame, specified as a three-element vector.
Orientation    Orientation of the platform frame with respect to the scenario frame, specified as a 3-by-3 rotation matrix or a quaternion.

Alternately, you can specify the structure using these fields.

Field Name     Description
Position       Position of the platform with respect to the scenario frame, specified as a three-element vector.
Velocity       Velocity of the platform with respect to the scenario frame, specified as a three-element vector.
Yaw            Yaw angle of the platform frame with respect to the scenario frame, specified as a scalar in degrees. The yaw angle corresponds to the z-axis rotation.
Pitch          Pitch angle of the platform frame with respect to the scenario frame, specified as a scalar in degrees. The pitch angle corresponds to the y-axis rotation.
Roll           Roll angle of the platform frame with respect to the scenario frame, specified as a scalar in degrees. The roll angle corresponds to the x-axis rotation.
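For illustration, a platform pose in the yaw-pitch-roll form can be constructed as follows and passed along with a sensor object (the numeric values and the choice of a lidar sensor are assumptions):

```matlab
% Platform pose using the yaw-pitch-roll form of the structure.
platformPose = struct("Position",[100 30 0], ...   % meters, scenario frame
    "Velocity",[-5 4 0], ...                       % meters per second
    "Yaw",45, ...                                  % z-axis rotation, degrees
    "Pitch",0, ...                                 % y-axis rotation, degrees
    "Roll",0);                                     % x-axis rotation, degrees

% Use the pose together with a sensor object to create the configuration.
sensor = monostaticLidarSensor(1);
config = trackingSensorConfiguration(sensor,platformPose);
```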

Simulink sensor block, specified as the handle or the path of a valid Simulink sensor block. A valid sensor block is one of these Simulink blocks:

Platform, specified as a Platform object.

Tracking scenario, specified as a trackingScenario object.

Outputs

Tracking sensor configuration, returned as a trackingSensorConfiguration object.

Tracking sensor configurations, returned as an N-element cell-array of trackingSensorConfiguration objects. N is the number of sensors on the platform or in the scenario.

Properties

Unique sensor identifier, specified as a positive integer. This property distinguishes data that come from different sensors in a multi-sensor system.

Note

If you specify the platform or scenario input argument, the object ignores the name-value input argument for this property.

Example: 2

Data Types: double

Detection reporting status of the sensor, specified as false or true. Set this property to true when the sensor must report detections within its sensor limits to the tracker. If the sensor is expected to detect a track or target but reports no detections, and the IsValidTime property is set to true, then the tracker counts the missed detection against the probability of existence of the track.

Data Types: logical

Filter initialization function, specified as a function handle or as a string scalar containing the name of a valid filter initialization function. The function initializes the PHD filter used by trackerPHD. The function must support the following syntaxes:

filter = filterInitializationFcn()
filter = filterInitializationFcn(detections)
filter is a valid PHD filter with components for new-born targets, and detections is a cell array of objectDetection objects. The first syntax allows you to specify the predictive birth density in the PHD filter without using detections. The second syntax allows the filter to initialize the adaptive birth density using detection information. See the BirthRate property of trackerPHD for more details.

If you create your own FilterInitializationFcn, you must also provide a transform function using the SensorTransformFcn property. In addition to the default filter initialization function, initcvggiwphd, Sensor Fusion and Tracking Toolbox™ provides other initialization functions, such as initctrectgmphd, initctgmphd, initcvgmphd, initcagmphd, initctggiwphd, and initcaggiwphd.
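As an illustrative sketch (the function name myFilterInitFcn is hypothetical), a custom initialization function that supports both required syntaxes can wrap one of the provided initialization functions:

```matlab
function filter = myFilterInitFcn(varargin)
% Custom PHD filter initialization supporting both required syntaxes:
%   filter = myFilterInitFcn()            % predictive birth density
%   filter = myFilterInitFcn(detections)  % adaptive birth density
if nargin == 0
    filter = initcvggiwphd;              % birth components without detections
else
    detections = varargin{1};            % cell array of objectDetection objects
    filter = initcvggiwphd(detections);  % birth components from detections
end
end
```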

Data Types: function_handle | char

Sensor transform function, specified as a function handle or as a character vector containing the name of a valid sensor transform function. The function transforms a track's state into the sensor's detection state. For example, the function transforms the track's state in the scenario Cartesian frame to the sensor's spherical frame. You can create your own sensor transform function, but it must support this syntax:

detStates = SensorTransformFcn(trackStates,params)
params are the parameters stored in the SensorTransformParameters property. Notice that the signature of the function is similar to a measurement function. Therefore, you can use a measurement function (such as cvmeas, ctmeas, or cameas) as the SensorTransformFcn.

The required form of the output detStates depends on the filter type and the target type.

  • When you use the object with gmphd for non-extended targets or with ggiwphd, detStates is an N-by-M matrix, where N is the number of rows in the SensorLimits property and M is the number of input states in trackStates.

  • When you use the object with gmphd for extended targets, the SensorTransformFcn allows you to specify multiple detStates per track state. In this case, detStates is an N-by-M-by-S matrix, where S is the number of detectable sources on the extended target. For example, if the target is described by a rectangular state, the detectable sources can be the corners of the rectangle.

    If any of the sources falls inside the SensorLimits, the target is declared detectable. The function uses the spread (maximum value − minimum value) of detStates and the ratio between the spread and the sensor resolution on each sensor limit to calculate the expected number of detections from each extended target. You can override this default behavior by providing an optional output in the SensorTransformFcn as:

    [..., Nexp] = SensorTransformFcn(trackStates, params)
    where Nexp is the expected number of detections from each extended track state.

The default SensorTransformFcn is the sensor transform function of the filter returned by FilterInitializationFcn. For example, the initcvggiwphd function returns the default cvmeas, whereas the initctggiwphd and initcaggiwphd functions return ctmeas and cameas, respectively.
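As a simplified sketch of a custom sensor transform function (the function name is hypothetical; it assumes a constant-velocity state [x;vx;y;vy;z;vz], ignores sensor orientation, and uses only the first parameters structure):

```matlab
function detStates = mySensorTransformFcn(trackStates,params)
% Map Cartesian track states to [az;el;range] detection states.
% trackStates is a 6-by-M matrix; detStates is N-by-M, where N matches
% the number of rows in the SensorLimits property.
pos = trackStates([1 3 5],:);                    % positions, one column per state
relPos = pos - params(1).OriginPosition;         % relative to the sensor origin
[az,el,rng] = cart2sph(relPos(1,:),relPos(2,:),relPos(3,:));
detStates = [rad2deg(az); rad2deg(el); rng];
end
```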

Data Types: function_handle | char

Parameters for the sensor transform function, specified as a structure or an array of structures. If you need to transform the state only once, specify the property as a structure. If you need to transform the state multiple times, specify it as an N-by-1 array of structures. For example, to transform a state from the scenario frame to the sensor frame, you usually need to first transform the state from the scenario rectangular frame to the platform rectangular frame, and then transform the state from the platform rectangular frame to the sensor spherical frame. The structure contains these fields.

Field          Description
Frame

Child coordinate frame type, specified as 'Rectangular' or 'Spherical'.

OriginPosition

Child frame origin position expressed in the Parent frame, specified as a 3-by-1 vector.

OriginVelocity

Child frame origin velocity expressed in the parent frame, specified as a 3-by-1 vector.

Orientation

Relative orientation between frames, specified as a 3-by-3 rotation matrix. If you set the IsParentToChild property to false, then specify Orientation as the rotation from the child frame to the parent frame. If you set the IsParentToChild property to true, then specify Orientation as the rotation from the parent frame to the child frame.

IsParentToChild

Flag to indicate the direction of rotation between parent and child frame, specified as true or false. The default is false. See description of the Orientation field for details.

HasAzimuth

Indicates whether outputs contain azimuth components, specified as true or false.

HasElevation

Indicates whether outputs contain elevation components, specified as true or false.

HasRange

Indicates whether outputs contain range components, specified as true or false.

HasVelocity

Indicates whether outputs contain velocity components, specified as true or false.

The scenario frame is the parent frame of the platform frame, and the platform frame is the parent frame of the sensor frame.

The default value for SensorTransformParameters is a 2-by-1 array of structures.

Fields            Struct 1        Struct 2
Frame             'Spherical'     'Rectangular'
OriginPosition    [0;0;0]         [0;0;0]
OriginVelocity    [0;0;0]         [0;0;0]
Orientation       eye(3)          eye(3)
IsParentToChild   false           false
HasAzimuth        true            true
HasElevation      true            true
HasRange          true            true
HasVelocity       false           true

In this table, Struct 2 accounts for the transformation from the scenario rectangular frame to the platform rectangular frame, and Struct 1 accounts for the transformation from the platform rectangular frame to the sensor spherical frame, given that you set the IsParentToChild property to false.

Note

If you use a custom sensor transformation function in the SensorTransformFcn property, you can specify this property in any format as long as the sensor transformation function accepts it.

Note

If you specify the platform or scenario input argument, the object ignores the name-value input argument for this property.

Data Types: struct

Sensor's detection limits, specified as an N-by-2 matrix, where N is the output dimension of the sensor transform function. The matrix must describe the lower and upper detection limits of the sensor in the same order as the outputs of the sensor transform function.

If you use cvmeas, cameas, or ctmeas as the sensor transform function, then you must provide the sensor limits in this order:

SensorLimits = [minAz   maxAz
                minEl   maxEl
                minRng  maxRng
                minRr   maxRr]

The description of the limits and their default values are given in this table. The default value for SensorLimits is a 3-by-2 matrix containing the first six limits in the table. If you use these three functions, you can also specify a matrix of a different size (such as 1-by-2, 2-by-2, or 4-by-2), but you must specify the limits in the order shown in the SensorLimits matrix.

Limits    Description                                           Default value
minAz     Minimum detectable azimuth in degrees.                -10
maxAz     Maximum detectable azimuth in degrees.                10
minEl     Minimum detectable elevation in degrees.              -2.5
maxEl     Maximum detectable elevation in degrees.              2.5
minRng    Minimum detectable range in meters.                   0
maxRng    Maximum detectable range in meters.                   1000
minRr     Minimum detectable range rate in meters per second.   N/A
maxRr     Maximum detectable range rate in meters per second.   N/A

Note

If you specify the platform or scenario input argument, the object ignores the name-value input argument for this property.

Data Types: double

Resolution of a sensor, specified as an N-element positive-valued vector, where N is the number of parameters specified in the SensorLimits property. To assign only one resolution cell to a parameter, specify its resolution as the difference between the maximum and minimum limits of that parameter.
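For example, with a 4-row SensorLimits (azimuth, elevation, range, range rate), a resolution vector that assigns a single cell to elevation sets the elevation resolution to the full elevation span (the numeric values are illustrative):

```matlab
sensorLimits     = [-10 10; -2.5 2.5; 0 1000; -50 50];
elSpan           = sensorLimits(2,2) - sensorLimits(2,1);  % one cell spans 5 degrees
sensorResolution = [5; elSpan; 10; 3];
```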

Note

If you specify the platform or scenario input argument, the object ignores the name-value input argument for this property.

Data Types: double

Maximum number of detections the sensor can report, specified as a positive integer.

Note

If you specify the platform or scenario input argument, the object ignores the name-value input argument for this property.

Example: 1

Data Types: double

Maximum number of detections the sensor can report per object, specified as a positive integer.

Note

If you specify the platform or scenario input argument, the object ignores the name-value input argument for this property.

Example: 3

Data Types: double

Expected number of false alarms per unit volume from the sensor, specified as a positive scalar.

Note

If you specify the platform or scenario input argument, the object ignores the name-value input argument for this property.

Example: 2e-3

Data Types: double

Probability of detecting a target inside the coverage limits, specified as a scalar in the range (0, 1].

Example: 0.75

Data Types: single | double

Probability of detecting a target estimated to be outside of the sensor limits, specified as a positive scalar. This property allows a trackerPHD object to account for the possibility that a target estimated to be outside the sensor limits is still detectable.

Note

If you specify the platform or scenario input argument, the object ignores the name-value input argument for this property.

Example: 0.03

Data Types: double

Examples

Consider a radar with the following sensor limits and sensor resolution.

  azLimits = [-10 10];
  elLimits = [-2.5 2.5];
  rangeLimits = [0 500];
  rangeRateLimits = [-50 50];
  sensorLimits = [azLimits;elLimits;rangeLimits;rangeRateLimits];
  sensorResolution = [5 2 10 3];

Specify the sensor transform function that transforms the Cartesian coordinates [x;y;vx;vy] in the scenario frame to spherical coordinates [az;el;range;rr] in the sensor's frame. Use the measurement function cvmeas as the sensor transform function.

  transformFcn = @cvmeas;

To specify the parameters required for cvmeas, use the SensorTransformParameters property. Here, you assume that the sensor is mounted at the center of the platform and that the platform, located at [100;30;20], is moving with a velocity of [-5;4;2] units per second in the scenario frame.

The first structure defines the sensor's location, velocity, and orientation in the platform frame.

  params(1) = struct("Frame","Spherical", ...
      "OriginPosition",[0;0;0], ...
      "OriginVelocity",[0;0;0], ...
      "Orientation",eye(3), ...
      "HasRange",true, ...
      "HasVelocity",true);

The second structure defines the platform location, velocity, and orientation in the scenario frame.

  params(2) = struct("Frame","Rectangular", ...
      "OriginPosition",[100;30;20], ...
      "OriginVelocity",[-5;4;2], ...
      "Orientation",eye(3), ...
      "HasRange",true, ...
      "HasVelocity",true);

Create the configuration.

  config = trackingSensorConfiguration(SensorIndex=3,SensorLimits=sensorLimits,...
                                       SensorResolution=sensorResolution,...
                                       SensorTransformParameters=params,...
                                       SensorTransformFcn=@cvmeas,...
                                       FilterInitializationFcn=@initcvggiwphd)
config = 
  trackingSensorConfiguration with properties:

                  SensorIndex: 3
                  IsValidTime: 0

                 SensorLimits: [4x2 double]
             SensorResolution: [4x1 double]
           SensorTransformFcn: @cvmeas
    SensorTransformParameters: [1x2 struct]

      FilterInitializationFcn: @initcvggiwphd
             MaxNumDetections: Inf
          MaxNumDetsPerObject: Inf

               ClutterDensity: 1.0000e-03
         DetectionProbability: 0.9000
      MinDetectionProbability: 0.0500

Create a fusionRadarSensor object and specify its properties.

sensor = fusionRadarSensor(1, ...
    FieldOfView=[20 5], ...
    RangeLimits=[0 500], ...
    HasRangeRate=true, ...
    HasElevation=true, ...
    RangeRateLimits=[-50 50], ...
    AzimuthResolution=5, ...
    RangeResolution=10, ...
    ElevationResolution=2, ...
    RangeRateResolution=3);

Specify the cvmeas function as the sensor transform function.

transformFcn = @cvmeas;

Create a trackingSensorConfiguration object.

config = trackingSensorConfiguration(sensor,SensorTransformFcn=transformFcn)
config = 
  trackingSensorConfiguration with properties:

                  SensorIndex: 1
                  IsValidTime: 0

                 SensorLimits: [4x2 double]
             SensorResolution: [4x1 double]
           SensorTransformFcn: @cvmeas
    SensorTransformParameters: [2x1 struct]

      FilterInitializationFcn: []
             MaxNumDetections: Inf
          MaxNumDetsPerObject: 1

               ClutterDensity: 1.0485e-07
         DetectionProbability: 0.9000
      MinDetectionProbability: 0.0500

Create a monostatic lidar sensor object.

sensor = monostaticLidarSensor(1);

Define the pose of the sensor platform with respect to the scenario frame.

platformPose = struct("Position", [10 -10 0], ...
    "Velocity", [1 1 0], ...
    "Orientation", eye(3));

Create a trackingSensorConfiguration object based on the sensor and the platform pose input.

config = trackingSensorConfiguration(sensor,platformPose)
config = 
  trackingSensorConfiguration with properties:

                  SensorIndex: 1
                  IsValidTime: 0

                 SensorLimits: [3x2 double]
             SensorResolution: [3x1 double]
           SensorTransformFcn: []
    SensorTransformParameters: [2x1 struct]

      FilterInitializationFcn: []
             MaxNumDetections: Inf
          MaxNumDetsPerObject: Inf

               ClutterDensity: 1.0000e-03
         DetectionProbability: 0.9000
      MinDetectionProbability: 0.0500

Create a trackingScenario object and add a platform.

scene = trackingScenario;
plat = platform(scene);

Add two sensors to the platform.

plat.Sensors = {fusionRadarSensor(1);monostaticLidarSensor(2)};

Create trackingSensorConfiguration objects using the platform.

configs = trackingSensorConfiguration(plat)
configs=2×1 cell array
    {1x1 trackingSensorConfiguration}
    {1x1 trackingSensorConfiguration}

Open a saved Simulink model that contains a Fusion Radar Sensor block.

open_system("sensorModel");

Get the path and handle of the block.

blockPath = getfullname(gcb);
blockHandle = getSimulinkBlockHandle(blockPath);

Create a trackingSensorConfiguration object based on the block path.

tscByBlockPath = trackingSensorConfiguration(blockPath)
tscByBlockPath = 
  trackingSensorConfiguration with properties:

                  SensorIndex: 1
                  IsValidTime: 0

                 SensorLimits: [2x2 double]
             SensorResolution: [2x1 double]
           SensorTransformFcn: []
    SensorTransformParameters: [2x1 struct]

      FilterInitializationFcn: []
             MaxNumDetections: 100
          MaxNumDetsPerObject: 1

               ClutterDensity: 5.2536e-13
         DetectionProbability: 0.9000
      MinDetectionProbability: 0.0500

Create a trackingSensorConfiguration object based on the block handle.

tscByBlockHandle = trackingSensorConfiguration(blockHandle)
tscByBlockHandle = 
  trackingSensorConfiguration with properties:

                  SensorIndex: 1
                  IsValidTime: 0

                 SensorLimits: [2x2 double]
             SensorResolution: [2x1 double]
           SensorTransformFcn: []
    SensorTransformParameters: [2x1 struct]

      FilterInitializationFcn: []
             MaxNumDetections: 100
          MaxNumDetsPerObject: 1

               ClutterDensity: 5.2536e-13
         DetectionProbability: 0.9000
      MinDetectionProbability: 0.0500

Create a trackingScenario object and add two platforms.

scene = trackingScenario;
plat1 = platform(scene);
plat2 = platform(scene);

Add two sensors to the first platform and one sensor to the second platform.

plat1.Sensors = {fusionRadarSensor(1);monostaticLidarSensor(2)};
plat2.Sensors = {fusionRadarSensor(3)};

Create trackingSensorConfiguration objects using the scenario.

configs = trackingSensorConfiguration(scene)
configs=3×1 cell array
    {1x1 trackingSensorConfiguration}
    {1x1 trackingSensorConfiguration}
    {1x1 trackingSensorConfiguration}

More About

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

Version History

Introduced in R2019a
