relativeCameraPose

(Not recommended) Calculate relative rotation and translation between camera poses

relativeCameraPose is not recommended. Use the estrelpose function instead. For more information, see Version History.

Description

[relativeOrientation,relativeLocation] = relativeCameraPose(M,cameraParams,inlierPoints1,inlierPoints2) returns the orientation and location of a calibrated camera relative to its previous pose. The two poses are related by M, which can be a fundamental matrix, an essential matrix, or a homography specified as an affine2d or projective2d object. The function computes the camera location only up to scale and returns relativeLocation as a unit vector.

[relativeOrientation,relativeLocation] = relativeCameraPose(M,cameraParams1,cameraParams2,inlierPoints1,inlierPoints2) returns the orientation and location of the second camera relative to the first one.

[relativeOrientation,relativeLocation,validPointsFraction] = relativeCameraPose(M, ___) additionally returns the fraction of the inlier points that project in front of both cameras.
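
For illustration, here is a minimal sketch of the single-camera syntax. It assumes you already have two sets of matched image points, matchedPoints1 and matchedPoints2 (M-by-2 matrices), and camera parameters in cameraParams (a cameraParameters or cameraIntrinsics object); these variable names are illustrative and not part of this reference page.

    % Estimate the essential matrix from the matched points.
    [E,inlierIdx] = estimateEssentialMatrix(matchedPoints1,matchedPoints2,cameraParams);

    % Keep only the inlier correspondences.
    inlierPoints1 = matchedPoints1(inlierIdx,:);
    inlierPoints2 = matchedPoints2(inlierIdx,:);

    % Orientation (3-by-3 matrix) and location (1-by-3 unit vector) of the
    % second camera pose relative to the first.
    [relativeOrientation,relativeLocation] = relativeCameraPose(E,cameraParams, ...
        inlierPoints1,inlierPoints2);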

Input Arguments


Fundamental matrix, essential matrix, or homography, specified as a 3-by-3 matrix, an affine2d object, or a projective2d object containing a homography matrix. You can obtain the 3-by-3 matrix using the estimateFundamentalMatrix or estimateEssentialMatrix function.

Data Types: single | double
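
For example, a minimal sketch of obtaining a 3-by-3 fundamental matrix from matched points (variable names are illustrative; RANSAC is one of several supported estimation methods):

    % Estimate the fundamental matrix, rejecting outliers with RANSAC.
    [F,inlierIdx] = estimateFundamentalMatrix(matchedPoints1,matchedPoints2, ...
        "Method","RANSAC");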

Camera parameters, specified as a cameraParameters or cameraIntrinsics object. You can return the cameraParameters object using the estimateCameraParameters function. The cameraParameters object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.

Camera parameters for camera 1, specified as a cameraParameters or cameraIntrinsics object. You can return the cameraParameters object using the estimateCameraParameters function. The cameraParameters object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.

Camera parameters for camera 2, specified as a cameraParameters or cameraIntrinsics object. You can return the cameraParameters object using the estimateCameraParameters function. The cameraParameters object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.

Coordinates of corresponding points in view 1, specified as an M-by-2 matrix of [x,y] coordinates, or as a SIFTPoints, SURFPoints, MSERRegions, or cornerPoints object. You can obtain these points using the estimateFundamentalMatrix or estimateEssentialMatrix function.

Coordinates of corresponding points in view 2, specified as an M-by-2 matrix of [x,y] coordinates, or as a SIFTPoints, SURFPoints, MSERRegions, or cornerPoints object. You can obtain these points using the estimateFundamentalMatrix or estimateEssentialMatrix function.

Output Arguments


Orientation of camera, returned as a 3-by-3 matrix. If you use only one camera, the matrix describes the orientation of the second camera pose relative to the first camera pose. If you use two cameras, the matrix describes the orientation of camera 2 relative to camera 1.

Data Types: single | double

Location of camera, returned as a 1-by-3 unit vector. If you use only one camera, the vector describes the location of the second camera pose relative to the first camera pose. If you use two cameras, the vector describes the location of camera 2 relative to camera 1.

Data Types: single | double

Fraction of valid inlier points that project in front of both cameras, returned as a scalar. If validPointsFraction is too small (for example, less than 0.9), the fundamental matrix is likely incorrect.
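
For example, a minimal sketch that uses validPointsFraction as a sanity check, assuming F, cameraParams, inlierPoints1, and inlierPoints2 already exist (variable names are illustrative):

    [relOrient,relLoc,validFraction] = relativeCameraPose(F,cameraParams, ...
        inlierPoints1,inlierPoints2);
    if validFraction < 0.9
        % The fundamental matrix is likely incorrect; re-estimate it from a
        % different set of point correspondences.
        warning("Low fraction of valid points: %.2f",validFraction)
    end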

Tips

  • You can compute the camera extrinsics, rotationMatrix and translationVector, corresponding to the camera pose, from relativeOrientation and relativeLocation:

    [rotationMatrix,translationVector] = cameraPoseToExtrinsics(relativeOrientation,relativeLocation)
    The orientation of the previous camera pose is the identity matrix, eye(3), and its location is [0,0,0].

  • You can then use rotationMatrix and translationVector as inputs to the cameraMatrix function, as sketched in the example after these tips.

  • You can compute four possible combinations of orientation and location from the input fundamental matrix. Three of the combinations are not physically realizable, because they project 3-D points behind one or both cameras. The relativeCameraPose function uses inlierPoints1 and inlierPoints2 to determine the realizable combination.
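
Putting these tips together, here is a minimal sketch of the postmultiply-convention workflow. It assumes relativeOrientation and relativeLocation were returned by relativeCameraPose and that cameraParams holds the camera parameters; the variable names are illustrative.

    % Convert the relative camera pose to extrinsics.
    [rotationMatrix,translationVector] = cameraPoseToExtrinsics( ...
        relativeOrientation,relativeLocation);

    % Camera projection matrix for the second view.
    camMatrix = cameraMatrix(cameraParams,rotationMatrix,translationVector);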


Version History

Introduced in R2016b


R2022b: Not recommended

Starting in R2022b, most Computer Vision Toolbox™ functions create and perform geometric transformations using the premultiply convention. However, the relativeCameraPose function uses the postmultiply convention. Although there are no plans to remove relativeCameraPose at this time, you can streamline your geometric transformation workflows by switching to the estrelpose function, which supports the premultiply convention. For more information, see Migrate Geometric Transformations to Premultiply Convention.

To update your code:

  • Change instances of the function name relativeCameraPose to estrelpose.

  • Specify the cameraParams argument as a cameraIntrinsics object. If you have a cameraParameters object, then you can get a cameraIntrinsics object by querying the Intrinsics property. If the Intrinsics property is empty according to the isempty function, then set the ImageSize property of the cameraParameters object to an arbitrary vector before querying the Intrinsics property. For example:

    load worldToImageCorrespondences.mat
    % If the Intrinsics property is empty, set ImageSize so that the
    % cameraParameters object can populate its Intrinsics property.
    if isempty(cameraParams.Intrinsics)
        cameraParams.ImageSize = [128 128];
    end
    intrinsics = cameraParams.Intrinsics;
  • Replace the two output arguments relativeOrientation and relativeLocation with a single output argument, relativePose. If you need to obtain the orientation matrix and location vector, you can query the R and Translation properties of the rigidtform3d object returned as relativePose. Note that the value of R is the transpose of relativeOrientation.

The following examples show discouraged usage and the recommended replacement.

Discouraged usage: This example calculates the relative camera pose of a single camera using the relativeCameraPose function, with the cameraParams argument specified as a cameraParameters object.

[relativeOrientationOld,relativeLocation] = relativeCameraPose( ...
    M,cameraParams,inlierPoints1,inlierPoints2)

Recommended replacement: This example gets the camera intrinsics using the Intrinsics property of a cameraParameters object, then estimates the relative camera pose using the estrelpose function.

intrinsics = cameraParams.Intrinsics;
relativePose = estrelpose( ...
     M,intrinsics,inlierPoints1,inlierPoints2);

If you need to obtain the relative camera orientation and location, then you can query properties of relativePose.

relativeOrientation = relativePose.R;
relativeLocation = relativePose.Translation;

If you want the orientation in the postmultiply convention, take the transpose of relativePose.R.

relativeOrientationOld = relativePose.R';

Discouraged usage: This example calculates the relative camera pose of camera 2 with respect to camera 1 using the relativeCameraPose function.

[relativeOrientation,relativeLocation] = relativeCameraPose( ...
    M,cameraParams1,cameraParams2,inlierPoints1,inlierPoints2)

Recommended replacement: This example gets the camera intrinsics of two cameras using the Intrinsics property of two cameraParameters objects, then estimates the relative camera pose using the estrelpose function.

intrinsics1 = cameraParams1.Intrinsics;
intrinsics2 = cameraParams2.Intrinsics;
relativePose = estrelpose( ...
     M,intrinsics1,intrinsics2,inlierPoints1,inlierPoints2);