3-D locations of world points matched across multiple images
worldPoints = triangulateMultiview(pointTracks,cameraPoses,intrinsics) returns the locations of 3-D world points that correspond to points matched across multiple images taken with calibrated cameras. pointTracks specifies an array of matched points. cameraPoses and intrinsics specify camera pose information and intrinsics, respectively. The function does not account for lens distortion.
[worldPoints,reprojectionErrors] = triangulateMultiview(___) additionally returns the mean reprojection error for each 3-D world point, using all input arguments from the prior syntax.
[worldPoints,reprojectionErrors,validIndex] = triangulateMultiview(___) additionally returns the indices of valid and invalid world points. Valid points are located in front of the cameras.
Reconstruct Scene from Multiple Views
Load images in the workspace.
imageDir = fullfile(toolboxdir("vision"),"visiondata","structureFromMotion");
imds = imageDatastore(imageDir);
Load precomputed camera parameters.
data = load(fullfile(imageDir,"cameraParams.mat"));
Get camera intrinsic parameters.
intrinsics = data.cameraParams.Intrinsics;
Compute features for the first image.
I = im2gray(readimage(imds,1));
I = undistortImage(I,intrinsics);
pointsPrev = detectSURFFeatures(I);
[featuresPrev,pointsPrev] = extractFeatures(I,pointsPrev);
Load camera poses.
poseData = load("sfmCameraPoses.mat");
absPoses = poseData.absPoses;
Create an image view set and add the first view.
vSet = imageviewset;
vSet = addView(vSet,1,absPoses(1),Points=pointsPrev);
Compute features and matches for the rest of the images.
for i = 2:numel(imds.Files)
    I = im2gray(readimage(imds,i));
    I = undistortImage(I,intrinsics);
    points = detectSURFFeatures(I);
    [features,points] = extractFeatures(I,points);
    vSet = addView(vSet,i,absPoses(i),Points=points);
    pairsIdx = matchFeatures(featuresPrev,features,MatchThreshold=5);
    vSet = addConnection(vSet,i-1,i,Matches=pairsIdx);
    featuresPrev = features;
end
Find point tracks.
tracks = findTracks(vSet);
Get camera poses.
cameraPoses = poses(vSet);
Find 3-D world points.
[xyzPoints,errors] = triangulateMultiview(tracks,cameraPoses,intrinsics);
z = xyzPoints(:,3);
idx = errors < 5 & z > 0 & z < 20;
pcshow(xyzPoints(idx,:),AxesVisibility="on",VerticalAxis="y", ...
    VerticalAxisDir="down",MarkerSize=30);
hold on
plotCamera(cameraPoses,Size=0.2);
hold off
pointTracks — Matched points across multiple images
N-element array of pointTrack objects
Matched points across multiple images, specified as an N-element array of pointTrack objects. Each element contains two or more points that match across multiple images.
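As an illustration (not taken from the shipped example), a single track can be constructed directly from view identifiers and matching image points; the coordinates below are made-up values.
% Hypothetical track: the same feature observed in views 1, 2, and 3.
viewIds = [1 2 3];
imagePoints = [317.5 242.1; 312.8 240.6; 308.2 239.9];   % one [x y] location per view
track = pointTrack(viewIds,imagePoints);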
cameraPoses — Camera pose information
Camera pose information, specified as a two-column table. You can obtain cameraPoses from an imageviewset object by using the poses object function. The table must contain these columns:
ViewId | View identifier in the view set.
AbsolutePose | Absolute pose of the view, specified as a rigidtform3d object.
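When poses come from a source other than an imageviewset, the two-column table can be assembled directly. This is a minimal sketch with made-up poses; the column names match the table above.
% Hypothetical two-view pose table with ViewId and AbsolutePose columns.
absPose1 = rigidtform3d;                          % identity pose for view 1
absPose2 = rigidtform3d(eye(3),[1 0 0]);          % view 2 translated 1 unit along x
cameraPoses = table([1;2],[absPose1;absPose2], ...
    VariableNames=["ViewId","AbsolutePose"]);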
intrinsics — Camera intrinsics
cameraIntrinsics object | M-element vector of cameraIntrinsics objects
Camera intrinsics, specified as a cameraIntrinsics object or an M-element vector of cameraIntrinsics objects. M is the number of camera poses. When all images are captured by the same camera, specify one cameraIntrinsics object. When images are captured by different cameras, specify a vector.
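If calibrated parameters are not already stored in a MAT-file, a cameraIntrinsics object can be constructed from the focal length, principal point, and image size. The values below are placeholders, not calibration results.
% Placeholder intrinsics: focal length [fx fy] and principal point [cx cy]
% in pixels, image size as [mrows ncols].
focalLength    = [800 800];
principalPoint = [320 240];
imageSize      = [480 640];
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);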
worldPoints — 3-D world points
3-D world points, returned as an N-by-3 matrix. Each row represents one 3-D world point and is of the form [x, y, z]. N is the number of 3-D world points.
reprojectionErrors — Reprojection errors
Reprojection errors, returned as an N-element vector. To calculate the reprojection errors, the function first projects each world point back into each image. Then, in each image, it calculates the distance between the detected and the reprojected point. Each element of the reprojectionErrors output is the average reprojection error for the corresponding world point in worldPoints.
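As a rough consistency check (a sketch, not toolbox code), the reported error for one point from the example above can be recomputed by reprojecting the point into every view of its track. This assumes the world2img and pose2extr functions available in recent toolbox releases.
% Reproject the k-th world point into each view of its track and average
% the distances to the detected locations.
k = 1;
track = tracks(k);
d = zeros(numel(track.ViewIds),1);
for j = 1:numel(track.ViewIds)
    pose = cameraPoses.AbsolutePose(cameraPoses.ViewId == track.ViewIds(j));
    projected = world2img(xyzPoints(k,:),pose2extr(pose),intrinsics);
    d(j) = norm(projected - track.Points(j,:));
end
meanError = mean(d);    % expected to be close to errors(k)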
validIndex — Validity of world points
M-by-1 logical vector
Validity of world points, returned as an M-by-1 logical vector. Valid points, denoted by a logical true (1), are located in front of the cameras. Invalid points, denoted by a logical false (0), are located behind the cameras.
The validity of a world point with respect to the position of a camera is determined by projecting the world point onto the image using the camera matrix and homogeneous coordinates. The world point is valid if the resulting scale factor is positive.
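A minimal sketch of that check for a single point, assuming a cameraIntrinsics object named intrinsics and made-up values for the world point and camera pose:
X = [0 0 5];                              % hypothetical world point
absPose = rigidtform3d;                   % hypothetical camera pose (identity)
tform = pose2extr(absPose);               % world-to-camera transformation
P = cameraProjection(intrinsics,tform);   % 3-by-4 camera projection matrix
x = P*[X 1]';                             % homogeneous image point [w*u; w*v; w]
isInFront = x(3) > 0;                     % valid if the scale factor w is positive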
Before detecting the points, correct the images for lens distortion by using the undistortImage function. Alternatively, you can directly undistort the points by using the undistortPoints function.
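For example (a sketch, assuming SURF features and a cameraIntrinsics object named intrinsics), you can undistort only the detected point coordinates rather than the full image:
% Detect points on the original (distorted) image, then correct only the
% point locations for lens distortion.
points = detectSURFFeatures(I);
undistortedLocations = undistortPoints(points.Location,intrinsics);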
Version History
Introduced in R2016a
R2022b: Supports premultiply geometric transformation convention
Starting in R2022b, most Computer Vision Toolbox™ functions create and perform geometric transformations using the premultiply convention. Accordingly, you can now specify the AbsolutePose field of the cameraPoses argument as a rigidtform3d object, which uses the premultiply convention. You can continue to estimate the camera poses using the poses object function of the imageviewset object, which has been updated to support the premultiply convention.
Although you can still specify AbsolutePose as a rigid3d object, this object is not recommended because it uses the postmultiply convention.
You can streamline your geometric transformation workflows by switching to the
premultiply geometric transformation convention. For more information, see Migrate Geometric Transformations to Premultiply Convention.
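As an illustration of that migration (a sketch, assuming an existing rigid3d pose): because rigid3d stores a postmultiply rotation, its rotation matrix is transposed when building the equivalent rigidtform3d object.
oldPose = rigid3d;                                               % legacy postmultiply pose (identity here)
newPose = rigidtform3d(oldPose.Rotation',oldPose.Translation);   % equivalent premultiply pose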
You can no longer specify cameraPoses as a three-column table with the columns ViewId, Orientation, and Location. This table format is not supported because there is not enough information to determine whether the 3-by-3 rotation matrix in the Orientation field is intended for the premultiply or the postmultiply convention.