# pointsToWorld

Determine world coordinates of image points

## Description


`worldPoints = pointsToWorld(intrinsics,rotationMatrix,translationVector,imagePoints)` returns world points on the X-Y plane (that is, points with Z = 0 in the world coordinate system) that correspond to the input image points. The points are mapped using the camera intrinsics, the rotation matrix, and the translation vector.
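For a standard (non-fisheye) pinhole camera, this mapping can be understood as inverting a planar homography. The sketch below is an interpretation in conventional column-vector notation, not text from this page: with intrinsic matrix K, rotation columns r1 and r2, and translation t, an image point (x, y) and a world point (X, Y) on the Z = 0 plane are related by

```latex
s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
  = K \begin{bmatrix} r_1 & r_2 & t \end{bmatrix}
    \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix},
\qquad
H = K \begin{bmatrix} r_1 & r_2 & t \end{bmatrix}
```

so the world point is recovered by applying the inverse homography H⁻¹ to the homogeneous image point and dividing out the scale factor s.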

## Examples


Map the points of a fisheye image to world coordinates and compare these points to the ground truth points. A series of checkerboard pattern images is used to estimate the fisheye parameters and calibrate the camera.

Create an image datastore of checkerboard calibration images.

images = imageDatastore(fullfile(toolboxdir('vision'),'visiondata' ,...
'calibration','gopro'));

Detect the checkerboard corners in the images. Leave the last image for testing.

[imagePoints,boardSize] = detectCheckerboardPoints(images.Files(1:end-1));

Generate the world coordinates of the checkerboard corners in the pattern-centric coordinate system, with the upper-left corner at (0,0).

squareSize = 29; % millimeters
worldPoints = generateCheckerboardPoints(boardSize,squareSize);

Estimate the fisheye camera parameters from the image and world points. Read the first image to get the image size.

I = readimage(images,1);
imageSize = [size(I,1) size(I,2)];
fisheyeParams = estimateFisheyeParameters(imagePoints,worldPoints,imageSize);
intrinsics = fisheyeParams.Intrinsics;

Read the last image, which was reserved for testing, and detect the checkerboard corners in it.

I = readimage(images,numel(images.Files));
imagePoints = detectCheckerboardPoints(I);

Compute the extrinsics of the camera relative to the checkerboard in the test image.

[R,t] = extrinsics(imagePoints,worldPoints,intrinsics);

Map image points to world coordinates in the X-Y plane.

newWorldPoints = pointsToWorld(intrinsics,R,t,imagePoints);

Compare estimated world points to the ground truth points.

plot(worldPoints(:,1),worldPoints(:,2),'gx');
hold on
plot(newWorldPoints(:,1),newWorldPoints(:,2),'ro');
legend('Ground Truth','Estimates');
hold off

## Input Arguments


`intrinsics` — Camera intrinsics, specified as a cameraIntrinsics or a fisheyeIntrinsics object. The object stores information about a camera's intrinsic calibration parameters, including the lens distortion parameters.

`rotationMatrix` — 3-D rotation of the world coordinates relative to the image coordinates, specified as a 3-by-3 matrix. The rotation matrix, together with the translation vector, enables you to transform points from the world coordinate system to the camera coordinate system. The rotationMatrix and translationVector inputs must have the same data type.

Data Types: double | single

`translationVector` — 3-D translation of the world coordinates relative to the image coordinates, specified as a 1-by-3 vector. The translation vector, together with the rotation matrix, enables you to transform points from the world coordinate system to the camera coordinate system. The rotationMatrix and translationVector inputs must have the same data type.

Data Types: double | single
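As a concrete illustration of the world-to-camera transform these two inputs define, the sketch below uses the toolbox's row-vector convention, in which a 1-by-3 world point w maps to camera coordinates as w * rotationMatrix + translationVector. The numeric values are made up for illustration and are not from this page.

```matlab
% Hypothetical example values, not part of the original documentation.
rotationMatrix = eye(3);        % identity rotation for illustration
translationVector = [0 0 10];   % camera 10 units along the world Z-axis
worldPts = [0 0 0; 1 2 0];      % two points on the X-Y plane (Z = 0)

% World -> camera transform (row-vector convention)
cameraPts = worldPts * rotationMatrix + translationVector;
```

With these values, cameraPts is [0 0 10; 1 2 10]: both points sit 10 units in front of the camera along its Z-axis.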

`imagePoints` — Image points, specified as an M-by-2 matrix of M [x, y] image point coordinates.

When the intrinsics input is a cameraIntrinsics object, pointsToWorld does not account for lens distortion. Therefore, the imagePoints input must contain points detected in an undistorted image, or you must first undistort the points using the undistortPoints function. When the intrinsics input is a fisheyeIntrinsics object, the function accounts for lens distortion, so the image points can be detected directly in the original (distorted) fisheye image.
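The undistort-first workflow for a cameraIntrinsics object might look like the following sketch. The variable names are placeholders, and both functions require Computer Vision Toolbox; this is an illustration, not text from this page.

```matlab
% Sketch: undistort detected points before mapping them to world
% coordinates. Assumes intrinsics is a cameraIntrinsics object and
% R, t are extrinsics obtained elsewhere (placeholder names).
undistortedPoints = undistortPoints(imagePoints, intrinsics);
worldPoints = pointsToWorld(intrinsics, R, t, undistortedPoints);
```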

## Output Arguments


`worldPoints` — World coordinates, returned as an M-by-2 matrix of [x, y] coordinates on the X-Y plane. M is the number of input image points.