Obtain Pixel Shift from Registered Images

Hi all,
I have a series of images taken of 96-well plates that I need to analyze, but they are slightly offset as the plate tends to shift while the images are being taken. I want to create a stack where the wells are in the same position. I have had some good luck with imregister, because normxcorr2 was producing weird results (though maybe that was user error).
My question is: how do I best obtain the pixel shift applied to the moving image from imregister, and can I then scale it to apply to differently sized images? I have used imregtform to obtain the translation matrix, but I had to resize the images to 1/4 of their original size to not make the registration take forever. I need the shift information, however, to align the original images so I can analyze those without data loss.
I hope this makes sense! I appreciate any and all feedback--and if there's something else that may work better I'd be open to trying that out as well. Thanks!

Answers (2)

Matt J on 7 May 2025
Edited: Matt J on 7 May 2025
There's no reason you can't use the tform from imregtform to imwarp the original images, even if the registration was done on cropped or downsampled images. You just need to use imref2d objects to appropriately inform imregtform and imwarp what the pixel sizes are supposed to be.
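For a pure translation there is also a quicker alternative to full spatial referencing: scale the estimated shift by the resize factor before warping the full-size image. A minimal sketch, assuming t came from images downsampled by 0.25 and (in newer releases) is a transltform2d with a settable Translation property; in older releases imregtform returns an affine2d whose T(3,1:2) holds the shift instead:

```matlab
% Hypothetical sketch: rescale a translation estimated on 1/4-size images
% so it can be applied to the full-resolution moving image.
tFull = t;                                   % copy the transform object
tFull.Translation = t.Translation / 0.25;    % 4x larger shift in full-res pixels
Rfull = imref2d(size(moving));               % full-size spatial reference
registered = imwarp(moving, tFull, 'OutputView', Rfull);
```

The imref2d route below is the more general solution, since it also handles transforms that are not pure translations.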

4 Comments

So maybe that's where I'm going wrong. I'm not super familiar with imref2d objects, and just using imwarp with the output from the tform isn't registering them. I've attached a portion of my code here.
% image registration
fixed = GrayStack{1}; % the first picture in my stack, what I want to register all other images to
for b = 1:numel(GrayStack)
    moving = GrayStack{b};
    fixed_smaller = imresize(fixed, 0.25);
    moving_smaller = imresize(moving, 0.25);
    [optimizer,metric] = imregconfig('multimodal');
    optimizer.InitialRadius = 0.009;
    optimizer.Epsilon = 1.5e-4;
    optimizer.GrowthFactor = 1.01;
    optimizer.MaximumIterations = 300;
    t = imregtform(moving_smaller,fixed_smaller,'translation',optimizer,metric);
    ImageShift{b} = t;
    shifted_img = imwarp(GrayStack{b},ImageShift{b});
    SamePositionStack{b} = shifted_img;
end
fixed = GrayStack{1}; % the first picture in my stack, what I want to register all other images to
for b = 1:numel(GrayStack)
    moving = GrayStack{b};
    fixed_smaller = imresize(fixed, 0.25);
    moving_smaller = imresize(moving, 0.25);
    [optimizer,metric] = imregconfig('multimodal');
    optimizer.InitialRadius = 0.009;
    optimizer.Epsilon = 1.5e-4;
    optimizer.GrowthFactor = 1.01;
    optimizer.MaximumIterations = 300;
    Rbig = imref2d(size(moving));
    Rsmall = imref2d(size(moving_smaller),4,4); % each downsampled pixel spans 4 world units
    t = imregtform(moving_smaller, Rsmall, ...
                   fixed_smaller, Rsmall, ...
                   'translation',optimizer,metric);
    shifted_img = imwarp(moving,Rbig,t);
    ImageShift{b} = t;
    SamePositionStack{b} = shifted_img;
end
Unfortunately, after copying this and running it with my image stack, the images are not in the same position (i.e., the wells still move location) when I display SamePositionStack. When I ran pretty much the same code using imregister instead, displaying SamePositionStack yielded a much more consistent series where I could just watch the wells. Any clue why?
Change to,
shifted_img = imwarp(moving,Rbig,t,'OutputView',Rbig);
Example:
moving=im2double(im2gray(imread('cameraman.tif')));
fixed=imtranslate(moving,[10,15]);
fixed_smaller = imresize(fixed, 0.25);
moving_smaller = imresize(moving, 0.25);
[optimizer,metric] = imregconfig('multimodal');
optimizer.InitialRadius = 0.009;
optimizer.Epsilon = 1.5e-4;
optimizer.GrowthFactor = 1.01;
optimizer.MaximumIterations = 300;
Rbig=imref2d(size(moving));
Rsmall=imref2d(size(moving_smaller),4,4);
t = imregtform(moving_smaller, Rsmall, ...
               fixed_smaller, Rsmall, ...
               'translation',optimizer,metric)
t =
  transltform2d with properties:

    Dimensionality: 2
       Translation: [10.0995 14.7921]
                 A: [1.0000       0  10.0995
                          0  1.0000  14.7921
                          0       0   1.0000]
shifted_img = imwarp(moving,Rbig,t,'OutputView',Rbig);
figure
imshowpair(moving, fixed,"montage")
figure
imshowpair(shifted_img, fixed,"montage")


So I'm assuming you cannot get control over your plate and camera, e.g., by putting the plate into a jig to position it precisely every time, with the camera overhead and rigidly fixed to mounts. That's too bad. But to align the images, you can use imregister. It gives you the shifted images directly. You don't need to get a translation vector and then create the registered images from that yourself, so I'm not sure why you want to.
I don't think you can do it without some data loss. I think the shifted image is of the same size as the fixed reference image so some pixels may be shifted out of the frame. I don't believe there is an option to return a larger image so that you do not lose any pixels. However, you should set up your optical system such that any pixels shifted out are not inside any of the 96 wells, so that you're just shifting out border pixels that you don't plan on analyzing anyway.
Registering the images would be useful if you (1) want to make a movie of how the wells are changing over time, or (2) you have a fixed set of circular ROI masks and want to use that for all images instead of trying to determine the array of circles from the image itself (which could be done assuming that at least some of the wells have some contrasting color in them).

5 Comments

We are currently working on fixing the camera itself, but in the meantime I still need to work with these images. Hopefully once the setup is more stable, this won't be necessary. I have used imregister on the original images, but running through a stack with them full size (3040 x 4056) takes quite a while.
Luckily the pixels that shift out of frame are border pixels and I couldn't care less about them. Like you mentioned, I am looking to do ROI analysis and analyze each of the wells over time. I guess I had figured I could get the shift from the tform and apply it to my ROI mask before analyzing, but maybe there's a better way to do that?
I think it's computing the transform that takes the most time, rather than creating the shifted output image, but I'm not sure. Anyway, if you had the transform and you had an ROI mask as a binary image, it might take as long to make a new shifted mask as to just shift the grayscale image, which is what imregister does for you. So I'm not sure there would be any time savings.
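For what it's worth, shifting a binary mask with an already-computed tform is a one-liner. A minimal sketch, assuming t is a translation tform from imregtform and roiMask is a logical image the same size as the fixed image:

```matlab
% Hypothetical sketch: warp a binary ROI mask with a previously computed tform.
% 'nearest' interpolation keeps the mask binary (no gray edge pixels), and
% 'OutputView' keeps the result the same size as the original mask.
R = imref2d(size(roiMask));
shiftedMask = imwarp(roiMask, R, t, 'nearest', 'OutputView', R);
```

Whether this is faster than just warping the grayscale image depends on the image sizes; the transform estimation is still the dominant cost either way.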
I have only used imregtform once a long time ago. Might it be faster to just call it on a subsampled image, like subsampled by a factor of 10, and then apply the transform to the full size image? You might have to modify some parameters to get it to work on the full-sized image.
One thing I've done in the past is to find the ROI automatically from the image, but it depends on having at least a blob or two in each row and column. You threshold the image and then find the centroids of all the blobs. Then you use kmeans on all the centroids to find 8 rows and 12 columns. Then you can get a perfectly spaced grid by looking at the mean distance between the kmeans centroids. So now you have a 8x12 grid of well centroids and you can make up a circular mask by putting a circle of fixed diameter at each well's centroid. It worked pretty well. It even worked with an outer row that was completely missing but I had to tell it if the missing row was above or below the other rows.
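A rough sketch of that grid-finding pipeline, assuming a grayscale image img where filled wells are brighter than the background; the threshold method, well radius, and grid size are placeholders to tune (kmeans requires the Statistics and Machine Learning Toolbox):

```matlab
% Hypothetical sketch: locate an 8x12 well grid from blob centroids.
bw = imbinarize(img);                        % threshold; tune for your images
s  = regionprops(bw, 'Centroid');
C  = vertcat(s.Centroid);                    % N-by-2 list of [x y] centroids
[~, colX] = kmeans(C(:,1), 12);              % cluster x-coords into 12 columns
[~, rowY] = kmeans(C(:,2), 8);               % cluster y-coords into 8 rows
[X, Y] = meshgrid(sort(colX), sort(rowY));   % 8x12 grid of well centers
% Put a circle of fixed radius at each grid point to build the ROI mask:
r = 20;                                      % placeholder well radius in pixels
[xx, yy] = meshgrid(1:size(img,2), 1:size(img,1));
mask = false(size(img));
for k = 1:numel(X)
    mask = mask | ((xx - X(k)).^2 + (yy - Y(k)).^2 <= r^2);
end
```

Regularizing the grid with the mean spacing between cluster centers (rather than using the raw kmeans centers) makes it robust to a few empty wells.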
From playing around with it, I do think the time is spent computing the transform. I've resized the images for use with imregtform, and that runs in a short enough time. But I run into issues with applying the transform to the original image, even with the edits that Matt J made above.
I'm intrigued by finding them automatically. I had kind of discounted any hope of doing that because there's a large chance that most of my images will be pretty dark and won't have any fluorescence. How large/bright a blob do you think I would need in each row/column? I'm worried that the thresholding wouldn't be sensitive enough to distinguish the wells from the background, especially with some of the noise that is currently present.
Well if there's nothing there, there's nothing there to find. If it's all black, you can't threshold and find anything brighter than the background. In that case I think you need to redouble your efforts to lock down your plate and camera. Attach your camera to a rigid metal frame. Then for the plate, make an L-shaped metal plate that you can then shove your 96 well plate into the corner of. Make that jig plate bolted down so that it does not change position with respect to the camera's field of view. Then when you shove your 96 well plate into the corner of the L, it should be in precisely the same position, pixel-wise, every single time. It's not hard so get to work on it today. You'll spend less time doing that than the days you're trying to develop custom software to automatically find a shifting plate.
Unfortunately it isn't my system, and the people who designed it are aware of the issue and are currently working on fixing it; it just isn't there yet. Kind of a complicated/weird situation. I appreciate the suggestions though. :)


Release: R2019b
Asked: 7 May 2025
Edited: 9 May 2025
