Apply a transform estimated on a lower-resolution image to the higher-resolution original

Hello, I am performing a transformation for video stabilization on frames at 0.125 of the original size. I would now like to infer the geometric transform back onto the original image.
Hcumulative = eye(3); % initialize the cumulative transform before the loop
for i = 1:10
% Estimate transform from frame A to frame B, and fit as an s-R-t
H = cvexEstStabilizationTform(imgA_small,imgB_small);
HsRt = cvexTformToSRT(H);
Hcumulative = HsRt * Hcumulative;
imgB_small_transform = imwarp(imgB_small,affine2d(Hcumulative),'OutputView',imref2d(size(imgB_small)));
% img_B_original_size = ??
end
Any ideas how to achieve this, please? Many thanks,

 Accepted Answer

You have an estimate for rotation, scale, and translation. Ideally, the rotation and scale estimates that you obtain at lower scale are invariant to the fact that you have pre-scaled your images (this won't be strictly true in reality because you will lose information in downsampling that your features may be sensitive to). If your registration estimate indicates a scale difference of 1.2 and a rotation of 10 degrees, this is independent of whether you have pre-scaled your images as long as both images are at the same scale.
The translation component of your SRT transformation is relative to your downsampled images. You can see this immediately if you think about it: a translation in pixels of (-2,5) in your downsampled images is not the same amount of translation (in physical units) as (-2,5) pixels at full scale.
To solve your problem, you will need to rescale your translation estimate in HsRt(3,1:2) by a factor of 8 to account for your initial pre-scaling.
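In affine2d's row-vector convention ([x y 1] * T, translation in the third row), this rescaling is the same as conjugating the matrix by the down/up scaling transforms, which leaves the rotation/scale block untouched. A minimal numeric sketch (NumPy here just to check the algebra; the 8x factor and the s-R-t layout come from the thread, the specific numbers are made up):

```python
import numpy as np

k = 8.0  # full-res pixels per low-res pixel (1 / 0.125)

# An example s-R-t transform in MATLAB/affine2d row-vector convention:
# [x y 1] * H, with the translation in H[2, 0:2].
s, theta = 1.05, np.deg2rad(3.0)
tx, ty = -2.0, 5.0  # translation measured in low-res pixels
H_small = np.array([
    [ s*np.cos(theta), s*np.sin(theta), 0.0],
    [-s*np.sin(theta), s*np.cos(theta), 0.0],
    [ tx,              ty,              1.0],
])

# Rescale only the translation row; scale and rotation are unchanged.
H_full = H_small.copy()
H_full[2, :2] *= k

# Equivalent view: conjugate by the scaling transform D, where
# [x y 1] * D maps full-res coordinates to low-res coordinates.
D = np.diag([1/k, 1/k, 1.0])
assert np.allclose(H_full, D @ H_small @ np.linalg.inv(D))

# Consistency check: warping a full-res point with H_full matches
# downscale -> apply H_small -> upscale.
p_full = np.array([100.0, 240.0, 1.0])
via_small = (p_full @ D @ H_small) @ np.linalg.inv(D)
print(np.allclose(p_full @ H_full, via_small))  # True
```

The rotation/scale block comes out identical on both paths; only the translation row picks up the factor of 8.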

More Answers (1)

There are dozens of image registration files on the File Exchange.
I don't know if all of them require the images to be at the same resolution, but you can always imresize() the high-resolution image down to the same resolution as the low-resolution image, if needed. It shouldn't matter that much for the low-parameter affine registration that you're trying to do.

15 Comments

That's what I did. I have performed cvexEstStabilizationTform and have the transform between the 2 small images. Now I would like to infer this transform back onto the original image, which is 8 times the size of the small image.
It's not clear what you mean by "infer this transform" if you already have the transform.
You mean you want to apply the transform to the low-res image to align it with the high-res image? Applying the IMTRANSFORM command to the low-res image will do that. Optionally, you could then use IMRESIZE to upsample to the same size as the high-res image, if that's what you're trying to do.
I do what Matt J does. I use polyfitn() to fit a 2D polynomial to a subsampled version of my background image, because it's faster than doing it on the whole giant image. Then I apply it to the small image and use imresize to interpolate it up to the full size image. Theoretically it's not mathematically the same since imresize may not use the same interpolating function as your transform, but for me, for estimating slowly varying background illumination differences, it's good enough, and it's a lot faster than applying it to the full size image.
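That fit-on-a-subsample workflow can be sketched as follows. This is an illustrative NumPy stand-in (polyfitn is a File Exchange function, so an ordinary least-squares fit of a 2D quadratic takes its place here, and the image is synthetic):

```python
import numpy as np

# Synthetic full-size image: smooth low-order background plus noise.
rng = np.random.default_rng(0)
H, W = 512, 512
yy, xx = np.mgrid[0:H, 0:W].astype(float)
true_bg = 50 + 0.1*xx + 0.05*yy + 1e-4*xx*yy
img = true_bg + rng.normal(0, 2.0, size=(H, W))

# Fit the 2D polynomial on a subsampled grid -- much cheaper than
# fitting the whole giant image.
step = 8
xs, ys = xx[::step, ::step].ravel(), yy[::step, ::step].ravel()
vals = img[::step, ::step].ravel()
A = np.column_stack([np.ones_like(xs), xs, ys, xs*ys, xs**2, ys**2])
coef, *_ = np.linalg.lstsq(A, vals, rcond=None)

# Evaluate the fitted model back on the full-size grid. (Evaluating the
# polynomial directly avoids the interpolation error that an
# imresize-style upsampling of the small fit would introduce.)
Af = np.column_stack([np.ones(H*W), xx.ravel(), yy.ravel(),
                      (xx*yy).ravel(), (xx**2).ravel(), (yy**2).ravel()])
bg_full = (Af @ coef).reshape(H, W)

print(np.max(np.abs(bg_full - true_bg)) < 1.0)  # True: close to the truth
```

As the comment above says, for slowly varying backgrounds the subsample loses essentially nothing, and the fit runs on 1/64th of the pixels.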
Otherwise you'll have to just plug in 8*x and 8*y into your transform and come up with the new formula, which would be totally exact and not lose any info from the original image like would happen with subsampling.
As ImageAnalyst mentioned, I would like to plug 8*x and 8*y into the transform to have exactly the same alignment in my original images.
I have the transform Hcumulative, which I found using cvexEstStabilizationTform on the 256*256 images. Now, I would like to rescale the transform Hcumulative so that I can have the same alignment on the 2056*2056 original images.
I have added some more of my code below so that it is clearer.
while ii < length(images)
imgA = imgB;
imgAp = imgBp;
imgB = imresize(imread(cs{ii}),0.125);
imgB2 = imresize(imread(cs{ii}),1);
% Estimate transform from frame A to frame B, and fit as an s-R-t
H = cvexEstStabilizationTform(imgA,imgB);
HsRt = cvexTformToSRT(H);
Hcumulative = HsRt * Hcumulative;
imgBp = imwarp(imgB,affine2d(Hcumulative),'OutputView',imref2d(size(imgB)));
This is the part where I am stuck when I try it on the original image imgB2 (2048*2048):
imgBp2 = imwarp(imgB2,affine2d(Hcumulative),'OutputView',imref2d(size(imgB2)));
ii = ii+1;
end
The Hcumulative determined by cvexEstStabilizationTform on the resized images imgB (256*256) does not align the original images imgB2 (2056*2056) the same way.
This is the part where I am stuck when I try it on the original image imgB2 (2048*2048)
Stuck in what way? What isn't working? And earlier, you said the dimensions of the original image were 2056x2056, not 2048x2048.
In any case, why are you applying the transform to imgB2 when it was the target image of the registration, not the reference image? Shouldn't you be applying it to imgA? Or, alternatively, shouldn't you be registering from imgB to imgA instead of the other way around?
Sorry about the dimension typo.
I resized the image as it is faster this way. Now I would like to have the alignment back onto the original image as the resized image is too small.
The code works perfectly fine on the 256*256 images and the alignment is perfect.
The next step for me now is to align the original images 2048*2048 using the transform obtained from the 256*256 images.
The next step for me now is to align the original images 2048*2048 using the transform obtained from the 256*256 images.
Again, you haven't said what the difficulty is, so I just have to guess. My guess is that your Hcumulative has translation measured in the units of the 256x256 image. If you just multiply the translation parameters by 8, I'm betting that it will work fine on the 2048x2048 images.
The difficulty is that the 2048 * 2048 images are not properly aligned.
I tried to multiply by 8 as you suggested,
Hdown = [ 256/2048 0 0; 0 256/2048 0; 0 0 1];
Hup = [ 2048/256 0 0; 0 2048/256 0; 0 0 1];
Hcumulative = Hup * HsRt * Hcumulative * Hdown;
I have posted some examples here:
Correctly (small) aligned images: http://postimg.org/gallery/21o6bfv2/
Incorrectly (original) aligned images: http://postimg.org/gallery/26985h50/
So, my question is how to get the original aligned like the downsampled ones?
Thanks for your help
I can't really tell the difference between the small and the original. But if the small images are correctly aligned, why not just upsample them using imresize?
Because the quality of the upsampled images (imresize(img,8)) in this case is poor. I would rather not lose any info from the original image like would happen with subsampling.
OK. Well, the usage of imref2d in setting the reference coordinates is not clear to me from the documentation, and it isn't supported pre-R2013, so I can't tinker with it myself. I imagine the problem lies there, though.
I would just perform the warp the old-fashioned way, using griddedInterpolant or interp2.
When I answered yesterday, I didn't notice Matt J's comments, which are spot on. Ashvin, try simply upscaling just the translation parameters in Hcumulative. That should work.
Alternatively, you will need to describe what exactly is still not working. Try looking at your images. Does it look like the scale/rotation are correct? Does it look like translation is correct?
Try providing visualizations that would help people to understand the nature of misalignment you are seeing. For example:
help imshowpair
You should also try using the transformPointsForward/transformPointsInverse methods of the affine2d class to see if points map the way that you expect them to between your target and reference images. This is what I do when I'm stuck on a geometric transformation problem. It is often more helpful to think about point mapping than to look at the resampled images grids when you are trying to figure out what is going on.
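For readers on releases without affine2d (which, as noted above, isn't supported pre-R2013), the same point-mapping check can be done directly on the matrix. A small illustrative sketch, mirroring what transformPointsForward/transformPointsInverse do (function names here are made up for the example):

```python
import numpy as np

def forward(points, H):
    """Map Nx2 points through [x y 1] * H (affine2d's row-vector convention)."""
    P = np.column_stack([points, np.ones(len(points))])
    return (P @ H)[:, :2]

def inverse(points, H):
    """Inverse mapping, analogous to transformPointsInverse."""
    P = np.column_stack([points, np.ones(len(points))])
    return (P @ np.linalg.inv(H))[:, :2]

H = np.array([[1.0,  0.0, 0.0],
              [0.0,  1.0, 0.0],
              [3.0, -4.0, 1.0]])   # pure translation by (3, -4)

# Map the corners of a 256x256 image and check they land where expected.
corners = np.array([[0.0, 0.0], [255.0, 0.0], [0.0, 255.0], [255.0, 255.0]])
mapped = forward(corners, H)
print(mapped[0])                                  # [ 3. -4.]
print(np.allclose(inverse(mapped, H), corners))   # True
```

If the mapped corners don't land where you expect (e.g. translations coming out 8x too small at full resolution), that localizes the problem before you ever look at a resampled image.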
When I answered yesterday, I didn't notice Matt J's comments, which are spot on. Ashvin, try simply upscaling just the translation parameters in Hcumulative. That should work.
Earlier in this string of comments, Ashvin showed the following code. It should have had the effect of scaling the translation parameters, but apparently that didn't help. So, I am puzzled...
Hdown = [ 256/2048 0 0; 0 256/2048 0; 0 0 1];
Hup = [ 2048/256 0 0; 0 2048/256 0; 0 0 1];
Hcumulative = Hup * HsRt * Hcumulative * Hdown;
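One thing worth checking about that snippet: in affine2d's row-vector convention, the two possible orderings of the Hup/Hdown conjugation scale the translation in opposite directions, so it is easy to end up dividing the translation by 8 instead of multiplying it. A quick numeric probe (NumPy used only to illustrate the matrix algebra; the sample s-R-t values are made up):

```python
import numpy as np

k = 8.0
Hdown = np.diag([1/k, 1/k, 1.0])   # the 256/2048 scaling
Hup   = np.diag([k,   k,   1.0])   # the 2048/256 scaling

# An s-R-t matrix in row-vector ([x y 1] * H) convention,
# translation in H[2, 0:2].
H = np.array([[ 0.99,  0.02, 0.0],
              [-0.02,  0.99, 0.0],
              [-2.0,   5.0,  1.0]])

a = Hup  @ H @ Hdown   # translation ends up divided by k
b = Hdown @ H @ Hup    # translation ends up multiplied by k

print(a[2, :2])  # [-0.25   0.625]
print(b[2, :2])  # [-16.   40.]
```

In both orderings the rotation/scale block is unchanged; only the ordering that multiplies the translation by 8 (here Hdown * H * Hup) is consistent with the accepted answer's advice to upscale HsRt(3,1:2).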
Many thanks to everyone for your time and input, I have learned a lot from all your suggestions and input.
Best,

