Apply transform on an image at lower resolution onto that of higher resolution
Hello, I am estimating a transformation for video stabilization on frames at 0.125 of the original size. I would now like to apply the geometric transform back to the original, full-resolution image.
Hcumulative = eye(3);   % identity to start, as in the stabilization example
for i = 1:10
    % Estimate transform from frame A to frame B, and fit as an s-R-t
    H = cvexEstStabilizationTform(imgA_small, imgB_small);
    HsRt = cvexTformToSRT(H);
    Hcumulative = HsRt * Hcumulative;
    imgB_small_transform = imwarp(imgB_small, affine2d(Hcumulative), ...
        'OutputView', imref2d(size(imgB_small)));
    % img_B_original_size = ??
end
Any ideas how to achieve this, please? Many thanks,
Accepted Answer
More Answers (1)
There are dozens of image registration files on the File Exchange
I don't know if all of them require the images to be at the same resolution, but you can always imresize() the high-resolution image down to the same resolution as the low-resolution image, if needed. It shouldn't matter much for the low-parameter affine registration that you're trying to do.
15 Comments
Nitin
on 16 Jun 2013
It's not clear what you mean by "infer this transform" if you already have the transform.
You mean you want to apply the transform to the low-res image to align it with the high-res image? Applying the IMTRANSFORM command to the low-res image will do that. Optionally, you could then use IMRESIZE to upsample to the same size as the high-res image, if that's what you're trying to do.
Image Analyst
on 16 Jun 2013
I do what Matt J does. I use polyfitn() to fit a 2D polynomial to a subsampled version of my background image, because it's faster than doing it on the whole giant image. Then I apply it to the small image and use imresize to interpolate it up to the full size image. Theoretically it's not mathematically the same since imresize may not use the same interpolating function as your transform, but for me, for estimating slowly varying background illumination differences, it's good enough, and it's a lot faster than applying it to the full size image.
Otherwise you'll have to plug 8*x and 8*y into your transform and derive the new formula, which would be exact and wouldn't lose any information from the original image, as subsampling would.
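For what it's worth, the subsample-fit-upsample background idea sketched above might look something like this. This is a rough sketch only: polyfitn/polyvaln are from John D'Errico's File Exchange submission (calling conventions assumed from memory), bigImage stands in for your grayscale background image, and the 0.125 factor and polynomial order are arbitrary choices:

small = imresize(bigImage, 0.125);           % fit on a subsampled copy (faster)
[c, r] = meshgrid(1:size(small,2), 1:size(small,1));
p = polyfitn([c(:), r(:)], double(small(:)), 2);  % 2nd-order 2D polynomial fit
bgSmall = reshape(polyvaln(p, [c(:), r(:)]), size(small));
bgFull  = imresize(bgSmall, size(bigImage)); % interpolate back up to full size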
This is the part where I am stuck when I try it on the original image imgB2 (2048x2048)
Stuck in what way? What isn't working? And earlier, you said the dimensions of the original image were 2056x2056, not 2048x2048.
In any case, why are you applying the transform to imgB2 when it was the target image of the registration, not the reference image? Shouldn't you be applying it to imgA? Or, alternatively, shouldn't you be registering from imgB to imgA instead of the other way around?
The next step for me now is to align the original 2048x2048 images using the transform obtained from the 256x256 images.
Again, you haven't said what the difficulty is, so I just have to guess. My guess is that your Hcumulative has translation measured in the units of the 256x256 image. If you just multiply the translation parameters by 8, I'm betting that it will work fine on the 2048x2048 images.
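Something like this, say (a sketch in affine2d's row-vector convention, where the translation sits in the third row of the matrix; imgB2 and Hcumulative as in your code above):

scale = 2048/256;                       % = 8
Hbig = Hcumulative;
Hbig(3,1:2) = scale * Hbig(3,1:2);      % scale only the translation terms
imgB2_transform = imwarp(imgB2, affine2d(Hbig), ...
    'OutputView', imref2d(size(imgB2)));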
Matt J
on 16 Jun 2013
I can't really tell the difference between the small and the original. But if the small images are correctly aligned, why not just upsample them using imresize?
Nitin
on 17 Jun 2013
Matt J
on 17 Jun 2013
OK. Well, the usage of imref2d in setting the reference coordinates is not clear to me from the documentation, and it isn't supported pre-R2013, so I can't tinker with it myself. I imagine the problem lies there, though.
I would just perform the warp the old-fashioned way, using griddedInterpolant or interp2.
Alex Taylor
on 18 Jun 2013
Edited: Alex Taylor
on 18 Jun 2013
When I answered yesterday, I didn't notice Matt J's comments, which are spot on. Ashvin, try simply upscaling just the translation parameters in Hcumulative. That should work.
Alternatively, you will need to describe what exactly is still not working. Try looking at your images. Does it look like the scale/rotation are correct? Does it look like translation is correct?
Try providing visualizations that would help people to understand the nature of misalignment you are seeing. For example:
help imshowpair
You should also try using the transformPointsForward/transformPointsInverse methods of the affine2d class to see if points map the way you expect them to between your target and reference images. This is what I do when I'm stuck on a geometric transformation problem. It is often more helpful to think about point mapping than to look at the resampled image grids when you are trying to figure out what is going on.
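For example (a sketch; the landmark coordinates here are made up, and tformSmall stands in for affine2d(Hcumulative) from the code above):

tformSmall = affine2d(Hcumulative);
% Pick a few landmarks you can recognize in both frames (hypothetical coords):
x = [10; 128; 200];
y = [10; 128; 60];
[u, v] = transformPointsForward(tformSmall, x, y);
% Do (u,v) land where you expect on the target frame? To sanity-check the
% round trip, the inverse mapping should recover the original points:
[xb, yb] = transformPointsInverse(tformSmall, u, v);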
Matt J
on 18 Jun 2013
When I answered yesterday, I didn't notice Matt J's comments, which are spot on. Ashvin, try simply upscaling just the translation parameters in Hcumulative. That should work.
Earlier in this string of comments, Ashvin showed the following code. It should have had the effect of scaling the translation parameters, but apparently that didn't help. So, I am puzzled...
Hdown = [ 256/2048 0 0; 0 256/2048 0; 0 0 1];
Hup = [ 2048/256 0 0; 0 2048/256 0; 0 0 1];
Hcumulative = Hup * HsRt * Hcumulative * Hdown;
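One thing that might be worth double-checking: in affine2d's row-vector convention (points multiply the matrix from the left, p_new = p_old * H), the conjugation order that scales the translation up is Hdown * H * Hup; the reverse order scales it down instead. A quick numeric check with a made-up s-R-t matrix:

% Made-up s-R-t transform, row-vector convention (translation in row 3):
s = 1.02; th = 0.01; tx = 3.5; ty = -1.2;
H = [ s*cos(th)  s*sin(th)  0;
     -s*sin(th)  s*cos(th)  0;
      tx         ty         1];
Hdown = [1/8 0 0; 0 1/8 0; 0 0 1];   % 2048 -> 256
Hup   = [8   0 0; 0 8   0; 0 0 1];   % 256 -> 2048
Hbig  = Hdown * H * Hup;             % note the order
disp(Hbig(3,1:2))                    % translation becomes 8*[tx ty];
                                     % the s-R block is unchanged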
Nitin
on 21 Jun 2013