Hello,
I am currently doing an internship and, to start, I have to register two Sentinel-2 satellite images. However, as a beginner with OTB, I have some difficulty understanding how FineRegistration works. What do the different parameters to fill in correspond to, and what are the disparity map and the warp image? How should I interpret them? Is the warp image the one I need to keep for the next applications on my images? How do I choose the Exploration Radius X, Y and Metric Radius X, Y parameters? Is FineRegistration really what I need for the registration of my images? Please find attached a screenshot of the window I filled in to do the FineRegistration.
Thank you for the help you can give me and sorry for my poor English.
From a reference image and a secondary image, it computes a disparity image (or disparity map): this is an image that contains, at each pixel, the offset to apply to this position in the secondary image in order to get to the corresponding point in the reference image, i.e. G(x_{out}, y_{out}) = (x_{in} - x_{out}, y_{in} - y_{out}).
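To make the formula concrete (the numbers below are made up, not taken from your data): the matching position is simply (x_{in}, y_{in}) = (x_{out}, y_{out}) + G(x_{out}, y_{out}), so if the disparity map contains G(100, 200) = (1.5, -0.5), the pixel at (100, 200) corresponds to the point (101.5, 199.5) in the other image.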
For each pixel p1 of the secondary image, the algorithm will search in the reference image for the pixel p2 in the neighborhood of p1 that is the “closest” to p1. The size of this (rectangular) neighborhood is determined by the erx and ery parameters (respectively in x and y).
To determine how “close” two pixels are, a metric is computed on rectangular patches around the two pixels (the default metric is the cross-correlation; this is a parameter of the application). The size of these patches is determined by the parameters mrx and mry (size in x and y). The metric is computed for all pixels in the neighborhood, and the corresponding shift is then deduced by finding the extremum of the metric. Note that these parameters are radii: for example, erx = ery = 5 corresponds to an 11 × 11 pixel search window, and mrx = mry = 3 to 7 × 7 pixel patches for the metric.
Optionally, the image given by the parameter w (this should be the same image as the secondary image) will be resampled using the disparity image, producing the registered (warped) image.
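For reference, a minimal command-line call could look like the sketch below. The file names are placeholders and the radii are only example values; please double-check the exact parameter keys and defaults for your OTB version with otbcli_FineRegistration -help.

```
# Sketch of a FineRegistration call (file names and radii are placeholders):
#   -ref       reference image
#   -sec       secondary image to register onto the reference
#   -out       output disparity map (deformation field)
#   -erx/-ery  exploration radius in x/y (how far to search)
#   -mrx/-mry  metric radius in x/y (patch size used for the metric)
#   -w         image to warp (here the secondary image itself)
#   -wo        output warped, i.e. registered, image
otbcli_FineRegistration -ref reference_20180819.tif \
                        -sec secondary_20180908.tif \
                        -out disparity_map.tif \
                        -erx 5 -ery 5 \
                        -mrx 3 -mry 3 \
                        -w secondary_20180908.tif \
                        -wo registered_20180908.tif
```

The image written by -wo corresponds to the “registered image” mentioned above; the disparity map in -out is mostly useful to inspect the estimated offsets.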
This application is a good choice for image registration if the shift between your images is small (less than a few pixels), and I think this is the case for your Sentinel-2 data. However, for larger shifts, you might want to use methods based on homologous point extraction; see the associated recipe in the Cookbook.
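If you end up going that way, the tie-point extraction step is available as an application; a minimal sketch could look like the following (the parameter keys and output file name should be checked against otbcli_HomologousPointsExtraction -help for your OTB version):

```
# Sketch only: extract homologous (tie) points between the two dates,
# as described in the residual registration recipe of the Cookbook.
otbcli_HomologousPointsExtraction -in1 reference_20180819.tif \
                                  -in2 secondary_20180908.tif \
                                  -algorithm surf \
                                  -out homologous_points.txt
```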
Thank you very much for your answer, which sheds a lot of light on the subject.
Unfortunately, I have the impression that FineRegistration does not work. When I open my images in ENVI and link them to compare the reference image with the image after registration, the pixel offset is still present, and I don’t understand why. My reference image is from 19/08/2018 and includes bands B2, B3, B4 and B8 (at 10 m resolution), and my second image is from 08/09/2018 with the same bands at the same resolution. I put my second image (08/09/2018) in the w parameter so that it would be shifted onto the first one (19/08/2018). I tried different values for erx, ery, mrx and mry, but my second image does not seem to have been adjusted at all. I also tested with images containing only one band. Do you have any idea what is wrong and where the problem may come from?
In addition, as far as Residual Registration is concerned, I saw that a .geom file is needed, but I don’t have such a file and I don’t know how to obtain it.
I tried otbcli_ReadImageInfo as well as otbgui ReadImageInfo, but neither of them gives me a .geom file as output.
I even tried otbgui GenerateRPCSensorModel to get this .geom file, but I’m not sure it is the right way to obtain it, because the orthorectification I do at the end of my registration does not work.
Unless it comes from the format of my images? Should only .TIF images be used, or are ENVI images (.dat or .HDR) also read by OTB? Could you help me with how to carry out this Residual Registration?
Thank you in advance for the extra help you can give me.
(I created a new account to answer because the first one was blocked until an administrator activated it, although I might have the same problem with this one)
I’m sorry, I didn’t see that a post was blocked pending validation. On top of that, I don’t understand why this post was blocked. I will investigate…
When you say the pixel offset is still there, do you mean that the application is not doing anything, or that the offset is not correctly estimated? Do you have an idea of the size of the offset? If you are using S2 orthorectified data, I think it should be less than 2 pixels (20 m); is that the case with your data?
I tried to do Residual Registration on two S2 L2A images of the same zone acquired on two different dates. I also used GenerateRPCSensorModel to create a model. It worked, but the results were pretty bad. On second thought, I’m not sure feature-based registration (with homologous points) is possible today without using the C++ API, even if the recipe says it is… Does anybody know more about this?