Very slow HooverCompareSegmentation

I’m using (or rather, trying to use) HooverCompareSegmentation to evaluate a given “labelled” image against some (24) ground truth polygons, and it is very slow. There is no way to run it at full resolution. I have coarsened the image to a ridiculous 313 x 405, and even in that case it takes about 4 minutes.
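For reference, the coarsening can be reproduced with something along these lines (gdal_translate is just one option; the file names are placeholders, not the actual ones I used):

# nearest-neighbour resampling keeps the label values intact while shrinking the image
gdal_translate -r nearest -outsize 405 313 full_resolution_labels.tif coarse_labels.tif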
I observe that the RAM requirement is quite large, but surprisingly this application has no -ram parameter (I have 40 GB of RAM and a fast SSD disk). Wouldn’t a -ram parameter let the user speed up the process a lot? Also, in another thread (Hoover metrics: some advices?), it is said: “if you have N segments in first and M in latter, cost is O(NxM)”. In the case of ground truth polygons, one of the two counts would be very small, so it should not be such a big deal.
This is what I get (Linux, 11th Gen Intel® Core™ i7-1165G7 @ 2.80GHz × 8, 40 GB RAM, SSD disk). Is this normal, or might I have some other problem?

$ otbcli_HooverCompareSegmentation -ingt rcTraining.tif -inms cROI_7_s10_r25_S5_R13M25.tif -th 0.75 -outgt cROI_7_s10_r25_S5_R13M25_colored_GT.tif uint8 -outms cROI_7_s10_r25_S5_R13M25_colored_seg.tif uint8
2021-06-29 11:24:48 (INFO) HooverCompareSegmentation: Default RAM limit for OTB is 256 MB
2021-06-29 11:24:48 (INFO) HooverCompareSegmentation: GDAL maximum cache size is 1995 MB
2021-06-29 11:24:48 (INFO) HooverCompareSegmentation: OTB will use at most 8 threads
2021-06-29 11:28:36 (INFO): Estimated memory for full processing: 41.1034MB (avail.: 256 MB), optimal image partitioning: 1 blocks
2021-06-29 11:28:36 (INFO): File cROI_7_s10_r25_S5_R13M25_colored_GT.tif will be written in 1 blocks of 405x313 pixels
2021-06-29 11:28:36 (INFO): Estimated memory for full processing: 34.9734MB (avail.: 256 MB), optimal image partitioning: 1 blocks
2021-06-29 11:28:36 (INFO): File cROI_7_s10_r25_S5_R13M25_colored_seg.tif will be written in 1 blocks of 405x313 pixels
2021-06-29 11:28:36 (INFO): Estimated memory for full processing: 63.3809MB (avail.: 256 MB), optimal image partitioning: 1 blocks
Writing 2 output images ...: 100% [**************************************************] (0s)

Hello,

Looking at the code, the application loads both inputs into memory to compute a confusion matrix from the segments. This is probably why the RAM parameter is not defined in this application. In your case, RAM does not seem to be the issue, judging from the logs. This should be added to the documentation of the application.

I don’t know the Hoover algorithm, but according to the other post you mention it seems very computationally intensive, and the processing time will probably be very high if one of the images is over-segmented.
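If you want a rough idea of how large N and M actually are, the maximum label value of each input gives an estimate (assuming the labels are consecutive integers, which is usually the case for labelled segmentation images):

# the computed maximum is approximately the number of segments in each image
gdalinfo -mm rcTraining.tif
gdalinfo -mm cROI_7_s10_r25_S5_R13M25.tif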

Cédric

If “the application loads both inputs into memory to compute a confusion matrix”, then having more RAM available should make a difference, as the log reports that it is using a limit of only 256 MB while it requires 41103 MB. Therefore, I cannot understand why making more memory (> 256 MB) available to the application through a -ram parameter would not be useful.
In any case, this application is hardly usable beyond the demo data described in the documentation, and users should keep in mind that the OTB method to evaluate segmentation results is very limited in practice. I do not know whether a more efficient implementation is possible or the current computational cost is intrinsic to the method itself.

as the log reports that it is using a limit of only 256 MB while it requires 41103 MB

How do you know the application requires 41103 MB? Did you monitor the RAM usage of the application? If the application requires 40 GB of RAM on a 313x405 image, there is definitely something wrong in the application, even if there is one segment per pixel.

Also, OTB only takes images into account when computing memory usage; the ram parameter in (other) applications is only used to drive the image tiling process. It does not take other C++ objects into account, and in this application I suspect they are non-negligible.
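If you want to measure the real peak memory of a run, GNU time reports the maximum resident set size; for example (a sketch reusing the command from your first message):

# "Maximum resident set size" in the verbose output is the actual peak RAM used
/usr/bin/time -v otbcli_HooverCompareSegmentation -ingt rcTraining.tif -inms cROI_7_s10_r25_S5_R13M25.tif -th 0.75 -outgt cROI_7_s10_r25_S5_R13M25_colored_GT.tif uint8 -outms cROI_7_s10_r25_S5_R13M25_colored_seg.tif uint8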

How do you know the application requires 41103 MB?

From the output produced by the application; I reproduce it here from the very first message:

2021-06-29 11:28:36 (INFO): Estimated memory for full processing: 41.1034MB (avail.: 256 MB), optimal image partitioning: 1 blocks

OTB only takes images into account when computing memory usage

The message issued by the application unequivocally states "Estimated memory for full processing".

the ram parameter in (other) applications is only used to drive the image tiling process

Is there no way to modify the 256 MB limit for processing?

2021-06-29 11:24:48 (INFO) HooverCompareSegmentation: Default RAM limit for OTB is 256 MB
2021-06-29 11:28:36 (INFO): Estimated memory for full processing: 41.1034MB (avail.: 256 MB), optimal image partitioning: 1 blocks

From this log, the estimated RAM is around 41 MB.

The message issued by the application unequivocally states "Estimated memory for full processing".

The “full” refers to the full images. It should be interpreted as “the memory used if the input images were processed at once rather than divided into smaller blocks”.

Is there no way to modify the 256 MB limit for processing?

The environment variable OTB_MAX_RAM_HINT can be set to modify the default RAM. But it will not be used in the first step of the processing, which requires the full input images to be loaded into memory.

From this log, the estimated RAM is around 41 MB.

My mistake here.

The “full” refers to the full images. It should be interpreted as “the memory used if the input images were processed at once rather than divided into smaller blocks”.

No way anybody can understand “full processing” in the sense you mention.

The environment variable OTB_MAX_RAM_HINT can be set to modify the default RAM.

Would that improve the performance of the application? If so, please tell us where we can modify its value.

No way anybody can understand “full processing” in the sense you mention.

I agree with you on this point; it would be nice to change the documentation and the logs associated with the ram parameter in the next OTB version. Documenting what this parameter does and does not do is important for OTB users.

Would that improve the performance of the application?

I don’t think so: the input images are already fully loaded into memory, so you should not expect noticeable improvements from increasing the RAM.

where we can modify its value.

Like any other environment variable, for example:

`OTB_MAX_RAM_HINT=1000 otbcli_HooverCompareSegmentation -ingt rcTraining.tif [...]`
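Or, to set it once for the whole session (assuming a bash-like shell):

`export OTB_MAX_RAM_HINT=1000`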

You can find on this page the list of environment variables affecting OTB applications.

Cédric