Available RAM (mb)

I’ve been accessing the tools in OTB through mapla.bat

In many of the tools there is a tick box for ‘Available RAM (mb)’ with a default value of 256

Can someone explain how to use this? I assume increasing the RAM will increase processing speeds?

However, if I just leave the box blank, does the computer process at maximum speed anyway?


Images used in remote sensing are often very large, and their size can exceed the memory available to the program. If such an image were loaded entirely into memory, OTB would crash.

This is why images are processed in blocks of pixels (in the library this is called “streaming”). OTB estimates the in-memory size of the input and of all the temporary outputs that will be computed. This value is then compared to the “Available RAM” parameter to determine how many blocks should be used.

You can see this in the application logs, e.g.:

otbcli_EdgeExtraction -in S2A.tif  -out /tmp/out.tif -ram 512
2020-03-31 17:59:09 (INFO) EdgeExtraction: Default RAM limit for OTB is 256 MB
2020-03-31 17:59:09 (INFO) EdgeExtraction: GDAL maximum cache size is 794 MB
2020-03-31 17:59:09 (INFO) EdgeExtraction: OTB will use at most 12 threads
2020-03-31 17:59:09 (INFO): Estimated memory for full processing: 4691.88MB (avail.: 512 MB), optimal image partitioning: 10 blocks
2020-03-31 17:59:09 (INFO): File /tmp/out.tif will be written in 11 blocks of 10980x999 pixels
Writing /tmp/out.tif...: 100% [**************************************************] (8s)

Here the estimated memory for the application is 4691.88 MB and the available RAM is set to 512 MB, so the input is divided into 11 blocks.
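From the log, the "optimal image partitioning" seems to be ceil(estimated memory / available RAM) = ceil(4691.88 / 512) = 10 (the file then ends up written in 11 strips, presumably after the strip height is adjusted to the image size). A minimal shell sketch of that arithmetic, using the two numbers from the log above:

```shell
# Inferred from the log: number of partitions ~= ceil(estimated_memory / available_ram).
estimated=4691.88   # "Estimated memory for full processing" (MB)
ram=512             # value passed with -ram (MB)
blocks=$(awk -v e="$estimated" -v r="$ram" 'BEGIN { b = e / r; if (b > int(b)) b = int(b) + 1; print b }')
echo "optimal partitioning: $blocks blocks"   # prints 10, matching the log
```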

Increasing the RAM parameter should decrease processing time, because it reduces the number of blocks and therefore the number of I/O operations.

Also note that some file formats, like PNG, don’t support streaming and must be read or written in one block. In this case OTB will ignore the RAM parameter.

Hope that helps,


Great I think that makes sense.

My laptop has 16 GB of RAM. What would you recommend setting the available RAM to for the fastest processing speed? I won’t be doing anything else with the laptop while it is processing.

I also have another question and thought I’d write it here instead of creating a new thread.

In ‘TrainImagesClassifier’ there is an option for elevation management and a DEM directory.

What exactly is this for? If I were to add a high-resolution DEM, would the classifier take the elevation values into account during the classification?


Maybe you can set 2 GB of RAM for your processing. I don’t think you need more, because OTB will stream data very efficiently.
But you can also set other variables to optimize processing. See here: https://www.orfeo-toolbox.org/CookBook/EnvironmentVariables.html
-> I think you can set ITK_NUMBER_OF_THREADS to the number of cores of your CPU (or half that number).
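For example, something like this before launching the applications (the values are suggestions for a 16 GB / 8-core machine, not defaults; the CookBook page linked above also documents OTB_MAX_RAM_HINT, and I believe the full name of the threads variable there is ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS):

```shell
# Suggested values for a 16 GB laptop with 8 cores -- adjust to your machine.
export OTB_MAX_RAM_HINT=2048                    # default -ram value for all OTB applications (MB)
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=8   # cap the number of processing threads
# Then run the applications as usual, e.g.:
# otbcli_EdgeExtraction -in S2A.tif -out /tmp/out.tif
```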

Maybe @cedric.traizet will add to this!

For your other question: a lot of OTB applications have “elevation management” parameters. These allow OTB to perform projections on the fly, but in most cases you don’t need to use them.
To take the elevation values into account, you should first process your DEM file to resample it to your image resolution (i.e., using the Superimpose application), and then give that file to the classifier.
In some cases using elevation can be very interesting, but I would advise you to do a first training with your input images only.
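Concretely, the steps above might look like this (the file names are hypothetical placeholders; Superimpose, ConcatenateImages and TrainImagesClassifier are real OTB applications, but take the exact invocation as a sketch to adapt, not a tested recipe):

```shell
# 1) Resample the DEM onto the grid of the reference input image.
otbcli_Superimpose -inr image.tif -inm dem.tif -out dem_resampled.tif
# 2) Stack the resampled DEM onto the image as an extra band.
otbcli_ConcatenateImages -il image.tif dem_resampled.tif -out stacked.tif
# 3) Train on the stacked image, using your training vector data.
otbcli_TrainImagesClassifier -io.il stacked.tif -io.vd training.shp \
                             -classifier rf -io.out model.rf
```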

Hope that helps,


Great, thanks for your help Yannick!

Dear @yannick and @wkcmark, I found this thread because I’m also looking for information about the Elevation Management parameters.
Is it only for on-the-fly projections? I use it as an input in TrainImagesClassifier. All my inputs, including the DEM, are already projected. What is the function of the DEM in this process? Will it be part of the training process? I notice improvements in my classification results when using the DEM as the elevation-management parameter.

Thank you for your attention and nice day.

Hi @DavideFornacca,

Yes, OTB can perform orthorectification on the fly, but if your inputs are already georeferenced, there’s no use in setting this parameter. Anyway, I think it’s better to project all your inputs into the same geometry before calling TrainImagesClassifier.
In your case, I can’t explain why you get better results when using it. Since the classifier (a Random Forest, for instance) initializes some variables randomly, you can observe slightly different results from one run to the next.

If you want to use the elevation values as an input to your training process, you should resample the DEM so it fits your input image(s). To do that, you can use the Superimpose application, which will resample and crop your DEM.
In certain cases this could be worth trying, but at first I would advise you to train only on your input images (I’ll have to find your first thread to better understand what kind of classification you intend to do!).

Hope that helps


Thanks so much for the clarification. I guess in my case it was just the random forest performance variation.
Best Regards