Create mask problem (urgent)

Hi, I am using the mask image and the original image generated by ManageNoData in the OTB toolbox as inputs to ImageClassifier, but I still cannot keep the background from being classified. What could be the reason? (In ManageNoData I set the inside value to 1 and the outside value to 0.)

Hello,

Have you checked that the output of ManageNoData is correct?

Also note that background pixels are classified as 0 in ImageClassifier (this value can be changed). Maybe 0 is one of your classes and that’s the problem?

Cédric

Thank you for your answers. I still have one point of confusion I would like to ask about: where can I get the overall accuracy value after completing a classification in the OTB toolbox?

I think you are looking for the ComputeConfusionMatrix application: it computes a confusion matrix from a classification result and a reference (raster or vector), and the overall accuracy is printed in the logs.

Note that you should also set the no-data value in this application if you set one in ImageClassifier.
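For reference, the overall accuracy (and the kappa statistic, which ComputeConfusionMatrix also reports) can be derived from the confusion matrix itself. Here is a minimal Python sketch of those formulas with hypothetical counts, not OTB code:

```python
# Overall accuracy and Cohen's kappa computed from a confusion matrix,
# the same statistics ComputeConfusionMatrix prints in its logs.
# Convention here: rows = reference classes, columns = predicted classes.

def accuracy_and_kappa(cm):
    n = sum(sum(row) for row in cm)                   # total number of samples
    diag = sum(cm[i][i] for i in range(len(cm)))      # correctly classified samples
    oa = diag / n                                     # overall accuracy
    # Expected agreement by chance, from the row/column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))) / n**2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

# Hypothetical 3-class confusion matrix
cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
oa, kappa = accuracy_and_kappa(cm)
print(oa, kappa)
```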

Cédric

Thank you very much for taking time out of your busy schedule to help me. I have also encountered the following problems while using the OTB toolbox:
Question 1. I don’t want the background to take part in the subsequent classification. My image has a black background, so the background value is 0 and the image values range from 0 to 255. When using ManageNoData to generate a mask, how should I set the inside and outside values in the “Build a no-data mask” mode, or “The new no-data value” in the “Change the no-data value” mode?
Question 2. When using the TrainImagesClassifier application for artificial neural network classification, how should the “Number of neurons in each intermediate layer” parameter be set? I have read the relevant documentation but did not understand it; could you explain it in more detail?
Question 3. With the same application I encountered the following error: “(FATAL) TrainImagesClassifier: Caught std::exception during application execution: bad lexical cast: source type value could not be interpreted as target”. Could this be because the “Number of neurons in each intermediate layer” value from question 2 is incorrect?

Dear @ipodsky,

To answer your question 1, let’s have a look at the application’s parameters:

Parameters: 
    -in                    <string>         Input image  (mandatory)
    -out                   <string> [pixel] Output Image  [pixel=uint8/uint16/int16/uint32/int32/float/double/cint16/cint32/cfloat/cdouble] (default value is float) (mandatory)
    -usenan                <boolean>        Consider NaN as no-data  (mandatory, default value is false)
    -mode                  <string>         No-data handling mode [buildmask/changevalue/apply] (mandatory, default value is buildmask)
    -mode.buildmask.inv    <float>          Inside Value  (mandatory, default value is 1)
    -mode.buildmask.outv   <float>          Outside Value  (mandatory, default value is 0)
    -mode.changevalue.newv <float>          The new no-data value  (mandatory, default value is 0)
    -mode.apply.mask       <string>         Mask image  (mandatory)
    -mode.apply.ndval      <float>          Nodata value used  (mandatory, default value is 0)
    -ram                   <int32>          Available RAM (MB)  (optional, off by default, default value is 256)
    -progress              <boolean>        Report progress 
    -help                  <string list>    Display long help (empty list), or help for given parameters keys

We can see that the inside and outside values are set with -mode.buildmask.inv and -mode.buildmask.outv respectively. For example, I could run the application with this command:

otbcli_ManageNoData -in input_image.tiff -out output_image.tiff -mode buildmask -mode.buildmask.inv 42. -mode.buildmask.outv 3.14

The other parameter you are looking for is -mode.changevalue.newv, which provides the new no-data value. For example:

otbcli_ManageNoData -in input_image.tiff -out output_image.tiff -mode changevalue -mode.changevalue.newv 1.618

About question 2, you have to use the parameter -classifier.ann.sizes to specify the size (the number of neurons) of each intermediate layer of your network. For example, if my network has 3 intermediate layers with 15, 10 and 5 neurons respectively:

otbcli_TrainImagesClassifier -io.il input_image1.tif input_image2.tif -io.vd input_vector1 input_vector2 -io.out output_model.txt -classifier ann -classifier.ann.sizes 15 10 5

The error you describe in question 3 happens when a parameter value is given in the wrong format (for example, a string where an integer is expected). Your guess is probably right: try again with a correct -classifier.ann.sizes and the error should disappear.
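To illustrate what the error means, here is a rough Python analogy (not OTB code): each layer size passed to -classifier.ann.sizes must parse as an integer, and the "bad lexical cast" failure is the equivalent of the parse below raising an exception.

```python
# Rough analogy of the "bad lexical cast" error: each value given to
# -classifier.ann.sizes must be interpretable as an integer.

def parse_layer_sizes(values):
    sizes = []
    for v in values:
        try:
            sizes.append(int(v))
        except ValueError:
            # This mirrors the bad_lexical_cast thrown inside the application
            raise ValueError(f"could not interpret {v!r} as an integer")
    return sizes

print(parse_layer_sizes(["15", "10", "5"]))   # a valid -classifier.ann.sizes
```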

Best regards.

Dear Dr. Julien,

Thank you for your patient answers. I still have a few questions for you.

  1. When using ManageNoData to generate a mask image, what is the specific difference between the “Build a no-data mask” mode and the “Change the no-data value” mode? (Sometimes the mask images generated by these two modes are identical.)

  2. How can the mask images generated by those two modes be used in the ImageClassifier tool to keep the background out of the classification? (The background value of the image is 0, and one of the class IDs used in the classification is also 0.)

Best regards.

Dear @ipodsky,

The ManageNoData application has 3 modes:

  1. The buildmask mode is useful when you want to create a no-data mask.
    Given the input image and its no-data value, it generates a new image whose pixel values are:
  • the outside value if the pixel at the same position in the input image equals the no-data value,
  • the inside value otherwise.
  2. The changevalue mode is useful if you want to change the no-data value of your input image.
    The output image is the same as the input image, except that pixels equal to the no-data value are set to the new no-data value.

The output of those two modes should not be the same (except in some corner cases).

  3. The apply mode is useful when you want to apply a mask to an image.
    The output image is the same as the input image, except for the masked pixels: those at the same positions as the mask pixels whose value equals mask_nodata. The masked pixels are set to the no-data value.

If you want to use a mask in the ImageClassifier application, you should generate a mask with the ManageNoData application using the buildmask mode. You can then give the mask to the classifier with the parameter -mask. The pixels marked as nodata by the mask won’t be taken into account for the classification.

Dear Dr. Julien,

Thank you very much for your meticulous help. While continuing to use the OTB toolbox I have run into the following questions, and would appreciate your help:

  1. After classification with the ‘KMeansClassification’ clustering application, can kappa and the classification accuracy be output directly? If not, do I need the ‘ComputeConfusionMatrix’ tool to generate them? If I use ‘ComputeConfusionMatrix’, do I still need a ground-truth map or shapefile? And if the image has no ground-truth map or shapefile, how do I obtain the overall accuracy and kappa value?
  2. How can spectral-based classification of remote sensing images be implemented with the relevant applications in the OTB toolbox? What are the specific steps?

Best regards.
ipodsky
