How to use PatchesSelection and PatchesExtraction applications in OTBTF

Could you point me to any tutorials or documentation that would help me better understand how PatchesSelection and PatchesExtraction are to be used to extract the ground truth from the images and create patches?

Here’s the documentation provided: Sampling - OTBTF

Hi srikar,

If you want, I've made a tutorial that explains how to use OTBTF with Docker and how to do a land use classification with a deep learning method. At the 20-minute mark I explain how to use PatchesExtraction. I'm sorry, I made the tutorial in French, but you can turn on the automatic subtitles, which work relatively well.
The link : Tutoriel : Classification d'occupation du sol deep learning avec Docker et OrfeoToolbox Tensorflow - YouTube

Best regards,

Adrien


That's great, Adrien. I will follow along with the tutorial and report back with an update. Thank you so much for making it!

Srikar

Hey Adrien,

The model that you build is in TensorFlow 1, right? Are otbcli_TensorflowModelTrain and otbcli_TensorflowModelServe made to work only with the TensorFlow 1 model paradigm (placeholders, etc.)? Our team has worked on creating patches and label files using PatchesExtraction, but we are planning to build the model in v2 and to use OTBTF accordingly. Have you tried building the model in TF v2? Do you have any pointers on how to move forward with OTBTF using TF 2?

Hello guys,

Like I mentioned in another post, there is a tutorial in the OTBTF doc explaining how to build/train a model with TF v2 (Keras). There are only benefits to this: you do everything from Python, the code is cleaner and simpler, and you get native distributed training as a bonus. You can then use otbcli_TensorflowModelServe with the SavedModel as always (from Python if you prefer); it works with both TF v1 and TF v2 models.

Rémi

Hey Remi,

Thank you for your reply. Our team has followed the prescribed tutorial, but we are faced with a ValueError while training the model. We have encountered a compatibility issue between the output of the model and the target (one-hot encoded with dataset_preprocessing_fn).

ValueError: Shapes (8, 1, 1, 20) and (8, 64, 64, 20) are incompatible

FYI, we have 20 classes and an 8-band satellite image. First, we created the patches images and corresponding label files using PatchesExtraction. We then successfully created the TFRecords from the patches images as prescribed in the tutorial, using DatasetFromPatchesImages and tf_dataset.to_tfrecords, and divided them into train, test, and validation sets. We then built the exact same model as given. But when we try to train it, a ValueError is thrown saying the output of the model (8, 64, 64, 20) is not compatible with the target shape (8, 1, 1, 20). Our patches are of size 64x64; we tried changing the patch size to 32, 16, 1, etc. No matter the patch size, we get a similar ValueError.

We are not sure whether the prescribed model is missing something (maybe another layer), so that it is not producing the required output, or whether we have to adjust the one-hot encoding preprocessing step of the target. Please guide us through this blocker. Feel free to ask for more details if needed.

Below is the training script we have used and the full error:


import argparse
from pathlib import Path
import tensorflow as tf
import os
from otbtf.model import ModelBase
from otbtf import DatasetFromPatchesImages, TFRecords

#---Model-------------------------------------------------------------------------------
"""
Implementation of a small U-Net like model
"""

# Number of classes estimated by the model
N_CLASSES = 20

# Name of the input in the `FCNNModel` instance, also name of the input node
# in the SavedModel
INPUT_NAME = "input_xs"

# Name of the output in the `FCNNModel` instance
TARGET_NAME = "predictions"

# Name (prefix) of the output node in the SavedModel
OUTPUT_SOFTMAX_NAME = "predictions_softmax_tensor"


class FCNNModel(ModelBase):
    """
    A Simple Fully Convolutional U-Net like model
    """

    def normalize_inputs(self, inputs: dict):
        """
        Inherits from `ModelBase`

        The model will use this function internally to normalize its inputs,
        before applying `get_outputs()` that actually builds the operations
        graph (convolutions, etc). This function will hence work at training
        time and inference time.

        In this example, we assume that we have an input 12 bits multispectral
        image with values ranging from [0, 10000], that we process using a
        simple stretch to roughly match the [0, 1] range.

        Params:
            inputs: dict of inputs

        Returns:
            dict of normalized inputs, ready to be used from `get_outputs()`
        """
        return {INPUT_NAME: tf.cast(inputs[INPUT_NAME], tf.float32) * 0.0001}

    def get_outputs(self, normalized_inputs: dict) -> dict:
        """
        Inherits from `ModelBase`

        This small model produces an output which has the same size and
        physical spacing as the input: for instance, it generates a
        [64 x 64 x N_CLASSES] output for a [64 x 64 x <nb channels>] input.

        Params:
            normalized_inputs: dict of normalized inputs

        Returns:
            dict of model outputs
        """

        norm_inp = normalized_inputs[INPUT_NAME]

        def _conv(inp, depth, name):
            conv_op = tf.keras.layers.Conv2D(
                filters=depth,
                kernel_size=3,
                strides=2,
                activation="relu",
                padding="same",
                name=name
            )
            return conv_op(inp)

        def _tconv(inp, depth, name, activation="relu"):
            tconv_op = tf.keras.layers.Conv2DTranspose(
                filters=depth,
                kernel_size=3,
                strides=2,
                activation=activation,
                padding="same",
                name=name
            )
            return tconv_op(inp)

        out_conv1 = _conv(norm_inp, 16, "conv1")
        out_conv2 = _conv(out_conv1, 32, "conv2")
        out_conv3 = _conv(out_conv2, 64, "conv3")
        out_conv4 = _conv(out_conv3, 64, "conv4")
        out_tconv1 = _tconv(out_conv4, 64, "tconv1") + out_conv3
        out_tconv2 = _tconv(out_tconv1, 32, "tconv2") + out_conv2
        out_tconv3 = _tconv(out_tconv2, 16, "tconv3") + out_conv1
        out_tconv4 = _tconv(out_tconv3, N_CLASSES, "classifier", None)

        # Generally it is a good thing to name the final layers of the network
        # (i.e. the layers of which outputs are returned from
        # `MyModel.get_output()`). Indeed this enables to retrieve them for
        # inference time, using their name. In case your forgot to name the
        # last layers, it is still possible to look at the model outputs using
        # the `saved_model_cli show --dir /path/to/your/savedmodel --all`
        # command.
        #
        # Do not confuse **the name of the output layers** (i.e. the "name"
        # property of the tf.keras.layer that is used to generate an output
        # tensor) and **the key of the output tensor**, in the dict returned
        # from `MyModel.get_output()`. They are two identifiers with a
        # different purpose:
        #  - the output layer name is used only at inference time, to identify
        #    the output tensor from which generate the output image,
        #  - the output tensor key identifies the output tensors, mainly to
        #    fit the targets to model outputs during training process, but it
        #    can also be used to access the tensors as tf/keras objects, for
        #    instance to display previews images in TensorBoard.
        softmax_op = tf.keras.layers.Softmax(name=OUTPUT_SOFTMAX_NAME)
        predictions = softmax_op(out_tconv4)

        return {TARGET_NAME: predictions}


def dataset_preprocessing_fn(examples: dict):
    """
    Preprocessing function for the training dataset.
    This function is only used at training time, to put the data in the
    expected format for the training step.
    DO NOT USE THIS FUNCTION TO NORMALIZE THE INPUTS ! (see
    `otbtf.ModelBase.normalize_inputs` for that).
    Note that this function is not called here, but in the code that prepares
    the datasets.

    Params:
        examples: dict for examples (i.e. inputs and targets stored in a single
            dict)

    Returns:
        preprocessed examples

    """
    return {
        INPUT_NAME: examples["input_xs_patches"],
        TARGET_NAME: tf.one_hot(
            tf.squeeze(tf.cast(examples["labels_patches"], tf.int32), axis=-1),
            depth=N_CLASSES
        )
    }


def train(model_dir, batch_size, learning_rate, nb_epochs, ds_train, ds_valid, ds_test):
    """
    Create, train, and save the model.

    Params:
        model_dir: output directory for the SavedModel
        batch_size: batch size
        learning_rate: learning rate
        nb_epochs: number of training epochs
        ds_train: training dataset
        ds_valid: validation dataset
        ds_test: testing dataset

    """

    strategy = tf.distribute.MirroredStrategy()  # For single or multi-GPUs
    with strategy.scope():
        # Model instantiation. Note that the normalize_fn is now part of the
        # model. It is mandatory to instantiate the model inside the strategy
        # scope.
        model = FCNNModel(dataset_element_spec=ds_train.element_spec)

        # Compile the model
        model.compile(
            loss=tf.keras.losses.CategoricalCrossentropy(),
            optimizer=tf.keras.optimizers.Adam(
                learning_rate=learning_rate
            ),
            metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()]
        )

        # Summarize the model (in CLI)
        model.summary()

        # Train
        model.fit(ds_train, epochs=nb_epochs, validation_data=ds_valid)

        # Evaluate against test data
        if ds_test is not None:
            model.evaluate(ds_test, batch_size=batch_size)

        # Save trained model as SavedModel
        model.save(model_dir)




#---Create TFRecords--------------------------------------------------------------------

def create_tfrecords(patches, labels, outdir):
    patches = sorted(patches)
    labels = sorted(labels)
    outdir = Path(outdir)
    if not outdir.exists():
        outdir.mkdir(exist_ok=True)
    #create a dataset
    dataset = DatasetFromPatchesImages(
        filenames_dict = {
            "input_xs_patches":patches,
            "labels_patches": labels
        }
    )
    #convert dataset into TFRecords
    dataset.to_tfrecords(output_dir=outdir, drop_remainder=False)

#----Main------------------------------------------------------------
if __name__=="__main__":
    datapath = "/home/otbuser/all/data/"
    batch_size = 8
    learning_rate = 0.0001
    nb_epochs = 5

    # create TFRecords
    patches = ['/home/otbuser/all/data/area2_0530_2022_8bands_norm_patches_A.tif', '/home/otbuser/all/data/area2_0530_2022_8bands_norm_patches_B.tif']
    labels = ['/home/otbuser/all/data/area2_0530_2022_8bands_norm_labels_A.tif', '/home/otbuser/all/data/area2_0530_2022_8bands_norm_labels_B.tif']
    create_tfrecords(patches=patches[0:1], labels=labels[0:1], outdir=datapath+"train")
    create_tfrecords(patches=patches[1:], labels=labels[1:], outdir=datapath+"valid")

    # Train the model and save the model
    train_dir = os.path.join(datapath, "train")
    valid_dir = os.path.join(datapath, "valid")
    test_dir = None # define the training directory if test dataset is available
    kwargs = {
        "batch_size": batch_size,
        "target_keys": [TARGET_NAME],
        "preprocessing_fn": dataset_preprocessing_fn
    }
    ds_train = TFRecords(train_dir).read(shuffle_buffer_size=1000, **kwargs)
    ds_valid = TFRecords(valid_dir).read(**kwargs)

    train(datapath+"sandbox_model", batch_size, learning_rate, nb_epochs, ds_train, ds_valid, ds_test=None)

----------

Epoch 1/5
Traceback (most recent call last):
  File "/home/otbuser/all/code/sandbox_model.py", line 239, in <module>
    train(datapath+"sandbox_model", batch_size, learning_rate, nb_epochs, ds_train, ds_valid, ds_test=None)
  File "/home/otbuser/all/code/sandbox_model.py", line 184, in train
    model.fit(ds_train, epochs=nb_epochs, validation_data=ds_valid)
  File "/opt/otbtf/lib/python3/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_file2pdlf9di.py", line 15, in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
ValueError: in user code:

    File "/opt/otbtf/lib/python3/dist-packages/keras/engine/training.py", line 1284, in train_function  *
        return step_function(self, iterator)
    File "/opt/otbtf/lib/python3/dist-packages/keras/engine/training.py", line 1268, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/opt/otbtf/lib/python3/dist-packages/keras/engine/training.py", line 1249, in run_step  **
        outputs = model.train_step(data)
    File "/opt/otbtf/lib/python3/dist-packages/keras/engine/training.py", line 1051, in train_step
        loss = self.compute_loss(x, y, y_pred, sample_weight)
    File "/opt/otbtf/lib/python3/dist-packages/keras/engine/training.py", line 1109, in compute_loss
        return self.compiled_loss(
    File "/opt/otbtf/lib/python3/dist-packages/keras/engine/compile_utils.py", line 265, in __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    File "/opt/otbtf/lib/python3/dist-packages/keras/losses.py", line 142, in __call__
        losses = call_fn(y_true, y_pred)
    File "/opt/otbtf/lib/python3/dist-packages/keras/losses.py", line 268, in call  **
        return ag_fn(y_true, y_pred, **self._fn_kwargs)
    File "/opt/otbtf/lib/python3/dist-packages/keras/losses.py", line 1984, in categorical_crossentropy
        return backend.categorical_crossentropy(
    File "/opt/otbtf/lib/python3/dist-packages/keras/backend.py", line 5559, in categorical_crossentropy
        target.shape.assert_is_compatible_with(output.shape)

ValueError: Shapes (8, 1, 1, 20) and (8, 64, 64, 20) are incompatible

Hello @srikar ,

You must train this model with input/output of the same size.
This is a U-Net-like model, which needs input and output of the same size (in your case, 64x64 pixels).

The confusion comes from the fact that PatchesExtraction can extract an additional 1x1 patch carrying the values of the vector data (of the specified field). This is optional, and you don’t always need this 1x1 patch.

  • If you have dense terrain truth (i.e. one terrain-truth patch per input patch): to extract two sets of patches of the same size, you must use PatchesExtraction with the environment variable OTB_TF_NSOURCES set to 2 (one source for the input image, the other for the labels); a sketch of such a call is shown after this list.

  • If your terrain truth is sparse (i.e. one label value per patch), you can build a fully convolutional model that outputs a 1x1 label for a 64x64 input (use convolutions without padding to shrink the output size, layer after layer), and use PatchesExtraction with 64x64 patches for the input and 1x1 patches for the labels.
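For the first (dense) case, a minimal, untested sketch of such a call from Python could look like the following. The file paths are placeholders, and I assume your terrain truth has already been rasterized into a label image on the same grid as the input:

import os
import otbApplication

# OTB_TF_NSOURCES must be set before the application is created
os.environ["OTB_TF_NSOURCES"] = "2"

app = otbApplication.Registry.CreateApplication("PatchesExtraction")

# Source 1: the input image, 64x64 patches
app.SetParameterStringList("source1.il", ["/path/to/image.tif"])          # placeholder path
app.SetParameterInt("source1.patchsizex", 64)
app.SetParameterInt("source1.patchsizey", 64)
app.SetParameterString("source1.out", "/path/to/patches_xs.tif")          # placeholder path

# Source 2: the rasterized terrain truth, also 64x64 patches
app.SetParameterStringList("source2.il", ["/path/to/labels_raster.tif"])  # placeholder path
app.SetParameterInt("source2.patchsizex", 64)
app.SetParameterInt("source2.patchsizey", 64)
app.SetParameterString("source2.out", "/path/to/patches_labels.tif")      # placeholder path

# Sampling locations (e.g. from SampleSelection) and the class field
app.SetParameterString("vec", "/path/to/points.shp")                      # placeholder path
app.SetParameterString("field", "class")

app.ExecuteAndWriteOutput()

The patches image written by source2 then plays the role of the labels file when you build the TFRecords.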

Hello @remi.cresson,

Thank you so much for your prompt response and solutions to overcome our block.

We have tried both of your suggested solutions and we need some more help:

Solution 1:
We have tried using OTB_TF_NSOURCES = 2 as you suggested, in the code below, but we didn't see any change in the label output dimension, nor any extra file being created. We are still getting only one label of dimension (1x1x1) per patch (64x64x8), whereas the output of the model expects a 64x64 label. How do we get a label output equal in size to the input? What are we missing or doing wrong here?

def PatchesExtraction(apptype, datapath, input, vec, out_patches, out_labels, patchsize,OTB_TF_NSOURCES = 2):
        app = otbApplication.Registry.CreateApplication(apptype)
        app.SetParameterStringList("source1.il", [datapath + input])
        app.SetParameterString("source1.out", datapath + out_patches) 
        app.SetParameterInt("source1.patchsizex", patchsize)
        app.SetParameterInt("source1.patchsizey", patchsize)
        app.SetParameterString("vec", datapath + vec)
        app.SetParameterString("field", "class")
        app.SetParameterString("outlabels", datapath + out_labels)
        app.ExecuteAndWriteOutput()

Output shapes being generated, as seen in output_shapes.json:

{
    "input_xs_patches": [
        64,
        64,
        8
    ],
    "labels_patches": [
        1,
        1,
        1
    ]
}


Solution 2:

Also, as suggested by you, we have built a fully convolutional model that outputs a 1x1 label for an input 64x64.

        out_conv5 = _conv(norm_inp, 8, "conv5")
        out_conv1 = _conv(out_conv5, 16, "conv1")
        out_conv2 = _conv(out_conv1, 32, "conv2")
        out_conv3 = _conv(out_conv2, 64, "conv3")
        out_conv4 = _conv(out_conv3, 64, "conv4")
        out_tconv1 = _tconv(out_conv4, 64, "tconv1") + out_conv3
        out_tconv2 = _tconv(out_tconv1, 32, "tconv2") + out_conv2
        out_tconv3 = _tconv(out_tconv2, 16, "tconv3") + out_conv1
        out_tconv5 = _tconv(out_tconv3, 8, "tconv5") + out_conv5
        out_tconv4 = _tconv(out_tconv5, N_CLASSES, "classifier", None)
        # Replace the transposed convolutions' output with global average pooling
        gap = tf.keras.layers.GlobalAveragePooling2D(name="global_avg_pool")(out_tconv4)
        gap = tf.expand_dims(gap, axis=1)
        gap = tf.expand_dims(gap, axis=1)
        softmax_op = tf.keras.layers.Softmax(name=OUTPUT_SOFTMAX_NAME)
        predictions = softmax_op(gap)
        #print(predictions.shape)

        return {TARGET_NAME: predictions}

Although the training was successful and a model was built, we are encountering a runtime error when trying to use TensorflowModelServe.

import otbApplication
app = otbApplication.Registry.CreateApplication("TensorflowModelServe")
app.SetParameterStringList("source1.il", ['/home/otbuser/all/data/area2_0530_2022_8bands_norm.tif'])
app.SetParameterInt("source1.rfieldx", 256)
app.SetParameterInt("source1.rfieldy", 256)
app.SetParameterString("source1.placeholder", "input_xs")
app.SetParameterString("model.dir", "/home/otbuser/all/data/sandbox_model")
#app.EnableParameter("fullyconv")
app.SetParameterStringList("output.names", ["predictions_softmax_tensor"]) 
app.SetParameterInt("output.efieldx", 128)
app.SetParameterInt("output.efieldy", 128)
app.SetParameterString("out", "/home/otbuser/all/data/softmax.tif")
app.ExecuteAndWriteOutput()
RuntimeError: Exception thrown in otbApplication Application_WriteOutput: /usr/include/ITK-4.13/itkImageConstIterator.h:210:
itk::ERROR: Region ImageRegion (0x7ffcd3f01910)
  Dimension: 2
  Index: [-64, -64]
  Size: [256, 256]
 is outside of buffered region ImageRegion (0x55affcf20e80)
  Dimension: 2
  Index: [0, 0]
  Size: [257, 257]

Also, the fullyconv parameter is causing an exception; it is not being recognized at all.

Exception: TensorflowModelServe: parameter 'fullyconv' was not recognized. 

Available keys are ('source1', 'source1.il', 'source1.rfieldx', 'source1.rfieldy', 'source1.placeholder', 'model', 'model.dir', 'model.userplaceholders', 'model.fullyconv', 'model.tagsets', 'output', 'output.spcscale', 'output.names', 'output.efieldx', 'output.efieldy', 'optim', 'optim.disabletiling', 'optim.tilesizex', 'optim.tilesizey', 'out')

In conclusion, we seek clarity regarding how to create a label output with the same dimensions as the patch, and regarding the usage of ModelServe. We are not clear on how 'Postprocessing to avoid blocking artifacts' is done, and more clarity is needed in the documentation about what the parameters of the ModelServe application do.

Thanks in advance,
Srikar

Yes, you must also set the source2 parameters (il, patchsizex, …). The error message you probably encounter at this point should tell you what is missing.

Also, do not set the outlabels parameter: you don't need it, because those are the 1x1 patches carrying the class value of the vector data field.

Solution 2:

Also, as suggested by you, we have built a fully convolutional model that outputs a 1x1 label for an input 64x64.

Your model is not fully convolutional, since a global pooling is applied at the end, making the output size always 1x1 whatever the input size. But I believe you noticed that and commented line 8, which is okay. Now, you could apply this model in a patch-based fashion, for every pixel, changing source1.rfieldx/y to 64 and output.efieldx/y to 1: this way the model is applied to the same kind of image it was trained on.
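For illustration, such a patch-based inference call could look roughly like this (the paths are placeholders; the other parameters follow what is described above and in your earlier snippet):

import otbApplication

app = otbApplication.Registry.CreateApplication("TensorflowModelServe")
app.SetParameterStringList("source1.il", ["/path/to/image.tif"])   # placeholder path
app.SetParameterInt("source1.rfieldx", 64)   # receptive field = training patch size
app.SetParameterInt("source1.rfieldy", 64)
app.SetParameterString("source1.placeholder", "input_xs")
app.SetParameterString("model.dir", "/path/to/sandbox_model")      # placeholder path
app.SetParameterStringList("output.names", ["predictions_softmax_tensor"])
app.SetParameterInt("output.efieldx", 1)     # one output pixel per receptive field
app.SetParameterInt("output.efieldy", 1)
app.SetParameterString("out", "/path/to/classif.tif")              # placeholder path
app.ExecuteAndWriteOutput()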

This model will be really slow, so you could maybe build an FCN instead: just stack as many convolutions with stride 1 and no padding ("valid" instead of "same" in the convolution operator) as needed to get a 1x1 output from a 64x64 input. Then you could set model.fullyconv to 1 at inference time!
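To make this concrete, here is a rough, untested sketch of what such a get_outputs() could look like, reusing the constants from your script above. The depths and the number of layers are arbitrary; the kernel sizes are only chosen so that a 64x64 input shrinks exactly to 1x1:

    def get_outputs(self, normalized_inputs: dict) -> dict:
        net = normalized_inputs[INPUT_NAME]
        # Each 3x3 "valid" convolution (stride 1, no padding) removes 2 pixels
        # in each spatial dimension: 64 -> 62 -> 60 -> 58
        for i, depth in enumerate([16, 32, 64]):
            net = tf.keras.layers.Conv2D(
                filters=depth, kernel_size=3, strides=1,
                padding="valid", activation="relu", name=f"conv{i + 1}"
            )(net)
        # One last big "valid" convolution collapses the remaining 58x58 pixels
        # to 1x1 and acts as a convolutional classification head. In practice
        # you would rather stack more small convolutions than use such a large
        # kernel.
        net = tf.keras.layers.Conv2D(
            filters=N_CLASSES, kernel_size=58, strides=1,
            padding="valid", activation=None, name="classifier"
        )(net)
        predictions = tf.keras.layers.Softmax(name=OUTPUT_SOFTMAX_NAME)(net)
        return {TARGET_NAME: predictions}

With such a model, the 1x1 label patches you already extract should match the model output shape once one-hot encoded in dataset_preprocessing_fn.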

Greetings @remi.cresson

We’ve explored the suggested solutions, but the issue persists. Could we please revisit the solutions proposed earlier for further clarity?

For Solution - 1:
What we are currently doing is normalizing the input and passing it to PolygonClassStatistics; the output of PolygonClassStatistics goes into SampleSelection, and the output of SampleSelection is used as input to PatchesExtraction, from which we extract the patches and labels.

input = normalized_input                # use the normalized input !! 

out_patches_A = input.split('.')[0] + "_patches_A.tif"
out_labels_A = input.split('.')[0] + "_labels_A.tif"

out_patches_B = input.split('.')[0] + "_patches_B.tif"
out_labels_B = input.split('.')[0] + "_labels_B.tif"

#--------------------------------------------------------------------------------
apptype = "PolygonClassStatistics"
vec = "area2_0123_2023_raster_classification_13.shp"
output = "area2_0123_2023_raster_classification_13_vecstats.xml"

PolygonClassStatistics(apptype, datapath, input, vec, output)
print("\Stats created")
#------------------------------------------------------------------------------
apptype = "SampleSelection"
instats = "area2_0123_2023_raster_classification_13_vecstats.xml"
output_A = "area2_0123_2023_raster_classification_13_points_A.shp"
output_B = "area2_0123_2023_raster_classification_13_points_B.shp"

SampleSelection(apptype, datapath, input, vec, instats, output_A)
print("\nSamples A created")
SampleSelection(apptype, datapath, input, vec, instats, output_B)
print("\nSamples B created")
#------------------------------------------------------------------------------
out_patches_A = input.split('.')[0] + "_patches_A.tif"
out_labels_A = input.split('.')[0] + "_labels_A.tif"

out_patches_B = input.split('.')[0] + "_patches_B.tif"
out_labels_B = input.split('.')[0] + "_labels_B.tif"
#------------------------------------------------------------------------------
apptype = "PatchesExtraction"
patchsize = 128		#16?
vec_A = output_A
vec_B = output_B

PatchesExtraction(apptype, datapath, input, vec_A, out_patches_A, out_labels_A, patchsize)
print("\nPatches A created")
PatchesExtraction(apptype, datapath, input, vec_B, out_patches_B, out_labels_B, patchsize)
print("\nPatches B created")

These are the steps we have currently taken in PatchesExtraction:

  1. Removing the outlabels field altogether, as suggested.
  2. Adding OTB_TF_NSOURCES = 2 and setting all the parameters for "source2.il", but it gives a RuntimeError:
RuntimeError: Exception thrown in otbApplication Application_SetParameterStringList: /src/otb/otb/Modules/Wrappers/ApplicationEngine/src/otbWrapperParameterGroup.cxx:470:
itk::ERROR: ParameterList(0x5619dee818c0): Could not find parameter source2.il
  3. Not setting the patch size at all, which also gives a RuntimeError: cannot convert patchsizex to int (no value). The application documentation says: "For a SPOT6 image for instance, the patch size can be 64x64 and for an input Sentinel-2 time series the patch size could be 1x1. Note that if a dimension size is not defined, the largest one will be used (i.e. input image dimensions)."

We also tried visualizing the patches, which form a big rectangle (something like a building); however, the label visualization is a very thin rectangle superimposed on the patches' rectangle, and the two do not overlap as they are supposed to.

This is how we are using PatchesExtraction:

def PatchesExtraction(apptype, datapath, input, vec, out_patches, out_labels, patchsize, OTB_TF_NSOURCES = 2):
        # trying OTB_TF_NSOURCES = 2
        app = otbApplication.Registry.CreateApplication(apptype)
        app.SetParameterStringList("source1.il", [datapath + input])
        app.SetParameterInt("source1.patchsizex", patchsize)
        app.SetParameterInt("source1.patchsizey", patchsize)
        app.SetParameterString("vec", datapath + vec)
        app.SetParameterString("field", "class")
        app.SetParameterString("source1.out", datapath + out_patches)
        # app.SetParameterString("outlabels", datapath + out_labels)

        # app.SetParameterStringList("source2.il", [datapath + input])
        # app.SetParameterInt("source2.patchsizex", patchsize)
        # app.SetParameterInt("source2.patchsizey", patchsize)
        # app.SetParameterString("vec", datapath + vec)
        # app.SetParameterString("field", "class")
        # app.SetParameterString("source2.out", datapath + out_labels)

        app.ExecuteAndWriteOutput()

I think we are not implementing solution 1 correctly. Could you please elaborate on how to implement it?


For solution 2, with the same above-mentioned arguments for PatchesExtraction (patch size = 128), we are somehow getting patches of (8x8x8) while the labels are still (1x1x1).
Even before the patches were (8x8x8), we were getting the same error.

def get_outputs(self, normalized_inputs: dict) -> dict:

        norm_inp = normalized_inputs[INPUT_NAME]

        def _conv(inp, depth, name):
            conv_op = tf.keras.layers.Conv2D(
                filters=depth,
                kernel_size=3,
                strides=2,
                activation="relu",
                padding="same",
                name=name
            )
            return conv_op(inp)

        def _tconv(inp, depth, name, activation="relu"):
            tconv_op = tf.keras.layers.Conv2DTranspose(
                filters=depth,
                kernel_size=3,
                strides=2,
                activation=activation,
                padding="same",
                name=name
            )
            return tconv_op(inp)
        out_conv5 = _conv(norm_inp, 8, "conv5")
        out_conv1 = _conv(out_conv5, 16, "conv1")
        out_conv2 = _conv(out_conv1, 32, "conv2")
        out_conv3 = _conv(out_conv2, 64, "conv3")
        out_conv4 = _conv(out_conv3, 64, "conv4")
        out_tconv1 = _tconv(out_conv4, 64, "tconv1") + out_conv3
        out_tconv2 = _tconv(out_tconv1, 32, "tconv2") + out_conv2
        out_tconv3 = _tconv(out_tconv2, 16, "tconv3") + out_conv1
        out_tconv5 = _tconv(out_tconv3, 8, "tconv5") + out_conv5
        out_tconv4 = _tconv(out_tconv5, N_CLASSES, "classifier", None)
        gap = out_tconv4
        # gap = tf.keras.layers.Flatten(gap)
        gap = tf.expand_dims(gap, axis=0)
        print("Batch size:", gap.shape[0])
        print("Height:", gap.shape[1])
        print("Width:", gap.shape[2])
        print("Channels:", gap.shape[3])

        softmax_op = tf.keras.layers.Softmax(name=OUTPUT_SOFTMAX_NAME)
        predictions = softmax_op(gap)
 
        predictions = tf.argmax(predictions)

        return {TARGET_NAME: predictions}
def create_tfrecords(patches, labels, outdir):
    
    patches = sorted(patches)
    labels = sorted(labels)
    outdir = Path(outdir)
    if not outdir.exists():
        outdir.mkdir(exist_ok=True)

    #create a dataset
    dataset = DatasetFromPatchesImages(
        filenames_dict = {
            "input_xs_patches":patches,
            "labels_patches": labels
        }
    )

    is_eager_execution = tf.executing_eagerly()
    print('Is eager execution:', is_eager_execution)
    dataset.to_tfrecords(output_dir=outdir, drop_remainder=False)

#----Main------------------------------------------------------------
if __name__=="__main__":
    datapath = "/home/otbuser/all/data/"
    batch_size = 5
    learning_rate = 0.0001
    nb_epochs = 5

    # create TFRecords
    tf.compat.v1.enable_eager_execution()
    patches = ['/home/otbuser/all/data/area2_0530_2022_8bands_patches_A.tif', '/home/otbuser/all/data/area2_0530_2022_8bands_patches_B.tif']
    labels = ['/home/otbuser/all/data/area2_0530_2022_8bands_labels_A.tif', '/home/otbuser/all/data/area2_0530_2022_8bands_labels_B.tif']
    create_tfrecords(patches=patches[0:1], labels=labels[0:1], outdir=datapath+"train")
    create_tfrecords(patches=patches[1:], labels=labels[1:], outdir=datapath+"valid")

    # Train the model and save the model
    train_dir = os.path.join(datapath, "train")
    valid_dir = os.path.join(datapath, "valid")
    test_dir = None # define the training directory if test dataset is available
    kwargs = {
        "batch_size": batch_size,
        "target_keys": [TARGET_NAME],
        "preprocessing_fn": dataset_preprocessing_fn
    }
    #shuffle_buffer_size=1000,
    ds_train = TFRecords(train_dir).read(**kwargs)
    ds_valid = TFRecords(valid_dir).read(**kwargs)

    train(datapath+"sandbox_model", batch_size, learning_rate, nb_epochs, ds_train, ds_valid, ds_test=None)
    tf.compat.v1.disable_eager_execution()

For solution 2, these are the changes we made:

  1. Removed the global average pooling layer, as suggested.
  2. We were getting an error if we did not expand the dimensions before the softmax layer.
    This is where we were getting the error, in postprocess_outputs() in model.py:
    cropped = out_tensor[:, crop:-crop, crop:-crop, :]
    which we think is due to a compatibility issue between TF1 and TF2.

This is the output and error.

2023-10-20 07:07:14 INFO     Number of samples: 3980
2023-10-20 07:07:14 INFO     output_types: {'input_xs_patches': tf.float32, 'labels_patches': tf.uint8}
2023-10-20 07:07:14 INFO     output_shapes: {'input_xs_patches': (8, 8, 8), 'labels_patches': (1, 1, 1)}
Is eager execution: True
2023-10-20 07:07:15 INFO     3980 samples
100%|██████████████████████████████████████████████████| 40/40 [00:00<00:00, 46.86it/s]
2023-10-20 07:07:16 INFO     Number of samples: 3980
2023-10-20 07:07:16 INFO     output_types: {'input_xs_patches': tf.float32, 'labels_patches': tf.uint8}
2023-10-20 07:07:16 INFO     output_shapes: {'input_xs_patches': (8, 8, 8), 'labels_patches': (1, 1, 1)}
Is eager execution: True
2023-10-20 07:07:16 INFO     3980 samples
100%|██████████████████████████████████████████████████| 40/40 [00:00<00:00, 45.51it/s]
2023-10-20 07:07:16 INFO     Searching TFRecords in /home/otbuser/all/data/train/*.records...
2023-10-20 07:07:16 INFO     Number of matching TFRecords: 40
2023-10-20 07:07:16 INFO     Reducing number of records to : 40
2023-10-20 07:07:17 INFO     Searching TFRecords in /home/otbuser/all/data/valid/*.records...
2023-10-20 07:07:17 INFO     Number of matching TFRecords: 40
2023-10-20 07:07:17 INFO     Reducing number of records to : 40
WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
2023-10-20 07:07:17 WARNING  There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)
2023-10-20 07:07:17 INFO     Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)
2023-10-20 07:07:17 INFO     Dataset input element spec: {'input_xs': TensorSpec(shape=(5, 8, 8, 8), dtype=tf.float32, name=None)}
2023-10-20 07:07:17 INFO     Found dataset input keys: ['input_xs']
2023-10-20 07:07:17 INFO     Inputs shapes: {'input_xs': TensorShape([Dimension(8), Dimension(8), Dimension(8)])}
2023-10-20 07:07:17 INFO     Inference cropping values: [16, 32, 64, 96, 128]
2023-10-20 07:07:17 INFO     Original shape for input input_xs: [Dimension(8), Dimension(8), Dimension(8)]
2023-10-20 07:07:17 INFO     New shape for input input_xs: [None, None, Dimension(8)]
2023-10-20 07:07:17 INFO     Model inputs: {'input_xs': <KerasTensor: shape=(?, ?, ?, 8) dtype=float32 (created by layer 'input_xs')>}
2023-10-20 07:07:17 INFO     Normalized model inputs: {'input_xs': <KerasTensor: shape=(?, ?, ?, 8) dtype=float32 (created by layer 'tf.math.multiply')>}
Batch size: 1
Height: ?
Width: ?
Channels: ?
2023-10-20 07:07:17 INFO     Model outputs: {'predictions': <KerasTensor: shape=(?, ?, ?, 20) dtype=int64 (created by layer 'tf.math.argmax')>}
2023-10-20 07:07:17 INFO     Adding extra output for tensor predictions with crop 16 (tf.math.argmax_crop16)
2023-10-20 07:07:17 INFO     Adding extra output for tensor predictions with crop 32 (tf.math.argmax_crop32)
2023-10-20 07:07:17 INFO     Adding extra output for tensor predictions with crop 64 (tf.math.argmax_crop64)
2023-10-20 07:07:17 INFO     Adding extra output for tensor predictions with crop 96 (tf.math.argmax_crop96)
2023-10-20 07:07:17 INFO     Adding extra output for tensor predictions with crop 128 (tf.math.argmax_crop128)
WARNING:tensorflow:From /home/otbuser/all/code/cocktail/sandbox/otbtf/combined_model.py:294: The name tf.keras.optimizers.Adam is deprecated. Please use tf.keras.optimizers.legacy.Adam instead.

2023-10-20 07:07:17 WARNING  From /home/otbuser/all/code/cocktail/sandbox/otbtf/combined_model.py:294: The name tf.keras.optimizers.Adam is deprecated. Please use tf.keras.optimizers.legacy.Adam instead.

Model: "FCNNModel"
______________________________________________________________________________________________________________________________________________________
 Layer (type)                                    Output Shape                     Param #           Connected to                                      
======================================================================================================================================================
 input_xs (InputLayer)                           [(None, None, None, 8)]          0                 []                                                
                                                                                                                                                      
 tf.cast (TFOpLambda)                            (None, None, None, 8)            0                 ['input_xs[0][0]']                                
                                                                                                                                                      
 tf.math.multiply (TFOpLambda)                   (None, None, None, 8)            0                 ['tf.cast[0][0]']                                 
                                                                                                                                                      
 conv5 (Conv2D)                                  (None, None, None, 8)            584               ['tf.math.multiply[0][0]']                        
                                                                                                                                                      
 conv1 (Conv2D)                                  (None, None, None, 16)           1168              ['conv5[0][0]']                                   
                                                                                                                                                      
 conv2 (Conv2D)                                  (None, None, None, 32)           4640              ['conv1[0][0]']                                   
                                                                                                                                                      
 conv3 (Conv2D)                                  (None, None, None, 64)           18496             ['conv2[0][0]']                                   
                                                                                                                                                      
 conv4 (Conv2D)                                  (None, None, None, 64)           36928             ['conv3[0][0]']                                   
                                                                                                                                                      
 tconv1 (Conv2DTranspose)                        (None, None, None, 64)           36928             ['conv4[0][0]']                                   
                                                                                                                                                      
 tf.__operators__.add (TFOpLambda)               (None, None, None, 64)           0                 ['tconv1[0][0]',                                  
                                                                                                     'conv3[0][0]']                                   
                                                                                                                                                      
 tconv2 (Conv2DTranspose)                        (None, None, None, 32)           18464             ['tf.__operators__.add[0][0]']                    
                                                                                                                                                      
 tf.__operators__.add_1 (TFOpLambda)             (None, None, None, 32)           0                 ['tconv2[0][0]',                                  
                                                                                                     'conv2[0][0]']                                   
                                                                                                                                                      
 tconv3 (Conv2DTranspose)                        (None, None, None, 16)           4624              ['tf.__operators__.add_1[0][0]']                  
                                                                                                                                                      
 tf.__operators__.add_2 (TFOpLambda)             (None, None, None, 16)           0                 ['tconv3[0][0]',                                  
                                                                                                     'conv1[0][0]']                                   
                                                                                                                                                      
 tconv5 (Conv2DTranspose)                        (None, None, None, 8)            1160              ['tf.__operators__.add_2[0][0]']                  
                                                                                                                                                      
 tf.__operators__.add_3 (TFOpLambda)             (None, None, None, 8)            0                 ['tconv5[0][0]',                                  
                                                                                                     'conv5[0][0]']                                   
                                                                                                                                                      
 classifier (Conv2DTranspose)                    (None, None, None, 20)           1460              ['tf.__operators__.add_3[0][0]']                  
                                                                                                                                                      
 tf.expand_dims (TFOpLambda)                     (1, None, None, None, 20)        0                 ['classifier[0][0]']                              
                                                                                                                                                      
 predictions_softmax_tensor (Softmax)            (1, None, None, None, 20)        0                 ['tf.expand_dims[0][0]']                          
                                                                                                                                                      
 tf.math.argmax (TFOpLambda)                     (None, None, None, 20)           0                 ['predictions_softmax_tensor[0][0]']              
                                                                                                                                                      
 tf.__operators__.getitem_4 (SlicingOpLambda)    (None, None, None, 20)           0                 ['tf.math.argmax[0][0]']                          
                                                                                                                                                      
 tf.__operators__.getitem (SlicingOpLambda)      (None, None, None, 20)           0                 ['tf.math.argmax[0][0]']                          
                                                                                                                                                      
 tf.__operators__.getitem_1 (SlicingOpLambda)    (None, None, None, 20)           0                 ['tf.math.argmax[0][0]']                          
                                                                                                                                                      
 tf.__operators__.getitem_2 (SlicingOpLambda)    (None, None, None, 20)           0                 ['tf.math.argmax[0][0]']                          
                                                                                                                                                      
 tf.__operators__.getitem_3 (SlicingOpLambda)    (None, None, None, 20)           0                 ['tf.math.argmax[0][0]']                          
                                                                                                                                                      
 tf.math.argmax_crop128 (Activation)             (None, None, None, 20)           0                 ['tf.__operators__.getitem_4[0][0]']              
                                                                                                                                                      
 tf.math.argmax_crop16 (Activation)              (None, None, None, 20)           0                 ['tf.__operators__.getitem[0][0]']                
                                                                                                                                                      
 tf.math.argmax_crop32 (Activation)              (None, None, None, 20)           0                 ['tf.__operators__.getitem_1[0][0]']              
                                                                                                                                                      
 tf.math.argmax_crop64 (Activation)              (None, None, None, 20)           0                 ['tf.__operators__.getitem_2[0][0]']              
                                                                                                                                                      
 tf.math.argmax_crop96 (Activation)              (None, None, None, 20)           0                 ['tf.__operators__.getitem_3[0][0]']              
                                                                                                                                                      
======================================================================================================================================================
Total params: 124,452
Trainable params: 124,452
Non-trainable params: 0
______________________________________________________________________________________________________________________________________________________
2023-10-20 07:07:17.861851: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype string and shape [40]
         [[{{node Placeholder/_0}}]]
2023-10-20 07:07:17.862284: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype string and shape [40]
         [[{{node Placeholder/_0}}]]
Epoch 1/5
INFO:tensorflow:Error reported to Coordinator: Exception encountered when calling layer 'tf.__operators__.add_2' (type TFOpLambda).

Dimensions must be equal, but are 8 and 2 for '{{node FCNNModel/tf.__operators__.add_2/AddV2}} = AddV2[T=DT_FLOAT](FCNNModel/tconv3/Relu, FCNNModel/conv1/Relu)' with input shapes: [5,8,8,16], [5,2,2,16].

Call arguments received by layer 'tf.__operators__.add_2' (type TFOpLambda):
  • x=tf.Tensor(shape=(5, 8, 8, 16), dtype=float32)
  • y=tf.Tensor(shape=(5, 2, 2, 16), dtype=float32)
  • name=None
Traceback (most recent call last):
  File "/opt/otbtf/lib/python3/dist-packages/tensorflow/python/training/coordinator.py", line 293, in stop_on_exception
    yield

As you can see, there is a mismatch between x and y, which remains the same no matter the batch size or the input:
x = (5, 8, 8, 16), y = (5, 2, 2, 16)

We also tried tweaking the layers to use "valid" padding, which gave an error on the 2nd conv layer.
We also tried changing some other parameters and building a smaller U-Net as well.
Everything led to the same error (the mismatch between x and y).

Any assistance would be greatly appreciated as we have encountered this issue for quite some time.

Solution 1
itk::ERROR: ParameterList(0x5619dee818c0): Could not find parameter source2.il

You get this error because OTB_TF_NSOURCES is not correctly set.
Before instantiating your application from Python, you can set it with os.environ["OTB_TF_NSOURCES"] = "2".
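For example (sketch; only the relevant lines are shown):

import os
# Must be set before the PatchesExtraction application is created
os.environ["OTB_TF_NSOURCES"] = "2"

import otbApplication
app = otbApplication.Registry.CreateApplication("PatchesExtraction")
# source2.il, source2.out, source2.patchsizex/y are now available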

Solution 2

There are too many things here that I can't answer because I don't have all your code. Plus, it looks like you are mixing Keras and the TF v1 API? Unfortunately these two won't pair very well…

Please take a look at this tutorial, which explains how to use OTBTF and Keras to do the patches sampling, build/train/evaluate the model, and run the inference.
It's very close to the example you have followed, but with all steps detailed and explained (sorry it wasn't available at the time you posted!), plus the code and the data. This should really help you!

Greetings Remi,

Thank you for the tutorial; it helped us move ahead and get an idea of how you implemented it with your data. We followed it verbatim and it worked smoothly with your data.
However, with our data, PatchesSelection is working, but when we use those vector files to extract patches with the PatchesExtraction application, most of the samples are rejected. To give an idea, around 3500+ samples were rejected and only 5 were selected. We have double-checked in QGIS that the vector points align with the TIF files we are trying to create patches from.

2023-11-08 19:26:48 (INFO) [pyOTB] PatchesExtraction: argument for parameter "source1.il" was converted to list
2023-11-08 19:26:48 (INFO): Loading metadata from official product
2023-11-08 19:26:48 (INFO) PatchesExtraction: Rejecting samples that have at least one no-data value
Sampling patches: 100% [**************************************************]2023-11-08 19:26:50 (INFO) PatchesExtraction: Number of samples collected: 5
2023-11-08 19:26:50 (INFO) PatchesExtraction: Number of samples rejected : 3674
2023-11-08 19:26:50 (WARNING) [pyOTB] PatchesExtraction: overwriting file /home/otbuser/all/data/output/vec_train_xs_patches_label_new.tif?&gdal:co:COMPRESS=DEFLATE
2023-11-08 19:26:50 (INFO): Default RAM limit for OTB is 256 MB
2023-11-08 19:26:50 (INFO): GDAL maximum cache size is 799 MB
2023-11-08 19:26:50 (INFO): OTB will use at most 4 threads
2023-11-08 19:26:50 (INFO): File /home/otbuser/all/data/output/vec_train_xs_patches_label_new.tif will be written in 1 blocks of 64x320 pixels
Writing /home/otbuser/all/data/output/vec_train_xs_patches_label_new.tif?&gdal:co:COMPRESS=DEFLATE...: 0% [                                                  ]2023-11-08 19:26:50 (WARNING): Could not get the source process object. Progress report might be buggy
Writing /home/otbuser/all/data/output/vec_train_xs_patches_label_new.tif?&gdal:co:COMPRESS=DEFLATE...: 100% [**************************************************] (0s)

Can you please provide some insight into the criteria for selection and rejection of samples, or point out anything wrong with the implemented code?

Just to provide additional information:
labels_img = '…/new_terrain_truth_rasterized.tif' (it is a TIF file)
vec_train, vec_valid, vec_test = '…/vec.geojson' (they are GeoJSON files)
pixel_type = {"source1.out": "float"}

Hi,
Here are the reasons why patches could be rejected:

  • patch lies outside the image, at least for 1 source
  • patch contains at least one no-data value (specified in sourceX.nodata, default is disabled), at least for 1 source

Check sources CRS, no-data, and extents.
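For example, if you know the no-data value of your image, you can declare it explicitly so that only patches containing truly invalid pixels are rejected (a sketch; the path and the value 0 are placeholders, and I assume the float parameter setter here):

import otbApplication

app = otbApplication.Registry.CreateApplication("PatchesExtraction")
app.SetParameterStringList("source1.il", ["/path/to/image.tif"])  # placeholder path
# Declare the image no-data value (placeholder value shown)
app.SetParameterFloat("source1.nodata", 0.0)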

Rémi