Onnx Model tutorial

This tutorial is part of the Model module guide. Here, we explore how you can use the OnnxModel wrapper to run your Onnx deep learning models in the benchmark.

From now on, we assume you are running your code from the project root folder.

The following function will be used throughout this tutorial to display denoising results,

import matplotlib.pyplot as plt
import numpy as np


def display_results(clean_imgs, noisy_imgs, rest_imgs, name):
    """Display denoising results."""
    fig, axes = plt.subplots(5, 3, figsize=(15, 15))

    plt.suptitle("Denoising results using {}".format(name))

    for i in range(5):
        # Column 1: ground-truth (clean) image.
        axes[i, 0].imshow(np.squeeze(clean_imgs[i]), cmap="gray")
        axes[i, 0].axis("off")
        axes[i, 0].set_title("Ground-Truth")

        # Column 2: noisy input image.
        axes[i, 1].imshow(np.squeeze(noisy_imgs[i]), cmap="gray")
        axes[i, 1].axis("off")
        axes[i, 1].set_title("Noised Image")

        # Column 3: restored (denoised) image.
        axes[i, 2].imshow(np.squeeze(rest_imgs[i]), cmap="gray")
        axes[i, 2].axis("off")
        axes[i, 2].set_title("Restored Image")

Moreover, you can download the data we will use with the following function,

data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")

The models will be evaluated using the BSDS500 dataset,

# Validation images generator
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
                                             batch_size=8,
                                             n_channels=1,
                                             noise_config={data.utils.gaussian_noise: [25]},
                                             name="BSDS_Valid")

Onnx Model

This notebook covers how the OnnxModel class can be used to hold deep learning models and to perform inference. We remark that Onnx is a format focused on deploying trained neural networks; hence, there is no support for training models in this format.

Loading a model

Loading can only be done by specifying a “.onnx” file, which holds the model’s computational graph as well as its weights. To perform inference, we rely on the onnxruntime module for Python.

After loading a model into OnnxModel, an onnxruntime session is created so that you can perform inference.
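To illustrate what happens under the hood, here is a minimal sketch using onnxruntime directly (the model path is hypothetical),

import onnxruntime as ort

# Create an inference session from a ".onnx" file (hypothetical path).
session = ort.InferenceSession("./Additional Files/Onnx Models/dncnn.onnx")

# Inspect the graph's input and output tensors.
for tensor in session.get_inputs():
    print("[Graph Input] name: {}, shape: {}".format(tensor.name, tensor.shape))
for tensor in session.get_outputs():
    print("[Graph Output] name: {}, shape: {}".format(tensor.name, tensor.shape))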

Running Inference

Inference can be done through the “__call__” method required by the AbstractDeepLearningModel interface. This method ensures that derived classes (such as OnnxModel) can be used as functions. Below, a batch of noisy/clean images is drawn from the dataset, and the OnnxModel is used to predict the restored images,
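A sketch of this step, assuming the OnnxModel instance is called model and that the generator yields (noisy, clean) batches as configured above,

# Draw a batch of noisy/clean images from the validation generator
# (the unpacking order is assumed from the noise configuration above).
noisy_imgs, clean_imgs = next(valid_generator)

# __call__ runs the underlying onnxruntime session on the batch.
rest_imgs = model(noisy_imgs)

display_results(clean_imgs, noisy_imgs, rest_imgs, name="OnnxModel")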

[Figure: grid of denoising results (Ground-Truth / Noised Image / Restored Image) produced by display_results]

Converting models to Onnx

Onnx is intended as a bridge between different deep learning frameworks. It is used to deploy previously trained models for inference.

Each framework has its own way of converting its models to Onnx. Some, such as Pytorch and Matlab, support this natively. Others rely on non-official modules such as keras2onnx or tf2onnx.

Keras to Onnx
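A minimal sketch of this conversion using keras2onnx (the model file paths are hypothetical),

import keras2onnx
from tensorflow import keras

# Load a previously trained Keras model (hypothetical path).
keras_model = keras.models.load_model("./Additional Files/Keras Models/dncnn.h5")

# Convert the in-memory Keras model to an Onnx graph, then save it.
onnx_model = keras2onnx.convert_keras(keras_model, keras_model.name)
keras2onnx.save_model(onnx_model, "./Additional Files/Onnx Models/dncnn_keras.onnx")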

Pytorch to Onnx
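A sketch of the export using torch.onnx, assuming a trained torch.nn.Module named net (the input/output names and the dummy input's shape below match the graph printed next),

import torch

# The dummy input fixes the exported graph's shape:
# a batch of 5 single-channel 40 x 40 patches.
dummy_input = torch.randn(5, 1, 40, 40)

torch.onnx.export(net,
                  dummy_input,
                  "./Additional Files/Onnx Models/dncnn_pytorch.onnx",
                  input_names=["input"],
                  output_names=["output"])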

[Graph Input] name: input, shape: [5, 1, 40, 40]
[Graph Output] name: output, shape: [5, 1, 40, 40]

Notice that both tensors above have fixed sizes (batch size, height and width). We address this problem in the next section.

Note on Pytorch2Onnx

Since Pytorch does not support dynamic shapes (i.e. None values in the shape) at export time, the exported Onnx model will have a fixed shape, which can be problematic at inference (you can only process images by slicing them into fixed-size patches).

However, there is a workaround for this problem: process the Onnx graph and switch the fixed dimension values to “?” values (analogous to None in Tensorflow/Keras).

If you face problems with fixed-size inputs, you can use the model.utils module for the conversion:
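Since the exact model.utils helper is not shown here, the following is a hypothetical sketch of the equivalent graph edit using the onnx package directly,

import onnx

onnx_model = onnx.load("./Additional Files/Onnx Models/dncnn_pytorch.onnx")

# Replace the batch/height/width dimensions (indices 0, 2, 3) with the
# symbolic value "?", keeping the channel dimension fixed at 1.
for tensor in list(onnx_model.graph.input) + list(onnx_model.graph.output):
    dims = tensor.type.tensor_type.shape.dim
    for i in (0, 2, 3):
        dims[i].dim_param = "?"

onnx.save(onnx_model, "./Additional Files/Onnx Models/dncnn_pytorch_dynamic.onnx")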

New graph:
[Graph Input] name: "input"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim {
        dim_param: "?"
      }
      dim {
        dim_value: 1
      }
      dim {
        dim_param: "?"
      }
      dim {
        dim_param: "?"
      }
    }
  }
}

[Graph Output] name: "output"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim {
        dim_param: "?"
      }
      dim {
        dim_value: 1
      }
      dim {
        dim_param: "?"
      }
      dim {
        dim_param: "?"
      }
    }
  }
}

Tensorflow to Onnx

In order to convert a Tensorflow model to Onnx, you first need to convert all its variables to constants. To that end, the model.utils module has a function called freeze_tf_graph, which converts all the variables in the current Tensorflow graph to constants.

You can either specify a model_file (containing your Tensorflow model) to be read and frozen, or let the function grab the default graph and session. In the first case, the default graph is expected to be empty (that is, you must not have previously defined any Tensorflow computation).

In this example, we will freeze the Tensorflow model located at “./Additional Files/Tensorflow Models/from_saved_model”.
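A sketch of this call; the exact signature of freeze_tf_graph is not shown in this tutorial, so the keyword name below is an assumption,

# Freeze the saved model's variables into constants. The `model_file`
# keyword follows the description above; its exact name is an assumption.
model.utils.freeze_tf_graph(model_file="./Additional Files/Tensorflow Models/from_saved_model")

The frozen graph can then be fed to a converter such as tf2onnx.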

Matlab to Onnx

Once you have trained your model using Matlab’s Deep Learning Toolbox, you should have either a network object in your workspace, or a .mat file saved in a folder.

Taking the latter case as an example, we consider the previously trained file “./Additional Files/Matlab Models/dncnn_matlab.mat”. In order to convert it to Onnx, you can either do it from Python (by using Matlab’s engine) or directly in Matlab. Keep in mind that every command run with engine.evalc is a pure Matlab command.
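A sketch of the Python route using the Matlab engine (the .mat file is assumed to contain the network in a variable called net, as in the workspace listing below),

import matlab.engine

# Start a Matlab session from Python.
eng = matlab.engine.start_matlab()

# Every evalc call runs a pure Matlab command.
# Load the trained network from the .mat file.
eng.evalc("load('./Additional Files/Matlab Models/dncnn_matlab.mat')")

# Export the network to Onnx via the Deep Learning Toolbox exporter.
eng.evalc("exportONNXNetwork(net, './Additional Files/Onnx Models/dncnn_matlab.onnx')")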

Matlab's Workspace:

Name      Size              Bytes  Class            Attributes

ME        1x1                1091  MException
net       1x1             2258128  SeriesNetwork