AWS Machine Learning Blog

Announcing ONNX Support for Apache MXNet

Today, AWS announces the availability of ONNX-MXNet, an open source Python package to import Open Neural Network Exchange (ONNX) deep learning models into Apache MXNet. MXNet is a fully featured and scalable deep learning framework that offers APIs across popular languages such as Python, Scala, and R. With ONNX format support for MXNet, developers can build and train models with other frameworks, such as PyTorch, Microsoft Cognitive Toolkit, or Caffe2, and import these models into MXNet to run them for inference using MXNet's highly optimized and scalable engine.

We’re also excited to share that AWS will be collaborating on the ONNX format. We will be working with Facebook, Microsoft, and the deep learning community to further develop ONNX, making it accessible and useful for deep learning practitioners.

What is ONNX?

ONNX is an open source format to encode deep learning models. ONNX defines the format for the neural network’s computational graph, as well as the format for an extensive list of operators used within the graph. With ONNX being supported by an increasing list of frameworks and hardware vendors, developers working on deep learning can move between frameworks easily, picking and choosing the framework that is best suited for the task at hand.
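To make this concrete, here is a minimal sketch of how a model encoded in ONNX can be inspected with the onnx Python package. It assumes an ONNX model file is already on disk, such as the super_resolution.onnx file downloaded in the Quick Start below:

# A minimal sketch: inspecting an ONNX model's graph with the onnx package
import onnx

model = onnx.load("super_resolution.onnx")   # parse the protobuf model file
onnx.checker.check_model(model)              # validate against the ONNX spec

# The graph lists inputs, outputs, and the operators (nodes) that connect them
print([inp.name for inp in model.graph.input])
print([out.name for out in model.graph.output])
print([node.op_type for node in model.graph.node])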

Quick Start

We’ll show how you can use ONNX-MXNet to import an ONNX model into MXNet and use the imported model for inference, benefiting from MXNet's optimized execution engine.

Step 1: Installations

First, install ONNX, following the instructions in the ONNX repo.

Then, install the ONNX-MXNet package:

$ pip install onnx-mxnet
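To make sure everything is in place, you can confirm that both packages import cleanly from a Python shell:

# Quick sanity check that the packages installed correctly
import onnx
import onnx_mxnet
import mxnet as mx
print(onnx.__version__, mx.__version__)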

Step 2: Prepare an ONNX model to import

In this example, we will demonstrate importing a Super Resolution model, designed to increase the spatial resolution of images. The model was built and trained with PyTorch and exported to ONNX using PyTorch’s ONNX export API. More details about the model design are available in PyTorch’s example.
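For context, here is a minimal, hypothetical sketch of what exporting a PyTorch model to ONNX with torch.onnx.export looks like. It uses a small placeholder network rather than the actual Super Resolution model, and you don't need to run it to follow the rest of this example:

# A minimal sketch of PyTorch's ONNX export API, using a small placeholder
# network (not the actual Super Resolution model from PyTorch's example)
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU())
model.eval()

# The export traces the model with a dummy input and writes the ONNX graph to disk
dummy_input = torch.randn(1, 1, 224, 224)
torch.onnx.export(model, dummy_input, "placeholder_model.onnx")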

Download the Super Resolution ONNX model to your working directory:

$ wget https://s3.amazonaws.com/onnx-mxnet/examples/super_resolution.onnx

Step 3: Import the ONNX model into MXNet

Now that we have an ONNX model file ready, let’s import it into MXNet using the ONNX-MXNet import API. Run the following code in a Python shell:

import onnx_mxnet
# Import the ONNX model; returns the symbolic graph and the trained weights
sym, params = onnx_mxnet.import_model('super_resolution.onnx')

This creates two objects in the Python runtime: sym, the model’s symbolic graph, and params, the model’s weights. The ONNX import is now complete, and we have a standard MXNet model.
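If you’d like to peek at what was imported, the MXNet Symbol API lets you list the graph’s outputs, and params holds the loaded weight arrays:

# Optional: a quick look at the imported model
print(sym.list_outputs())               # output node names of the symbolic graph
print(len(params), "parameter arrays")  # weights and biases loaded from the ONNX file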

Step 4: Prepare input for inference 

Next, we will prepare an input image for inference. The following steps download an example image, resize it to the model’s expected input shape, and finally convert it into an MXNet NDArray.

From your shell console, download the example input image to your working directory:

$ wget https://s3.amazonaws.com/onnx-mxnet/examples/super_res_input.jpg

Then, install Pillow, a fork of the Python Imaging Library (PIL), so we can load and pre-process the input image:

$ pip install Pillow

Next, from your Python shell, run the code to prepare the image in the MXNet NDArray format:

import numpy as np
import mxnet as mx
from PIL import Image

# Load the image and resize it to the model's expected 224x224 input
img = Image.open("super_res_input.jpg").resize((224, 224))
# Convert to YCbCr and keep only the luminance (Y) channel for the model
img_ycbcr = img.convert("YCbCr")
img_y, img_cb, img_cr = img_ycbcr.split()
# Add batch and channel dimensions: shape becomes (1, 1, 224, 224)
test_image = mx.nd.array(np.array(img_y)[np.newaxis, np.newaxis, :, :])

Step 5: Create the MXNet Module

We’ll use the MXNet Module API to create the module, bind it, and assign the loaded weights.
Note that the ONNX-MXNet import API names the input layer ‘input_0’, which we use when initializing and binding the module.

# Create the module with 'input_0' as the data name, bind it for inference,
# and assign the imported weights (allow_missing/allow_extra tolerate
# parameters that don't split cleanly into arg_params and aux_params)
mod = mx.mod.Module(symbol=sym, data_names=['input_0'], label_names=None)
mod.bind(for_training=False, data_shapes=[('input_0', test_image.shape)])
mod.set_params(arg_params=params, aux_params=params, allow_missing=True, allow_extra=True)
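If you want to confirm the input name rather than take it on faith, the data input is expected to appear first in the symbol’s argument list:

# Optional check of the input layer name assigned by the importer
print(sym.list_arguments()[0])   # expected: 'input_0'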

Step 6: Run inference

Now that we have an MXNet Module that is loaded, bound, and carrying trained weights, we’re ready to run inference. We’ll prepare a single input batch and feed it forward through the network:

from collections import namedtuple

# Wrap the input in a batch and feed it forward through the network
Batch = namedtuple('Batch', ['data'])
mod.forward(Batch([test_image]))
# Extract the single-channel output image from the batch dimensions
output = mod.get_outputs()[0][0][0]
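As a quick sanity check, the Super Resolution model upscales by a factor of 3, so the 1 x 1 x 224 x 224 input should produce a 672 x 672 output:

# Sanity check on shapes (assumes the model's 3x upscaling factor)
print(test_image.shape)   # (1, 1, 224, 224)
print(output.shape)       # expected: (672, 672)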

Step 7: Examine the results

Now let’s examine the results of running the Super Resolution model on the input image:

# Rebuild the luminance channel from the network output, then merge it with
# the upsampled chrominance channels and convert back to RGB
img_out_y = Image.fromarray(np.uint8(output.asnumpy().clip(0, 255)), mode='L')
result_img = Image.merge("YCbCr", [
    img_out_y,
    img_cb.resize(img_out_y.size, Image.BICUBIC),
    img_cr.resize(img_out_y.size, Image.BICUBIC)
]).convert("RGB")
result_img.save("super_res_output.jpg")

Here’s the input image and the resulting output image. As you can see, the model increased the spatial resolution of the 224 by 224 input to 672 by 672.

[Input image and output image]

What’s Next?

We’re working with our ONNX partners and community to further develop ONNX, adding more useful operators, extending ONNX-MXNet to include export and increased operator coverage. We will be working with the Apache MXNet community to bring ONNX into MXNet core APIs.

Want to learn more?

The example is available as part of the ONNX-MXNet GitHub repo.

Check out ONNX to dive into how network graphs and operators are encoded.

Contributions are welcome!

Special thanks to the dmlc/nnvm community and Zhi Zhang, whose ONNX code was used as a reference for this implementation.

Facebook Blog:
https://research.fb.com/amazon-to-join-onnx-ai-format-drive-mxnet-support/

Microsoft Blog:
https://www.microsoft.com/en-us/cognitive-toolkit/blog/2017/11/framework-support-open-ai-ecosystem-grows/

About the Authors

Hagay Lupesko is an Engineering Manager for AWS Deep Learning. He focuses on building Deep Learning tools that enable developers and scientists to build intelligent applications. In his spare time he enjoys reading, hiking and spending time with his family.

Roshani Nagmote is a Software Developer for AWS Deep Learning. She is working on innovative tools to make Deep Learning accessible for all. In her spare time, she loves to play with her adorable nephew and is a huge dog lover.