Converting an ONNX Model¶
Introduction to ONNX¶
ONNX* is a representation format for deep learning models. ONNX allows AI developers to easily transfer models between different frameworks, which helps them choose the best combination of tools for their work. Today, PyTorch*, Caffe2*, Apache MXNet*, Microsoft Cognitive Toolkit* and other tools are developing ONNX support.
This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the installation instructions.
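If the development tools are not installed yet, a common way to obtain Model Optimizer is through the openvino-dev package from PyPI; the command below assumes a Python environment with pip available:

pip install openvino-dev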
Convert an ONNX* Model¶
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX* model, run Model Optimizer with the path to the input model .onnx file:
mo --input_model <INPUT_MODEL>.onnx
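As a concrete illustration, the command below converts a hypothetical model.onnx file and writes the generated IR (.xml and .bin files) to a chosen output directory; model.onnx and ir_output are placeholder names used only for this example:

mo --input_model model.onnx --output_dir ir_output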
There are no ONNX*-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the General Conversion Parameters section on the Converting a Model to Intermediate Representation (IR) page.
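For example, a framework-agnostic parameter such as --input_shape can be passed in the same way as for models from other frameworks; the model name and shape below are placeholders for an image-classification-style input:

mo --input_model model.onnx --input_shape [1,3,224,224]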
Supported ONNX* Layers¶
Refer to Supported Framework Layers for the list of supported standard layers.
Additional Resources¶
See the Model Conversion Tutorials page for a set of tutorials providing step-by-step instructions for converting specific ONNX models.