Introduction to OpenVINO™ Deployment
Once you have a model that meets both OpenVINO™'s requirements and your own, you can choose among several ways of deploying it with your application:
Deploy your application locally, running inference directly in OpenVINO Runtime.
Deploy your model with OpenVINO Model Server.
Deploy your application for the TensorFlow framework with OpenVINO Integration.
Note
Running inference in OpenVINO Runtime is the most basic form of deployment. Before moving forward, make sure you know how to create a proper Inference configuration.
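For the local option, a minimal sketch of running inference with the OpenVINO Runtime Python API is shown below. The model path ("model.xml"), the "CPU" device, and the assumption of a single static input are illustrative choices, not part of this guide; substitute your own model and target device.

```python
# A minimal sketch of local deployment: inference with OpenVINO Runtime.
# Assumes a converted model at "model.xml" (hypothetical path) with one
# static input; replace the path and device with your own.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # read the converted model
compiled = core.compile_model(model, "CPU")  # compile for a target device

# Build dummy data matching the model's first input shape
input_shape = tuple(compiled.input(0).shape)
data = np.random.rand(*input_shape).astype(np.float32)

# Run inference and fetch the first output
result = compiled([data])[compiled.output(0)]
print("Output shape:", result.shape)
```

The other two options move this inference step out of your application code: OpenVINO Model Server serves the model over network endpoints, while the TensorFlow integration keeps inference inside an existing TensorFlow workflow.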