Integrate OpenVINO™ with Your Application
Note
Before you start using OpenVINO™ Runtime, make sure you have set all environment variables during the installation. To do so, follow the instructions from the Set the Environment Variables section in the installation guides:
To build an open source version, use the OpenVINO™ Runtime Build Instructions.
Use OpenVINO™ Runtime API to Implement Inference Pipeline
This section provides step-by-step instructions to implement a typical inference pipeline with the OpenVINO™ Runtime C++ or Python API:
Step 1. Create OpenVINO™ Runtime Core
Include the following files to work with OpenVINO™ Runtime:
#include <openvino/openvino.hpp>
import openvino.runtime as ov
Use the following code to create an OpenVINO™ Core object, which manages available devices and reads model objects:
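ov::Core core;
core = ov.Core()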
Step 2. Compile the Model
The ov::CompiledModel class represents a device-specific compiled model. ov::CompiledModel allows you to get information about input and output ports by a tensor name or index. This approach is aligned with the majority of frameworks.
Compile the model for a specific device using ov::Core::compile_model():
ov::CompiledModel compiled_model = core.compile_model("model.xml", "AUTO");
ov::CompiledModel compiled_model = core.compile_model("model.onnx", "AUTO");
ov::CompiledModel compiled_model = core.compile_model("model.pdmodel", "AUTO");
auto create_model = []() {
    std::shared_ptr<ov::Model> model;
    // To construct a model, please follow
    // https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_Representation.html
    return model;
};
std::shared_ptr<ov::Model> model = create_model();
compiled_model = core.compile_model(model, "AUTO");
compiled_model = core.compile_model("model.xml", "AUTO")
compiled_model = core.compile_model("model.onnx", "AUTO")
compiled_model = core.compile_model("model.pdmodel", "AUTO")
def create_model():
    # This example shows how to create an ov.Model
    #
    # To construct a model, please follow
    # https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_Representation.html
    data = ov.opset8.parameter([3, 1, 2], ov.Type.f32)
    res = ov.opset8.result(data)
    return ov.Model([res], [data], "model")
model = create_model()
compiled_model = core.compile_model(model, "AUTO")
The ov::Model object represents any model inside the OpenVINO™ Runtime. For more details, please read the article about OpenVINO™ Model representation.
The code above creates a compiled model associated with a single hardware device from the model object. It is possible to create as many compiled models as needed and use them simultaneously (up to the limits of the hardware resources). To learn how to change the device configuration, read the Query device properties article.
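As mentioned above, the compiled model gives access to its input and output ports by index or by tensor name. A minimal sketch of inspecting them (printing requires <iostream>; tensor names and static shapes depend on your model):
// Query ports of the compiled model by index; compiled_model.output("name") selects a port by tensor name
auto input_port = compiled_model.input(0);
auto output_port = compiled_model.output(0);
std::cout << "Input type: " << input_port.get_element_type()
          << ", shape: " << input_port.get_shape() << std::endl;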
Step 3. Create an Inference Request
The ov::InferRequest class provides methods for model inference in OpenVINO™ Runtime. Create an infer request using the following code (see the InferRequest detailed documentation for more details):
ov::InferRequest infer_request = compiled_model.create_infer_request();
infer_request = compiled_model.create_infer_request()
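A single compiled model can also create several infer requests, which is useful when you want to run multiple inferences in parallel (for example, with the asynchronous API shown in Step 5). A minimal sketch, assuming four requests are enough for your pipeline:
// Create a pool of infer requests from the same compiled model (requires <vector>; the count 4 is arbitrary).
std::vector<ov::InferRequest> requests;
for (size_t i = 0; i < 4; ++i) {
    requests.push_back(compiled_model.create_infer_request());
}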
Step 4. Set Inputs
You can use external memory to create an ov::Tensor and use the ov::InferRequest::set_input_tensor method to put this tensor on the device:
// Get input port for model with one input
auto input_port = compiled_model.input();
// Create tensor from external memory
ov::Tensor input_tensor(input_port.get_element_type(), input_port.get_shape(), memory_ptr);
// Set input tensor for model with one input
infer_request.set_input_tensor(input_tensor);
# Create tensor from external memory
input_tensor = ov.Tensor(array=memory, shared_memory=True)
# Set input tensor for model with one input
infer_request.set_input_tensor(input_tensor)
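In the snippets above, memory_ptr (C++) and memory (Python) stand for memory that your application already owns, such as a preprocessed image buffer. A minimal C++ sketch of providing such a buffer, assuming the model expects f32 input (the zero-filled vector is only a placeholder for real data):
// Placeholder for application-owned memory: a zero-filled f32 buffer matching the input shape (requires <vector>).
std::vector<float> input_data(ov::shape_size(input_port.get_shape()), 0.0f);
float* memory_ptr = input_data.data();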
Step 5. Start Inference
OpenVINO™ Runtime supports inference in either synchronous or asynchronous mode. Using the Async API can improve an application’s overall frame rate: instead of waiting for inference to complete, the app can keep working on the host while the accelerator is busy. You can use ov::InferRequest::start_async
to start model inference in the asynchronous mode and call ov::InferRequest::wait
to wait for the inference results:
infer_request.start_async();
infer_request.wait();
infer_request.start_async()
infer_request.wait()
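If your application does not need to overlap inference with other work, you can run the request synchronously instead; ov::InferRequest::infer blocks until the results are ready:
// Synchronous inference: blocks until the inference results are ready.
infer_request.infer();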
This section demonstrates a simple pipeline. To get more information about other ways to perform inference, read the dedicated “Run inference” section of the OpenVINO™ Inference Request documentation.
Step 6. Process the Inference Results
Go over the output tensors and process the inference results.
// Get output tensor by tensor name
auto output = infer_request.get_tensor("tensor_name");
const float *output_buffer = output.data<const float>();
/* output_buffer[] - accessing output tensor data */
# Get output tensor for model with one output
output = infer_request.get_output_tensor()
output_buffer = output.data
# output_buffer[] - accessing output tensor data
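What you do with the output data depends on the model. For example, for a classification model with a single flat f32 output, you might pick the highest-scoring class; a minimal C++ sketch (requires <algorithm> and <iterator>):
// Find the index of the highest value in the output buffer (e.g. the top-1 class id).
size_t count = output.get_size();  // total number of elements in the output tensor
auto top1 = std::distance(output_buffer, std::max_element(output_buffer, output_buffer + count));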
Link and Build Your C++ Application with OpenVINO™ Runtime
The example uses CMake for project configuration.
Create a structure for the project:
project/
├── CMakeLists.txt  - CMake file to build
├── ...             - Additional folders like includes/
└── src/            - source folder
    └── main.cpp
build/              - build directory
...
Include OpenVINO™ Runtime libraries in project/CMakeLists.txt:
cmake_minimum_required(VERSION 3.10)
set(CMAKE_CXX_STANDARD 11)

set(TARGET_NAME "openvino_app")  # placeholder target name, replace with your own

find_package(OpenVINO REQUIRED)

add_executable(${TARGET_NAME} src/main.cpp)

target_link_libraries(${TARGET_NAME} PRIVATE openvino::runtime)
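For reference, a minimal src/main.cpp that ties the steps above together might look like this (the model path and device name are placeholders, and error handling is omitted):
#include <openvino/openvino.hpp>

int main() {
    // Step 1: create the Core; Step 2: compile the model ("model.xml" and "AUTO" are placeholders).
    ov::Core core;
    ov::CompiledModel compiled_model = core.compile_model("model.xml", "AUTO");

    // Steps 3-5: create an infer request and run inference synchronously.
    ov::InferRequest infer_request = compiled_model.create_infer_request();
    infer_request.infer();

    // Step 6: read the output tensor (for a model with a single output).
    ov::Tensor output = infer_request.get_output_tensor();
    return 0;
}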
To build your project using CMake with the default build tools currently available on your machine, execute the following commands:
Note
Make sure you set the environment variables first by running <INSTALL_DIR>/setupvars.sh (or setupvars.bat on Windows). Otherwise the OpenVINO_DIR variable won’t be configured properly and the find_package call will fail.
cd build/
cmake ../project
cmake --build .
You can also specify additional build options (e.g. to build a CMake project on Windows with specific build tools). Please refer to the CMake page for details.
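For example, on Windows you could select a Visual Studio generator and build configuration explicitly (the generator name below assumes Visual Studio 2022 and is only illustrative):
cmake -G "Visual Studio 17 2022" -A x64 ../project
cmake --build . --config Release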
Run Your Application
Congratulations, you have made your first application with the OpenVINO™ toolkit. You can now run it.
Additional Resources
See the OpenVINO Samples page or the Open Model Zoo Demos page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.