# Working with devices
The OpenVINO Runtime provides capabilities to infer deep learning models on the following device types with corresponding plugins:

| Plugin | Device types |
|---|---|
| CPU | Intel® CPUs |
| GPU | Intel® integrated and discrete GPUs |
| GNA | Intel® Gaussian & Neural Accelerator |
| Arm® CPU | Arm®-based CPUs |
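As a starting point, a minimal sketch using the OpenVINO Python API shows how to enumerate the device plugins available on a machine and compile a model for a specific one; the `model.xml` path is a placeholder for your own IR model.

```python
from openvino.runtime import Core

core = Core()

# Each entry is a device plugin name such as "CPU" or "GPU".
print("Available devices:", core.available_devices)

model = core.read_model("model.xml")               # placeholder IR model
compiled_model = core.compile_model(model, "CPU")  # target a specific plugin
```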
OpenVINO Runtime also offers several execution modes which work on top of other devices:
| Capability | Description |
|---|---|
| Multi-Device execution | Multi-Device enables simultaneous inference of the same model on several devices in parallel. |
| Auto-Device selection | Auto-Device selection enables selecting an Intel device for inference automatically. |
| Heterogeneous execution | Heterogeneous execution enables automatic inference splitting between several devices (for example, if a device does not support certain operations). |
| Automatic batching | The Auto-Batching plugin enables batching (on top of the specified device) that is completely transparent to the application. |
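Each of these execution modes is selected by passing a "virtual" device name to `compile_model`, as in the sketch below. It assumes a `Core` instance and a placeholder `model.xml` as before; which combinations actually work depends on the devices present on your machine.

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR model

# Auto-Device selection: OpenVINO picks the best available device.
compiled_auto = core.compile_model(model, "AUTO")

# Multi-Device execution: run the same model on CPU and GPU in parallel.
compiled_multi = core.compile_model(model, "MULTI:CPU,GPU")

# Heterogeneous execution: prefer GPU, fall back to CPU for unsupported ops.
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")

# Automatic batching on top of GPU, transparent to the application.
compiled_batch = core.compile_model(model, "BATCH:GPU")
```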
Devices similar to the ones used for benchmarking can be accessed using Intel® DevCloud for the Edge, a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit.
## Feature Support Matrix
The table below demonstrates support of key features by OpenVINO device plugins.
| Capability | CPU | GPU | GNA | Arm® CPU |
|---|---|---|---|---|
| Heterogeneous execution | Yes | Yes | No | Yes |
| Multi-device execution | Yes | Yes | Partial | Yes |
| Automatic batching | No | Yes | No | No |
| Multi-stream execution | Yes | Yes | No | Yes |
| Models caching | Yes | Partial | Yes | No |
| Dynamic shapes | Yes | Partial | No | No |
| Import/Export | Yes | No | Yes | No |
| Preprocessing acceleration | Yes | Yes | No | Partial |
| Stateful models | Yes | No | Yes | No |
| Extensibility | Yes | Yes | No | No |
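The static matrix above can be complemented by querying plugins at runtime. A minimal sketch, assuming the Python API: the property names below are standard OpenVINO properties, and the `model_cache` directory name is a placeholder.

```python
from openvino.runtime import Core

core = Core()

for device in core.available_devices:
    # Human-readable name reported by each device plugin.
    print(device, "->", core.get_property(device, "FULL_DEVICE_NAME"))

# Models caching (see the matrix): enable a cache directory so supporting
# plugins can reuse compiled models on subsequent runs.
core.set_property({"CACHE_DIR": "model_cache"})
```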
For more details on plugin-specific feature limitations, refer to the corresponding plugin pages.