Get Started with Sample and Demo Applications

Introduction

This section guides you through a simplified workflow for the Intel® Distribution of OpenVINO™ toolkit using code samples and demo applications. You will perform the following steps:

  1. Use the Model Downloader to download suitable models.

  2. Convert the models with the Model Optimizer.

  3. Download media files to run inference on.

  4. Run inference on the sample and see the results.

This guide assumes you have completed all installation and configuration steps. If you have not yet installed and configured the toolkit, do so before continuing.

Install OpenVINO Development Tools

To install OpenVINO Development Tools for working with Caffe* models, use the following command:

pip install openvino-dev[caffe]
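
The [caffe] extra installs the dependencies needed to convert Caffe* models; several framework extras can be combined in one command. The following is a minimal sketch of a clean setup in a Python virtual environment, followed by a quick check that the command-line tools are available (the environment name and extra list here are only examples):

python3 -m venv openvino_env             # create an isolated environment (name is arbitrary)
source openvino_env/bin/activate         # on Windows: openvino_env\Scripts\activate
pip install --upgrade pip
pip install openvino-dev[caffe]          # add more extras if needed, e.g. openvino-dev[caffe,onnx]
mo --help                                # Model Optimizer entry point should now be on PATH
omz_downloader --help                    # Model Downloader entry point should now be on PATH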

Build Samples and Demos

If you have already built the demos and samples, you can skip this section. The build will take about 5-10 minutes, depending on your system.

To build OpenVINO samples:

  • Linux: Go to the OpenVINO Samples page and see the “Build the Sample Applications on Linux*” section.

  • Windows: Go to the OpenVINO Samples page and see the “Build the Sample Applications on Microsoft Windows* OS” section.

  • macOS: Go to the OpenVINO Samples page and see the “Build the Sample Applications on macOS*” section.

To build OpenVINO demos:

  • Linux: Go to the Open Model Zoo Demos page and see the “Build the Demo Applications on Linux*” section.

  • Windows: Go to the Open Model Zoo Demos page and see the “Build the Demo Applications on Microsoft Windows* OS” section.

  • macOS: Go to the Open Model Zoo Demos page and see the “Build the Demo Applications on Linux*” section. You can use the requirements from “To build OpenVINO samples” above and adapt the Linux build steps for macOS*.
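
If you installed the toolkit with the installer rather than pip, each release ships helper build scripts for the samples and demos. The following is a rough Linux sketch only; the script locations are assumptions that vary between OpenVINO versions, so check your own <INSTALL_DIR>:

source <INSTALL_DIR>/setupvars.sh                           # set up environment variables first
<INSTALL_DIR>/samples/cpp/build_samples.sh                  # assumed location of the C++ samples build script
<INSTALL_DIR>/extras/open_model_zoo/demos/build_demos.sh    # assumed location of the Open Model Zoo demos build script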

Step 1: Download the Models

You must have a model that is specific to your inference task. Example model types are:

  • Classification (AlexNet, GoogleNet, SqueezeNet, others): Detects one type of element in an image

  • Object Detection (SSD, YOLO): Draws bounding boxes around multiple types of objects in an image

  • Custom: Often based on SSD

Options to find a model suitable for the OpenVINO™ toolkit:

  • Download public or Intel pre-trained models from the Open Model Zoo using the Model Downloader tool

  • Download from GitHub*, Caffe* Zoo, TensorFlow* Zoo, etc.

  • Train your own model with machine learning tools

This guide uses the OpenVINO™ Model Downloader to get pre-trained models. You can use one of the following commands to find a model:

  • List the models available in the downloader:

omz_info_dumper --print_all

  • Use grep to list models that have a specific name pattern:

omz_info_dumper --print_all | grep <model_name>

  • Use Model Downloader to download models. This guide uses <models_dir> and <model_name> as placeholders for the models directory and model name:

omz_downloader --name <model_name> --output_dir <models_dir>

  • Download the following models to run the Image Classification Sample:

   Model Name     Code Sample or Demo App
   ------------   ---------------------------
   googlenet-v1   Image Classification Sample
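
If you want to confirm the exact model name before downloading, you can filter the downloader's model list, for example:

omz_info_dumper --print_all | grep googlenet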

To download the GoogleNet v1 Caffe* model to the models folder:

Linux:

omz_downloader --name googlenet-v1 --output_dir ~/models

Windows:

omz_downloader --name googlenet-v1 --output_dir %USERPROFILE%\Documents\models

macOS:

omz_downloader --name googlenet-v1 --output_dir ~/models

After the download completes, your screen will look similar to the following and show the paths of the downloaded files:

Linux:

###############|| Downloading models ||###############

========= Downloading /home/username/models/public/googlenet-v1/googlenet-v1.prototxt

========= Downloading /home/username/models/public/googlenet-v1/googlenet-v1.caffemodel
... 100%, 4834 KB, 3157 KB/s, 1 seconds passed

###############|| Post processing ||###############

========= Replacing text in /home/username/models/public/googlenet-v1/googlenet-v1.prototxt =========

Windows:

################|| Downloading models ||################

========== Downloading C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.prototxt
... 100%, 9 KB, ? KB/s, 0 seconds passed

========== Downloading C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel
... 100%, 4834 KB, 571 KB/s, 8 seconds passed

################|| Post-processing ||################

========== Replacing text in C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.prototxt

macOS:

###############|| Downloading models ||###############

========= Downloading /Users/username/models/public/googlenet-v1/googlenet-v1.prototxt
... 100%, 9 KB, 44058 KB/s, 0 seconds passed

========= Downloading /Users/username/models/public/googlenet-v1/googlenet-v1.caffemodel
... 100%, 4834 KB, 4877 KB/s, 0 seconds passed

###############|| Post processing ||###############

========= Replacing text in /Users/username/models/public/googlenet-v1/googlenet-v1.prototxt =========
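
You can also confirm that both model files are in place by listing the download directory; for the Linux paths shown above:

ls ~/models/public/googlenet-v1
# should include googlenet-v1.prototxt and googlenet-v1.caffemodel, as listed in the download output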

Step 2: Convert the Model with Model Optimizer

In this step, you run your trained model through the Model Optimizer to convert it to the IR (Intermediate Representation) format. For most model types, this conversion is required before the model can be used with the OpenVINO Runtime.

Models in the IR format always include an .xml and .bin file and may also include other files such as .json or .mapping. Make sure you have these files together in a single directory so the OpenVINO Runtime can find them.

REQUIRED: model_name.xml
REQUIRED: model_name.bin
OPTIONAL: model_name.json, model_name.mapping, etc.

This tutorial uses the public GoogleNet v1 Caffe* model to run the Image Classification Sample. See the example in the Step 1: Download the Models section of this page to learn how to download this model.

The googlenet-v1 model is downloaded in the Caffe* format. You must use the Model Optimizer to convert the model to IR.

Create an <ir_dir> directory to contain the model’s Intermediate Representation (IR):

Linux:

mkdir ~/ir

Windows:

mkdir %USERPROFILE%\Documents\ir

macOS:

mkdir ~/ir

The OpenVINO Runtime can run inference on models whose floating-point weights are compressed to FP16. To generate an IR with a specific precision, run the Model Optimizer with the appropriate --data_type option.

Generic Model Optimizer script:

mo --input_model <model_dir>/<model_file> --data_type <model_precision> --output_dir <ir_dir>

IR files produced by the script are written to the <ir_dir> directory.

The same command with the placeholders filled in for the googlenet-v1 model and FP16 precision:

Linux:

mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --data_type FP16 --output_dir ~/ir

Windows:

mo --input_model %USERPROFILE%\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel --data_type FP16 --output_dir %USERPROFILE%\Documents\ir

macOS:

mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --data_type FP16 --output_dir ~/ir
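
After conversion, the IR files described at the start of this step should appear in the output directory. A quick check on Linux:

ls ~/ir
# should include googlenet-v1.xml and googlenet-v1.bin (a .mapping file may also be present, depending on the version)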

Step 3: Download a Video or Still Photo as Media

Many sources are available from which you can download video or image files to use with the code samples and demo applications.

As an alternative, the Intel® Distribution of OpenVINO™ toolkit includes several sample images and videos that you can use for running the code samples and demo applications.
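
If you download your own image, note where you save it; the commands in Step 4 assume an input image at ~/Downloads/dog.bmp (or %USERPROFILE%\Downloads\dog.bmp on Windows). A minimal sketch for fetching a file from a URL of your choice (the URL below is a placeholder, not part of the toolkit):

curl -L -o ~/Downloads/dog.bmp <url_to_a_bmp_image>
# <url_to_a_bmp_image> is a placeholder; use any suitable test image you have the rights to use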

Step 4: Run Inference on the Sample

Run the Image Classification Code Sample

To run the Image Classification code sample with an input image using the IR model:

  1. Set up the OpenVINO environment variables:

    Linux:
    source <INSTALL_DIR>/setupvars.sh

    Windows:
    <INSTALL_DIR>\setupvars.bat

    macOS:
    source <INSTALL_DIR>/setupvars.sh
    
  2. Go to the code samples release directory created when you built the samples earlier:

    Linux:
    cd ~/inference_engine_cpp_samples_build/intel64/Release

    Windows:
    cd %USERPROFILE%\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release

    macOS:
    cd ~/inference_engine_cpp_samples_build/intel64/Release
    
  3. Run the code sample executable, specifying the input media file, the IR for your model, and a target device for performing inference:

Linux:

classification_sample_async -i <path_to_media> -m <path_to_model> -d <target_device>

Windows:

classification_sample_async.exe -i <path_to_media> -m <path_to_model> -d <target_device>

macOS:

classification_sample_async -i <path_to_media> -m <path_to_model> -d <target_device>
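
Most samples also print their usage information when run with the -h option, which is a convenient way to review the available arguments before filling in real paths (shown here from the Linux build directory used in the previous step):

./classification_sample_async -h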

The following commands run the Image Classification Code Sample with the dog.bmp file as the input image and the googlenet-v1 model in IR format from the ir directory, on different hardware devices:

CPU:

Linux:

./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU

Windows:

.\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d CPU

macOS:

./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU

GPU:

Note

Running inference on Intel® Processor Graphics (GPU) requires additional hardware configuration steps, as described in the installation and configuration steps referenced at the beginning of this guide. Running on GPU is not supported on macOS*.

Linux:

./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d GPU

Windows:

.\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d GPU

MYRIAD:

Note

Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires additional hardware configuration steps, as described in the installation and configuration steps referenced at the beginning of this guide.

Linux:

./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d MYRIAD

Windows:

.\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d MYRIAD

macOS:

./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d MYRIAD

When the sample application completes, it displays the label and confidence for the top 10 categories. Below is sample output with inference results on CPU:

Top 10 results:

Image dog.bmp

   classid probability label
   ------- ----------- -----
   156     0.6875963   Blenheim spaniel
   215     0.0868125   Brittany spaniel
   218     0.0784114   Welsh springer spaniel
   212     0.0597296   English setter
   217     0.0212105   English springer, English springer spaniel
   219     0.0194193   cocker spaniel, English cocker spaniel, cocker
   247     0.0086272   Saint Bernard, St Bernard
   157     0.0058511   papillon
   216     0.0057589   clumber, clumber spaniel
   154     0.0052615   Pekinese, Pekingese, Peke

Other Demos/Samples

For more samples and demos, visit the Samples and Demos pages linked below. You can review samples and demos by complexity or by use case, run the relevant application, and adapt the code for your own use.

Samples

Demos