Visualize Model Output

DL Workbench enables you to visually estimate how well a model recognizes images by testing it on sample images. This functionality considerably enhances the analysis of inference results: you can not only estimate performance but also visually check whether the model works correctly and whether its accuracy is tolerable for client applications.

To get a visual representation of the output of your model, go to the Perform tab on the Projects page and open the Visualize Output tab.

../_images/visualize_tab.png

There are three ways to visualize model output in the DL Workbench:

Model Predictions

Note

The feature is available for models trained for the following tasks:

  • Classification

  • Object-Detection

  • Instance-Segmentation

  • Semantic-Segmentation

  • Super-Resolution

  • Style-Transfer

  • Image-Inpainting

Select an image from your system or drag and drop one directly. Click Test, and the model predictions appear on the right.

Classification Models

Predictions for a classification model are shown with their confidence levels, sorted from the highest confidence to the lowest.

../_images/test_02.png
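Conceptually, this ranking is just a softmax over the model's raw outputs followed by a descending sort. A minimal sketch in Python (the labels and scores below are made-up examples, not the DL Workbench API):

import numpy as np

def top_predictions(logits: np.ndarray, labels: list, k: int = 5):
    """Return the k most confident (label, confidence) pairs, sorted descending."""
    probs = np.exp(logits - logits.max())        # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1][:k]          # indices of the top-k confidences
    return [(labels[i], float(probs[i])) for i in order]

# Example with a three-class output
print(top_predictions(np.array([2.0, 0.5, 1.0]), ["cat", "dog", "bus"], k=3))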

Object-Detection Models

With object-detection models, you can visualize bounding boxes by hovering your mouse over a class prediction on the right.

../_images/test_04.png

Use the Threshold drop-down list to filter classes based on the confidence score.
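Conceptually, the threshold simply hides detections whose confidence falls below the chosen value. A minimal sketch, with made-up detection tuples rather than the DL Workbench format:

detections = [
    ("car",    0.92, (34, 50, 180, 140)),   # (class, confidence, bounding box)
    ("person", 0.48, (200, 40, 260, 190)),
    ("dog",    0.15, (10, 120, 90, 200)),
]

threshold = 0.5
visible = [d for d in detections if d[1] >= threshold]
print(visible)   # only the "car" detection passes at threshold 0.5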

Instance-Segmentation Models

With instance-segmentation models, you can visualize masks by hovering your mouse over a class prediction on the right.

../_images/test_05.png

Use the Threshold drop-down list to filter classes based on the confidence score.

Semantic-Segmentation Models

For semantic-segmentation models, the DL Workbench provides areas of categorized objects, which enables you to see whether your model recognized all object types, like the buses in this image:

../_images/test_06.png

Or the road in the same image:

../_images/test_07.png
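Under the hood, such a visualization maps each pixel's predicted class ID to a color. A minimal sketch, assuming an illustrative class map and palette rather than DL Workbench internals:

import numpy as np

class_map = np.array([[0, 0, 1],
                      [2, 2, 1],
                      [2, 2, 2]])            # per-pixel class IDs (e.g. from argmax)

palette = np.array([[0, 0, 0],               # 0: background -> black
                    [255, 0, 0],             # 1: bus        -> red
                    [0, 255, 0]])            # 2: road       -> green

overlay = palette[class_map]                 # (H, W, 3) colored mask
print(overlay.shape)                         # (3, 3, 3)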

Super-Resolution Models

Assess the performance of your super-resolution model by looking at the higher-resolution image on the right:

../_images/test_08.png

Style-Transfer Models

For style-transfer models, see how the style your model was trained for is applied to a sample image:

../_images/test_09.png

Image-Inpainting Models

With image-inpainting models, select areas that you want to inpaint on your test image by drawing rectangles.

../_images/test_12.png

In this example, the goal is to conceal license plates. Click Test and see the result on the right.

../_images/test_10.png
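Conceptually, the rectangles you draw are converted into a binary mask that tells the model which pixels to fill in. A minimal sketch, with made-up coordinates:

import numpy as np

height, width = 480, 640
mask = np.zeros((height, width), dtype=np.uint8)

# Rectangles as (x_min, y_min, x_max, y_max), e.g. around license plates
rectangles = [(120, 300, 220, 340), (400, 310, 500, 350)]
for x0, y0, x1, y1 in rectangles:
    mask[y0:y1, x0:x1] = 1                   # 1 marks pixels to be inpainted

print(mask.sum())                            # number of masked pixels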

Model Predictions with Importance Map

Note

The feature is available for models trained for the Classification use case.

Although deep neural models are widely used to automate data processing, their decision-making process is mostly unknown and difficult to explain. Explainable AI helps you understand and interpret model predictions.

The Randomized Input Sampling for Explanation (RISE) algorithm can explain why a black-box model makes classification decisions by generating a pixel importance map for each class. The algorithm tests the model with randomly masked versions of the input image and uses the corresponding outputs to estimate the importance of each pixel.
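A minimal sketch of this procedure, assuming a hypothetical model(image) callable that returns a vector of class probabilities (the full RISE algorithm also shifts and bilinearly upsamples the masks, which is omitted here for brevity):

import numpy as np

def rise_saliency(model, image, class_id, n_masks=1000, grid=7, p_keep=0.5):
    """Estimate pixel importance for one class by random masking."""
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        # Coarse random binary grid, upsampled to image size (nearest neighbor)
        coarse = (np.random.rand(grid, grid) < p_keep).astype(float)
        mask = coarse.repeat(h // grid + 1, 0).repeat(w // grid + 1, 1)[:h, :w]
        # Query the black box on the masked image; weight the mask by the score
        score = model(image * mask[..., None])[class_id]
        saliency += score * mask
    return saliency / n_masks        # high values = pixels important for class_id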

Select the Model Predictions with Importance Map visualization type, upload an image, and click the Visualize button. A progress bar appears on the right.

../_images/visualization_rise.png

In the images below, the red area indicates the pixels most important for class #269 (polar bear). The blue area contains pixels less important for this prediction.

../_images/polar_bear_detected.png

Select another prediction to show the heatmap for class #143 (crane).

../_images/crane_detected.png

Learn more about the RISE algorithm in this paper.

Compare Optimized and Parent Model Predictions

Note

The feature is available for optimized models.

You can compare predictions of an optimized model with predictions of its parent model, which serve as references. This helps you find the validation dataset images on which the model's predictions changed after optimization. Learn more on the Create Accuracy Report page.

../_images/visualize_parent_od.png
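Conceptually, the comparison runs both models over the same validation images and flags those where the predictions diverge. A minimal sketch for the classification case, with hypothetical parent_model and optimized_model callables:

import numpy as np

def diverging_images(parent_model, optimized_model, images):
    """Return indices of images where the two models disagree on the top class."""
    diffs = []
    for i, img in enumerate(images):
        if np.argmax(parent_model(img)) != np.argmax(optimized_model(img)):
            diffs.append(i)
    return diffs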

All images were taken from the ImageNet, Pascal Visual Object Classes, and Common Objects in Context datasets for demonstration purposes only.

See Also