mask_rcnn_inception_resnet_v2_atrous_coco

Use Case and High-Level Description

Mask R-CNN Inception ResNet V2 Atrous is trained on the Common Objects in Context (COCO) dataset and is used for object instance segmentation. For details, see the original Mask R-CNN paper.

Specification
| Metric           | Value                 |
|------------------|-----------------------|
| Type             | Instance segmentation |
| GFlops           | 675.314               |
| MParams          | 92.368                |
| Source framework | TensorFlow*           |
Accuracy

| Metric                   | Value  |
|--------------------------|--------|
| coco_orig_precision      | 39.86% |
| coco_orig_segm_precision | 35.36% |
Input

Original Model

Image, name: image_tensor, shape: 1, 800, 1365, 3, format: B, H, W, C, where:

- B - batch size
- H - image height
- W - image width
- C - number of channels

Expected color order: RGB.
Converted Model

1. Image, name: image_tensor, shape: 1, 800, 1365, 3, format: B, H, W, C, where:

   - B - batch size
   - H - image height
   - W - image width
   - C - number of channels

   Expected color order: BGR.

2. Information of input image size, name: image_info, shape: 1, 3, format: B, C, where:

   - B - batch size
   - C - vector of 3 values in format H, W, S, where H is the image height, W is the image width, and S is the image scale factor (usually 1)
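A minimal preprocessing sketch for these two converted-model inputs is shown below. The input names, the 1, 800, 1365, 3 shape, the BGR color order, and the H, W, S layout of image_info come from the description above; the image path and the use of OpenCV/NumPy are assumptions made only for illustration.

```python
import cv2
import numpy as np

# Hypothetical input image path, used only for illustration.
frame = cv2.imread("input.jpg")  # OpenCV returns BGR, which the converted model expects

# Resize to the fixed network resolution: height 800, width 1365.
h, w = 800, 1365
resized = cv2.resize(frame, (w, h))

# image_tensor: shape 1, 800, 1365, 3 in B, H, W, C layout.
image_tensor = np.expand_dims(resized, axis=0).astype(np.float32)

# image_info: shape 1, 3 holding [H, W, S], with the scale factor S usually 1.
image_info = np.array([[h, w, 1]], dtype=np.float32)
```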
Output

Original Model

1. Classifier, name: detection_classes. Contains predicted bounding box classes in the range [1, 91]. The model was trained on a Common Objects in Context (COCO) dataset version with 90 categories of objects; class 0 is reserved for the background.
2. Probability, name: detection_scores. Contains the probability of each detected bounding box.
3. Detection box, name: detection_boxes. Contains detection box coordinates in the format [y_min, x_min, y_max, x_max], where (x_min, y_min) are the coordinates of the top left corner and (x_max, y_max) are the coordinates of the bottom right corner. Coordinates are rescaled to the input image size.
4. Detections number, name: num_detections. Contains the number of predicted detection boxes.
5. Segmentation mask, name: detection_masks. Contains segmentation heatmaps of detected objects for all classes for every output bounding box.
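As a rough illustration of how these original-model outputs fit together, the sketch below keeps only detections above a confidence threshold. The output names and layouts follow the list above; the outputs dictionary of NumPy arrays (with the batch dimension already removed) and the threshold value are assumptions.

```python
import numpy as np

def filter_detections(outputs, score_threshold=0.5):
    """Keep detections whose score exceeds the threshold.

    `outputs` is assumed to be a dict of NumPy arrays keyed by the
    output names listed above, with the batch dimension removed.
    """
    n = int(outputs["num_detections"])           # number of valid detections
    scores = outputs["detection_scores"][:n]     # per-box probabilities
    keep = scores > score_threshold

    return {
        "classes": outputs["detection_classes"][:n][keep].astype(int),  # class IDs in [1, 91]
        "scores": scores[keep],
        "boxes": outputs["detection_boxes"][:n][keep],   # [y_min, x_min, y_max, x_max] in pixels
        "masks": outputs["detection_masks"][:n][keep],   # per-box segmentation heatmaps
    }
```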
Converted Model

1. The array of summary detection information, name: reshape_do_2d, shape: 100, 7 in the format N, 7, where N is the number of detected bounding boxes. For each detection, the description has the format [image_id, label, conf, x_min, y_min, x_max, y_max], where:

   - image_id - ID of the image in the batch
   - label - predicted class ID
   - conf - confidence for the predicted class
   - (x_min, y_min) - coordinates of the top left bounding box corner (coordinates are stored in normalized format, in the range [0, 1])
   - (x_max, y_max) - coordinates of the bottom right bounding box corner (coordinates are stored in normalized format, in the range [0, 1])

2. Segmentation heatmaps for all classes for every output bounding box, name: masks, shape: 100, 90, 33, 33 in the format N, 90, 33, 33, where N is the number of detected masks and 90 is the number of classes (the background class excluded).
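One possible way to decode these two converted-model outputs into pixel-space boxes and binary instance masks is sketched below. The output shapes and the normalized box format follow the description above; the confidence and mask thresholds, the target image size, and the mapping of a 1-based label to mask channel label - 1 are assumptions made for illustration.

```python
import cv2
import numpy as np

def decode_outputs(detections, masks, image_h, image_w,
                   conf_threshold=0.5, mask_threshold=0.5):
    """Decode the converted-model outputs.

    detections: array of shape (100, 7) from reshape_do_2d
    masks:      array of shape (100, 90, 33, 33) from masks
    """
    results = []
    for det, per_class_masks in zip(detections, masks):
        image_id, label, conf, x_min, y_min, x_max, y_max = det
        if conf < conf_threshold:
            continue

        # Denormalize box coordinates from [0, 1] to pixels.
        x0, y0 = int(x_min * image_w), int(y_min * image_h)
        x1, y1 = int(x_max * image_w), int(y_max * image_h)

        # Pick the 33x33 heatmap for the predicted class (assumed label - 1,
        # since the 90 mask channels exclude the background class).
        heatmap = per_class_masks[int(label) - 1]

        # Resize the heatmap to the box size and binarize it.
        box_w, box_h = max(x1 - x0, 1), max(y1 - y0, 1)
        mask = cv2.resize(heatmap, (box_w, box_h)) > mask_threshold

        results.append({"label": int(label), "conf": float(conf),
                        "box": (x0, y0, x1, y1), "mask": mask})
    return results
```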
Download a Model and Convert it into OpenVINO™ IR Format

You can download models and, if necessary, convert them into OpenVINO™ IR format using the Model Downloader and other automation tools, as shown in the examples below.
An example of using the Model Downloader:
omz_downloader --name <model_name>
An example of using the Model Converter:
omz_converter --name <model_name>
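For reference, a minimal sketch of running the converted IR with the OpenVINO Runtime Python API follows. The input and output names match the descriptions above, while the IR path, the device choice, and the test image are assumptions; it is only a starting point, not the official demo code.

```python
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
# Hypothetical path to the IR produced by omz_converter.
model = core.read_model("public/mask_rcnn_inception_resnet_v2_atrous_coco/FP32/"
                        "mask_rcnn_inception_resnet_v2_atrous_coco.xml")
compiled = core.compile_model(model, "CPU")

# Prepare the two inputs described in the Input section (BGR, height 800, width 1365).
frame = cv2.imread("input.jpg")
image_tensor = np.expand_dims(cv2.resize(frame, (1365, 800)), axis=0).astype(np.float32)
image_info = np.array([[800, 1365, 1]], dtype=np.float32)

results = compiled({"image_tensor": image_tensor, "image_info": image_info})
detections = results[compiled.output("reshape_do_2d")]  # shape (100, 7)
masks = results[compiled.output("masks")]               # shape (100, 90, 33, 33)
```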
Demo usage
The model can be used in the following demos provided by the Open Model Zoo to show its capabilities:
Legal Information

The original model is distributed under the Apache License, Version 2.0. A copy of the license is provided in <omz_dir>/models/public/licenses/APACHE-2.0-TF-Models.txt.