Quantization Aware Training with NNCF, using PyTorch framework¶
This notebook is based on ImageNet training in PyTorch.
The goal of this notebook is to demonstrate how to use the Neural Network Compression Framework (NNCF) 8-bit quantization to optimize a PyTorch model for inference with the OpenVINO Toolkit. The optimization process contains the following steps:
* Transform the original FP32 model to INT8
* Use fine-tuning to restore the accuracy
* Export the optimized and original models to ONNX and then to OpenVINO IR
* Measure and compare the performance of the models
For more advanced usage, please refer to these examples.
We selected the ResNet-18 model with the Tiny ImageNet-200 dataset. ResNet-18 is the version of ResNet models that contains the fewest layers (18). Tiny ImageNet-200 is a subset of the larger ImageNet dataset with smaller images. The dataset will be downloaded in the notebook. Using the smaller model and dataset will speed up training and download time. To see other ResNet models, visit PyTorch hub.
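The snippet below is only an illustration (it is not used elsewhere in this notebook) of how other ResNet variants could be loaded from torchvision for comparison:
# Illustration only: larger ResNet variants from torchvision follow the same API
# as resnet18 and could be substituted, at the cost of longer training times.
import torchvision.models as models

resnet34 = models.resnet34(pretrained=False)
resnet50 = models.resnet50(pretrained=False)
print(sum(p.numel() for p in resnet50.parameters()), "parameters in ResNet-50")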
NOTE: This notebook requires a C++ compiler.
Imports and Settings¶
On Windows, add the required C++ directories to the system’s path.
Import NNCF and all auxiliary packages from your Python code. Set a name for the model, and the image width and height that will be used for the network. Also define the paths where the PyTorch, ONNX and OpenVINO IR versions of the models will be stored.
NOTE: All NNCF logging messages below ERROR level (INFO and WARNING) are disabled to simplify the tutorial. For production use, it is recommended to enable logging by removing the set_log_level(logging.ERROR) call.
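For reference, logging can be restored later in a session with a call like the following (a minimal sketch using the same import path as the settings cell below):
# Minimal sketch: restore default NNCF log verbosity (useful for production runs).
import logging

from nncf.common.utils.logger import set_log_level

set_log_level(logging.INFO)  # show INFO and WARNING messages again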
# On Windows, add the directory that contains cl.exe to the PATH to enable PyTorch to find the
# required C++ tools. This code assumes that Visual Studio 2019 is installed in the default
# directory. If you have a different C++ compiler, please add the correct path to os.environ["PATH"]
# directly. Note that the C++ Redistributable is not enough to run this notebook.
# Adding the path to os.environ["LIB"] is not always required - it depends on the system's configuration
import sys

if sys.platform == "win32":
    import distutils.command.build_ext
    import distutils.core  # imported explicitly so distutils.core.Distribution is always available
    import os
    from pathlib import Path

    VS_INSTALL_DIR = r"C:/Program Files (x86)/Microsoft Visual Studio"
    cl_paths = sorted(list(Path(VS_INSTALL_DIR).glob("**/Hostx86/x64/cl.exe")))
    if len(cl_paths) == 0:
        raise ValueError(
            "Cannot find Visual Studio. This notebook requires a C++ compiler. If you installed "
            "a C++ compiler, please add the directory that contains cl.exe to `os.environ['PATH']`."
        )
    else:
        # If multiple versions of MSVC are installed, get the most recent version
        cl_path = cl_paths[-1]
        vs_dir = str(cl_path.parent)
        os.environ["PATH"] += f"{os.pathsep}{vs_dir}"
        # Code for finding the library dirs from
        # https://stackoverflow.com/questions/47423246/get-pythons-lib-path
        d = distutils.core.Distribution()
        b = distutils.command.build_ext.build_ext(d)
        b.finalize_options()
        os.environ["LIB"] = os.pathsep.join(b.library_dirs)
        print(f"Added {vs_dir} to PATH")
import sys
import time
import warnings # to disable warnings on export to ONNX
import zipfile
from pathlib import Path
import logging
import torch
import nncf # Important - should be imported directly after torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
from nncf.common.utils.logger import set_log_level
set_log_level(logging.ERROR) # Disables all NNCF info and warning messages
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args
from openvino.runtime import Core
from torch.jit import TracerWarning
sys.path.append("../utils")
from notebook_utils import download_file
torch.manual_seed(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using {device} device")
MODEL_DIR = Path("model")
OUTPUT_DIR = Path("output")
DATA_DIR = Path("data")
BASE_MODEL_NAME = "resnet18"
image_size = 64
OUTPUT_DIR.mkdir(exist_ok=True)
MODEL_DIR.mkdir(exist_ok=True)
DATA_DIR.mkdir(exist_ok=True)
# Paths where PyTorch, ONNX and OpenVINO IR models will be stored
fp32_pth_path = Path(MODEL_DIR / (BASE_MODEL_NAME + "_fp32")).with_suffix(".pth")
fp32_onnx_path = Path(OUTPUT_DIR / (BASE_MODEL_NAME + "_fp32")).with_suffix(".onnx")
fp32_ir_path = fp32_onnx_path.with_suffix(".xml")
int8_onnx_path = Path(OUTPUT_DIR / (BASE_MODEL_NAME + "_int8")).with_suffix(".onnx")
int8_ir_path = int8_onnx_path.with_suffix(".xml")
# It's possible to train FP32 model from scratch, but it might be slow. So the pre-trained weights are downloaded by default.
pretrained_on_tiny_imagenet = True
fp32_pth_url = "https://storage.openvinotoolkit.org/repositories/nncf/openvino_notebook_ckpts/302_resnet18_fp32_v1.pth"
download_file(fp32_pth_url, directory=MODEL_DIR, filename=fp32_pth_path.name)
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/nncf/torch/__init__.py:23: UserWarning: NNCF provides best results with torch==1.9.1, while current torch version is 1.7.1+cpu - consider switching to torch==1.9.1
warnings.warn("NNCF provides best results with torch=={bkc}, "
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/nncf/torch/dynamic_graph/patch_pytorch.py:163: UserWarning: Not patching unique_dim since it is missing in this version of PyTorch
warnings.warn("Not patching {} since it is missing in this version of PyTorch".format(op_name))
Using cpu device
model/resnet18_fp32.pth: 0%| | 0.00/43.1M [00:00<?, ?B/s]
PosixPath('/home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/model/resnet18_fp32.pth')
Download Tiny ImageNet dataset:
* 100k images of shape 3x64x64
* 200 different classes: snake, spider, cat, truck, grasshopper, gull, etc.
def download_tiny_imagenet_200(
    data_dir: Path,
    url="http://cs231n.stanford.edu/tiny-imagenet-200.zip",
    tarname="tiny-imagenet-200.zip",
):
    archive_path = data_dir / tarname
    download_file(url, directory=data_dir, filename=tarname)
    zip_ref = zipfile.ZipFile(archive_path, "r")
    zip_ref.extractall(path=data_dir)
    zip_ref.close()


def prepare_tiny_imagenet_200(dataset_dir: Path):
    # format validation set the same way as train set is formatted
    val_data_dir = dataset_dir / 'val'
    val_annotations_file = val_data_dir / 'val_annotations.txt'
    with open(val_annotations_file, 'r') as f:
        val_annotation_data = map(lambda line: line.split('\t')[:2], f.readlines())
    val_images_dir = val_data_dir / 'images'
    for image_filename, image_label in val_annotation_data:
        from_image_filepath = val_images_dir / image_filename
        to_image_dir = val_data_dir / image_label
        if not to_image_dir.exists():
            to_image_dir.mkdir()
        to_image_filepath = to_image_dir / image_filename
        from_image_filepath.rename(to_image_filepath)
    val_annotations_file.unlink()
    val_images_dir.rmdir()


DATASET_DIR = DATA_DIR / "tiny-imagenet-200"
if not DATASET_DIR.exists():
    download_tiny_imagenet_200(DATA_DIR)
    prepare_tiny_imagenet_200(DATASET_DIR)

print(f"Successfully downloaded and prepared dataset at: {DATASET_DIR}")
'data/tiny-imagenet-200.zip' already exists.
Successfully downloaded and prepared dataset at: data/tiny-imagenet-200
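As an optional sanity check (not part of the original notebook flow), you can confirm that the prepared dataset contains the expected 200 class folders:
# Optional sanity check: both splits should contain 200 class folders after preparation.
n_train_classes = len([d for d in (DATASET_DIR / "train").iterdir() if d.is_dir()])
n_val_classes = len([d for d in (DATASET_DIR / "val").iterdir() if d.is_dir()])
print(f"train classes: {n_train_classes}, val classes: {n_val_classes}")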
Pre-train Floating-Point Model¶
Using NNCF for model compression assumes that the user has a pre-trained model and a training pipeline.
Here we demonstrate one possible training pipeline: a ResNet-18 model pre-trained on 1000 classes from ImageNet is fine-tuned with 200 classes from Tiny ImageNet.
Subsequently, the training and validation functions will be reused as is for quantization-aware training.
Train Function¶
def train(train_loader, model, criterion, optimizer, epoch):
    batch_time = AverageMeter("Time", ":3.3f")
    losses = AverageMeter("Loss", ":2.3f")
    top1 = AverageMeter("Acc@1", ":2.2f")
    top5 = AverageMeter("Acc@5", ":2.2f")
    progress = ProgressMeter(
        len(train_loader), [batch_time, losses, top1, top5], prefix="Epoch:[{}]".format(epoch)
    )

    # switch to train mode
    model.train()

    end = time.time()
    for i, (images, target) in enumerate(train_loader):
        images = images.to(device)
        target = target.to(device)

        # compute output
        output = model(images)
        loss = criterion(output, target)

        # measure accuracy and record loss
        acc1, acc5 = accuracy(output, target, topk=(1, 5))
        losses.update(loss.item(), images.size(0))
        top1.update(acc1[0], images.size(0))
        top5.update(acc5[0], images.size(0))

        # compute gradient and do opt step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # measure elapsed time
        batch_time.update(time.time() - end)
        end = time.time()

        print_frequency = 50
        if i % print_frequency == 0:
            progress.display(i)
Validate Function¶
def validate(val_loader, model, criterion):
    batch_time = AverageMeter("Time", ":3.3f")
    losses = AverageMeter("Loss", ":2.3f")
    top1 = AverageMeter("Acc@1", ":2.2f")
    top5 = AverageMeter("Acc@5", ":2.2f")
    progress = ProgressMeter(len(val_loader), [batch_time, losses, top1, top5], prefix="Test: ")

    # switch to evaluate mode
    model.eval()

    with torch.no_grad():
        end = time.time()
        for i, (images, target) in enumerate(val_loader):
            images = images.to(device)
            target = target.to(device)

            # compute output
            output = model(images)
            loss = criterion(output, target)

            # measure accuracy and record loss
            acc1, acc5 = accuracy(output, target, topk=(1, 5))
            losses.update(loss.item(), images.size(0))
            top1.update(acc1[0], images.size(0))
            top5.update(acc5[0], images.size(0))

            # measure elapsed time
            batch_time.update(time.time() - end)
            end = time.time()

            print_frequency = 10
            if i % print_frequency == 0:
                progress.display(i)

    print(" * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}".format(top1=top1, top5=top5))
    return top1.avg
Helpers¶
class AverageMeter(object):
    """Computes and stores the average and current value"""

    def __init__(self, name, fmt=":f"):
        self.name = name
        self.fmt = fmt
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

    def __str__(self):
        fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})"
        return fmtstr.format(**self.__dict__)


class ProgressMeter(object):
    def __init__(self, num_batches, meters, prefix=""):
        self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
        self.meters = meters
        self.prefix = prefix

    def display(self, batch):
        entries = [self.prefix + self.batch_fmtstr.format(batch)]
        entries += [str(meter) for meter in self.meters]
        print("\t".join(entries))

    def _get_batch_fmtstr(self, num_batches):
        num_digits = len(str(num_batches // 1))
        fmt = "{:" + str(num_digits) + "d}"
        return "[" + fmt + "/" + fmt.format(num_batches) + "]"


def accuracy(output, target, topk=(1,)):
    """Computes the accuracy over the k top predictions for the specified values of k"""
    with torch.no_grad():
        maxk = max(topk)
        batch_size = target.size(0)

        _, pred = output.topk(maxk, 1, True, True)
        pred = pred.t()
        correct = pred.eq(target.view(1, -1).expand_as(pred))

        res = []
        for k in topk:
            correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
            res.append(correct_k.mul_(100.0 / batch_size))
        return res
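To illustrate how these helpers behave, here is a small self-contained example on random data (it is not part of the training pipeline):
# Small illustration of the helpers on random data (not used in the pipeline).
dummy_output = torch.randn(8, 200)          # batch of 8 logits over 200 classes
dummy_target = torch.randint(0, 200, (8,))  # random ground-truth labels
top1_acc, top5_acc = accuracy(dummy_output, dummy_target, topk=(1, 5))
meter = AverageMeter("Acc@1", ":2.2f")
meter.update(top1_acc[0], n=dummy_output.size(0))
print(meter)  # top-1 accuracy on random data is near chance level (0.5%)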
Get a Pre-trained FP32 Model¶
A pre-trained floating-point model is a prerequisite for quantization. It can be obtained by fine-tuning from scratch with the code below, but this usually takes a long time. Therefore, we have already run this code and obtained good enough weights after 4 epochs (for the sake of simplicity, we did not tune until the best accuracy). By default, this notebook just loads these weights without launching training. To train the model yourself, starting from a model pre-trained on ImageNet, set
pretrained_on_tiny_imagenet = False
in the Imports and Settings section at the top of this notebook.
num_classes = 200 # 200 is for Tiny ImageNet, default is 1000 for ImageNet
init_lr = 1e-4
batch_size = 128
epochs = 4
model = models.resnet18(pretrained=not pretrained_on_tiny_imagenet)
# update the last FC layer for Tiny ImageNet number of classes
model.fc = nn.Linear(in_features=512, out_features=num_classes, bias=True)
model.to(device)
# Data loading code
train_dir = DATASET_DIR / "train"
val_dir = DATASET_DIR / "val"
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_dataset = datasets.ImageFolder(
    train_dir,
    transforms.Compose(
        [
            transforms.Resize(image_size),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            normalize,
        ]
    ),
)
val_dataset = datasets.ImageFolder(
    val_dir,
    transforms.Compose(
        [
            transforms.Resize(image_size),
            transforms.ToTensor(),
            normalize,
        ]
    ),
)
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True, sampler=None
)
val_loader = torch.utils.data.DataLoader(
    val_dataset, batch_size=batch_size, shuffle=False, num_workers=4, pin_memory=True
)
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=init_lr)
if pretrained_on_tiny_imagenet:
    #
    # ** WARNING: torch.load functionality uses Python's pickling module that
    # may be used to perform arbitrary code execution during unpickling. Only load data that you
    # trust.
    #
    checkpoint = torch.load(str(fp32_pth_path), map_location="cpu")
    model.load_state_dict(checkpoint["state_dict"], strict=True)
    acc1_fp32 = checkpoint["acc1"]
else:
    best_acc1 = 0
    # Training loop
    for epoch in range(0, epochs):
        # run a single training epoch
        train(train_loader, model, criterion, optimizer, epoch)

        # evaluate on validation set
        acc1 = validate(val_loader, model, criterion)

        is_best = acc1 > best_acc1
        best_acc1 = max(acc1, best_acc1)

        if is_best:
            checkpoint = {"state_dict": model.state_dict(), "acc1": acc1}
            torch.save(checkpoint, fp32_pth_path)
    acc1_fp32 = best_acc1

print(f"Accuracy of FP32 model: {acc1_fp32:.3f}")
Accuracy of FP32 model: 55.520
Export the FP32 model to ONNX, which is supported by OpenVINO™ Toolkit, to benchmark it in comparison with the INT8 model.
dummy_input = torch.randn(1, 3, image_size, image_size).to(device)
torch.onnx.export(model, dummy_input, fp32_onnx_path)
print(f"FP32 ONNX model was exported to {fp32_onnx_path}.")
FP32 ONNX model was exported to output/resnet18_fp32.onnx.
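As an optional check (assuming the onnx package is installed in the environment; it is not otherwise used in this notebook), the exported file can be validated before conversion:
# Optional: verify that the exported ONNX file is well-formed.
import onnx

onnx_model = onnx.load(str(fp32_onnx_path))
onnx.checker.check_model(onnx_model)
print(f"{fp32_onnx_path} passed the ONNX checker.")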
Create and Initialize Quantization¶
NNCF enables compression-aware training by integrating into regular training pipelines. The framework is designed so that modifications to your original training code are minor. Quantization is the simplest scenario and requires only 3 modifications.
Configure NNCF parameters to specify compression
nncf_config_dict = {
    "input_info": {"sample_size": [1, 3, image_size, image_size]},
    "log_dir": str(OUTPUT_DIR),  # log directory for NNCF-specific logging outputs
    "compression": {
        "algorithm": "quantization",  # specify the algorithm here
    },
}
nncf_config = NNCFConfig.from_dict(nncf_config_dict)
Provide a data loader to initialize the values of the quantization ranges and to determine which activations should be signed or unsigned, based on statistics collected from a given number of samples.
nncf_config = register_default_init_args(nncf_config, train_loader)
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/torchvision/transforms/functional_pil.py:54: DeprecationWarning: FLIP_LEFT_RIGHT is deprecated and will be removed in Pillow 10 (2023-07-01). Use Transpose.FLIP_LEFT_RIGHT instead.
return img.transpose(Image.FLIP_LEFT_RIGHT)
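If you need to change how many samples are used for this initialization, the number can be set in the NNCF configuration. The snippet below is an optional sketch following the NNCF quantization configuration schema (the initializer/range/num_init_samples keys are not used elsewhere in this notebook):
# Optional sketch (assumption: NNCF quantization config schema): collect range
# statistics from a fixed number of samples during initialization.
nncf_config_dict_custom_init = {
    "input_info": {"sample_size": [1, 3, image_size, image_size]},
    "log_dir": str(OUTPUT_DIR),
    "compression": {
        "algorithm": "quantization",
        "initializer": {
            "range": {"num_init_samples": 256},  # number of samples for range statistics
        },
    },
}
# nncf_config = register_default_init_args(NNCFConfig.from_dict(nncf_config_dict_custom_init), train_loader)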
Create a wrapped model ready for compression fine-tuning from a pre-trained FP32 model and a configuration object.
compression_ctrl, model = create_compressed_model(model, nncf_config)
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/torchvision/transforms/functional_pil.py:54: DeprecationWarning: FLIP_LEFT_RIGHT is deprecated and will be removed in Pillow 10 (2023-07-01). Use Transpose.FLIP_LEFT_RIGHT instead.
return img.transpose(Image.FLIP_LEFT_RIGHT)
Evaluate the new model on the validation set after initialization of quantization. The accuracy should be close to the accuracy of the floating-point FP32 model for a simple case like the one we are demonstrating now.
acc1 = validate(val_loader, model, criterion)
print(f"Accuracy of initialized INT8 model: {acc1:.3f}")
Test: [ 0/79] Time 0.803 (0.803) Loss 1.003 (1.003) Acc@1 78.12 (78.12) Acc@5 89.84 (89.84)
Test: [10/79] Time 0.394 (0.431) Loss 1.900 (1.589) Acc@1 46.09 (61.08) Acc@5 82.03 (84.52)
Test: [20/79] Time 0.404 (0.418) Loss 1.712 (1.672) Acc@1 63.28 (59.00) Acc@5 79.69 (83.18)
Test: [30/79] Time 0.401 (0.412) Loss 2.285 (1.773) Acc@1 52.34 (57.43) Acc@5 68.75 (81.50)
Test: [40/79] Time 0.394 (0.410) Loss 1.543 (1.820) Acc@1 64.06 (55.98) Acc@5 85.94 (80.79)
Test: [50/79] Time 0.393 (0.407) Loss 2.002 (1.820) Acc@1 54.69 (55.91) Acc@5 75.78 (80.58)
Test: [60/79] Time 0.391 (0.406) Loss 1.703 (1.847) Acc@1 57.81 (55.37) Acc@5 84.38 (80.14)
Test: [70/79] Time 0.347 (0.403) Loss 2.400 (1.874) Acc@1 46.09 (55.00) Acc@5 72.66 (79.50)
* Acc@1 55.410 Acc@5 80.140
Accuracy of initialized INT8 model: 55.410
Fine-tune the Compressed Model¶
At this step, a regular fine-tuning process is applied to further improve the accuracy of the quantized model. Normally, several epochs of tuning with a small learning rate are required, the same learning rate that is usually used at the end of training the original model. No other changes to the training pipeline are required. Here is a simple example.
compression_lr = init_lr / 10
optimizer = torch.optim.Adam(model.parameters(), lr=compression_lr)
# train for one epoch with NNCF
train(train_loader, model, criterion, optimizer, epoch=0)
# evaluate on validation set after Quantization-Aware Training (QAT case)
acc1_int8 = validate(val_loader, model, criterion)
print(f"Accuracy of tuned INT8 model: {acc1_int8:.3f}")
print(f"Accuracy drop of tuned INT8 model over pre-trained FP32 model: {acc1_fp32 - acc1_int8:.3f}")
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/torchvision/transforms/functional_pil.py:54: DeprecationWarning: FLIP_LEFT_RIGHT is deprecated and will be removed in Pillow 10 (2023-07-01). Use Transpose.FLIP_LEFT_RIGHT instead.
return img.transpose(Image.FLIP_LEFT_RIGHT)
Epoch:[0][ 0/782] Time 1.926 (1.926) Loss 0.853 (0.853) Acc@1 79.69 (79.69) Acc@5 92.19 (92.19)
Epoch:[0][ 50/782] Time 1.277 (1.340) Loss 0.750 (0.821) Acc@1 85.16 (79.52) Acc@5 95.31 (94.16)
Epoch:[0][100/782] Time 1.355 (1.339) Loss 0.833 (0.810) Acc@1 78.12 (79.83) Acc@5 96.09 (94.21)
Epoch:[0][150/782] Time 1.353 (1.344) Loss 0.729 (0.800) Acc@1 82.81 (80.30) Acc@5 96.09 (94.23)
Epoch:[0][200/782] Time 1.363 (1.347) Loss 0.836 (0.792) Acc@1 75.78 (80.53) Acc@5 93.75 (94.23)
Epoch:[0][250/782] Time 1.342 (1.349) Loss 0.668 (0.785) Acc@1 87.50 (80.70) Acc@5 95.31 (94.31)
Epoch:[0][300/782] Time 1.271 (1.344) Loss 0.804 (0.777) Acc@1 81.25 (80.91) Acc@5 94.53 (94.44)
Epoch:[0][350/782] Time 1.421 (1.344) Loss 0.708 (0.772) Acc@1 83.59 (81.03) Acc@5 94.53 (94.50)
Epoch:[0][400/782] Time 1.354 (1.345) Loss 0.752 (0.766) Acc@1 78.12 (81.19) Acc@5 95.31 (94.55)
Epoch:[0][450/782] Time 1.335 (1.346) Loss 0.773 (0.762) Acc@1 81.25 (81.30) Acc@5 93.75 (94.63)
Epoch:[0][500/782] Time 1.354 (1.346) Loss 0.912 (0.758) Acc@1 77.34 (81.38) Acc@5 92.19 (94.67)
Epoch:[0][550/782] Time 1.351 (1.347) Loss 0.779 (0.756) Acc@1 84.38 (81.42) Acc@5 93.75 (94.65)
Epoch:[0][600/782] Time 1.344 (1.347) Loss 1.059 (0.754) Acc@1 75.78 (81.54) Acc@5 90.62 (94.66)
Epoch:[0][650/782] Time 1.336 (1.347) Loss 0.730 (0.750) Acc@1 81.25 (81.66) Acc@5 93.75 (94.69)
Epoch:[0][700/782] Time 1.276 (1.347) Loss 0.728 (0.747) Acc@1 80.47 (81.77) Acc@5 94.53 (94.70)
Epoch:[0][750/782] Time 1.362 (1.344) Loss 0.628 (0.745) Acc@1 86.72 (81.81) Acc@5 96.09 (94.72)
Test: [ 0/79] Time 0.886 (0.886) Loss 1.092 (1.092) Acc@1 74.22 (74.22) Acc@5 85.16 (85.16)
Test: [10/79] Time 0.408 (0.421) Loss 1.916 (1.501) Acc@1 47.66 (63.07) Acc@5 80.47 (84.73)
Test: [20/79] Time 0.369 (0.396) Loss 1.571 (1.575) Acc@1 64.84 (60.97) Acc@5 81.25 (84.30)
Test: [30/79] Time 0.375 (0.389) Loss 2.041 (1.685) Acc@1 57.81 (59.38) Acc@5 71.88 (82.38)
Test: [40/79] Time 0.366 (0.384) Loss 1.540 (1.739) Acc@1 63.28 (58.12) Acc@5 85.94 (81.67)
Test: [50/79] Time 0.370 (0.381) Loss 2.007 (1.746) Acc@1 52.34 (57.98) Acc@5 74.22 (81.34)
Test: [60/79] Time 0.375 (0.379) Loss 1.575 (1.779) Acc@1 67.19 (57.39) Acc@5 83.59 (80.81)
Test: [70/79] Time 0.329 (0.378) Loss 2.373 (1.804) Acc@1 46.88 (56.95) Acc@5 75.00 (80.30)
* Acc@1 57.400 Acc@5 80.950
Accuracy of tuned INT8 model: 57.400
Accuracy drop of tuned INT8 model over pre-trained FP32 model: -1.880
Export INT8 Model to ONNX¶
if not int8_onnx_path.exists():
    warnings.filterwarnings("ignore", category=TracerWarning)
    warnings.filterwarnings("ignore", category=UserWarning)
    # Export INT8 model to ONNX that is supported by the OpenVINO™ toolkit
    compression_ctrl.export_model(int8_onnx_path)
    print(f"INT8 ONNX model exported to {int8_onnx_path}.")
INT8 ONNX model exported to output/resnet18_int8.onnx.
Convert ONNX models to OpenVINO Intermediate Representation (IR)¶
Call the OpenVINO Model Optimizer tool to convert the ONNX models to OpenVINO IR with FP16 precision. The models are saved to the output directory. With the --mean_values and --scale_values arguments, mean subtraction and scaling by the standard deviation are embedded into the model, so it is not necessary to normalize input data before propagating it through the network.
See the Model Optimizer Developer Guide for more information about Model Optimizer.
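For reference, these values are the normalization constants from the training transforms rescaled from the [0, 1] range to [0, 255] pixel values; the small computation below only illustrates that relationship and is not required for the conversion:
# The Model Optimizer mean/scale values are the torchvision normalization
# constants rescaled from [0, 1] to [0, 255] pixel values.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
print("mean_values: ", [round(m * 255, 3) for m in mean])  # [123.675, 116.28, 103.53]
print("scale_values:", [round(s * 255, 3) for s in std])   # [58.395, 57.12, 57.375]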
Executing this command may take a while. There may be some errors or warnings in the output. Model Optimizer successfully converted the model to IR if the last lines of the output include:
[ SUCCESS ] Generated IR version 11 model
if not fp32_ir_path.exists():
    !mo --input_model $fp32_onnx_path --input_shape "[1,3, $image_size, $image_size]" --mean_values "[123.675, 116.28 , 103.53]" --scale_values "[58.395, 57.12 , 57.375]" --data_type FP16 --output_dir $OUTPUT_DIR
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/output/resnet18_fp32.onnx
- Path for generated IR: /home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/output
- IR output name: resnet18_fp32
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,3, 64, 64]
- Source layout: Not specified
- Target layout: Not specified
- Layout: Not specified
- Mean values: [123.675, 116.28 , 103.53]
- Scale values: [58.395, 57.12 , 57.375]
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- User transformations: Not specified
- Reverse input channels: False
- Enable IR generation for fixed input shape: False
- Use the transformations config file: None
Advanced parameters:
- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
OpenVINO runtime found in: /opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/openvino
OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/output/resnet18_fp32.xml
[ SUCCESS ] BIN file: /home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/output/resnet18_fp32.bin
[ SUCCESS ] Total execution time: 0.78 seconds.
[ SUCCESS ] Memory consumed: 161 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2022_bu_IOTG_OpenVINO-2022-1&content=upg_all&medium=organic or on the GitHub*
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai
if not int8_ir_path.exists():
    !mo --input_model $int8_onnx_path --input_shape "[1,3, $image_size, $image_size]" --mean_values "[123.675, 116.28 , 103.53]" --scale_values "[58.395, 57.12 , 57.375]" --data_type FP16 --output_dir $OUTPUT_DIR
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/output/resnet18_int8.onnx
- Path for generated IR: /home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/output
- IR output name: resnet18_int8
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,3, 64, 64]
- Source layout: Not specified
- Target layout: Not specified
- Layout: Not specified
- Mean values: [123.675, 116.28 , 103.53]
- Scale values: [58.395, 57.12 , 57.375]
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- User transformations: Not specified
- Reverse input channels: False
- Enable IR generation for fixed input shape: False
- Use the transformations config file: None
Advanced parameters:
- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
OpenVINO runtime found in: /opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/openvino
OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/output/resnet18_int8.xml
[ SUCCESS ] BIN file: /home/runner/work/openvino_notebooks/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/output/resnet18_int8.bin
[ SUCCESS ] Total execution time: 1.29 seconds.
[ SUCCESS ] Memory consumed: 167 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2022_bu_IOTG_OpenVINO-2022-1&content=upg_all&medium=organic or on the GitHub*
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai
Benchmark Model Performance by Computing Inference Time¶
Finally, we will measure the inference performance of the FP32 and INT8 models. To do this, we use Benchmark Tool - OpenVINO’s inference performance measurement tool. By default, Benchmark Tool runs inference for 60 seconds in asynchronous mode on CPU. It returns inference speed as latency (milliseconds per image) and throughput (frames per second) values.
NOTE: In this notebook, we run benchmark_app for 15 seconds to give a quick indication of performance. For more accurate performance, it is recommended to run benchmark_app in a terminal/command prompt after closing other applications. Run
benchmark_app -m model.xml -d CPU
to benchmark async inference on CPU for one minute. Change CPU to GPU to benchmark on GPU. Run
benchmark_app --help
to see an overview of all command-line options.
def parse_benchmark_output(benchmark_output):
    parsed_output = [line for line in benchmark_output if not (line.startswith(r"[") or line.startswith(" ") or line == "")]
    print(*parsed_output, sep='\n')
print('Benchmark FP32 model (IR)')
benchmark_output = ! benchmark_app -m $fp32_ir_path -d CPU -api async -t 15
parse_benchmark_output(benchmark_output)
print('Benchmark INT8 model (IR)')
benchmark_output = ! benchmark_app -m $int8_ir_path -d CPU -api async -t 15
parse_benchmark_output(benchmark_output)
Benchmark FP32 model (IR)
Count: 5107 iterations
Duration: 15002.97 ms
Latency:
Throughput: 340.40 FPS
Benchmark INT8 model (IR)
Count: 16422 iterations
Duration: 15001.35 ms
Latency:
Throughput: 1094.70 FPS
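To put these numbers side by side, the short calculation below (an optional addition, using the throughput values reported above) computes the INT8 speedup over FP32:
# Optional: compute the INT8 over FP32 speedup from the reported throughput values.
fp32_fps = 340.40
int8_fps = 1094.70
print(f"INT8 speedup over FP32: {int8_fps / fp32_fps:.2f}x")  # roughly 3.2x on this CPU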
Show CPU Information for reference
ie = Core()
ie.get_property(device_name="CPU", name="FULL_DEVICE_NAME")
'Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz'