Interactive question answering with OpenVINO

This demo shows interactive question answering with OpenVINO. We use a small BERT-large-like model that was distilled from a larger BERT-large model and quantized to INT8 on the SQuAD v1.1 training set. The model comes from Open Model Zoo. At the bottom of this notebook, you will see live inference results from your inputs.

Imports

import operator
import time
from urllib import parse

import numpy as np
from openvino.runtime import Core

import html_reader as reader
import tokens_bert as tokens

The model

Download the model

We use omz_downloader, which is a command-line tool from the openvino-dev package. omz_downloader automatically creates a directory structure and downloads the selected model. If the model is already downloaded, this step is skipped.

You can download and use any of the following models:

- bert-large-uncased-whole-word-masking-squad-0001
- bert-large-uncased-whole-word-masking-squad-int8-0001
- bert-small-uncased-whole-word-masking-squad-0001
- bert-small-uncased-whole-word-masking-squad-0002
- bert-small-uncased-whole-word-masking-squad-int8-0002

Just change the model name in the code below. All of these models are already converted to OpenVINO Intermediate Representation (IR), so there is no need to use omz_converter.

# directory where model will be downloaded
base_model_dir = "model"

# desired precision
precision = "FP16-INT8"

# model name as named in Open Model Zoo
model_name = "bert-small-uncased-whole-word-masking-squad-int8-0002"

model_path = f"model/intel/{model_name}/{precision}/{model_name}.xml"
model_weights_path = f"model/intel/{model_name}/{precision}/{model_name}.bin"

download_command = f"omz_downloader " \
                   f"--name {model_name} " \
                   f"--precision {precision} " \
                   f"--output_dir {base_model_dir} " \
                   f"--cache_dir {base_model_dir}"
! $download_command
################|| Downloading bert-small-uncased-whole-word-masking-squad-int8-0002 ||################

========== Downloading model/intel/bert-small-uncased-whole-word-masking-squad-int8-0002/vocab.txt

========== Downloading model/intel/bert-small-uncased-whole-word-masking-squad-int8-0002/FP16-INT8/bert-small-uncased-whole-word-masking-squad-int8-0002.xml

========== Downloading model/intel/bert-small-uncased-whole-word-masking-squad-int8-0002/FP16-INT8/bert-small-uncased-whole-word-masking-squad-int8-0002.bin

Load the model

Downloaded models are located in a fixed directory structure that indicates vendor, model name, and precision. Only a few lines of code are required to run the model. First, we create an OpenVINO Runtime Core object. Then, we read the network architecture and model weights from the .xml and .bin files. Finally, we compile the model for the desired device. For this model, you can choose either CPU or GPU.

# initialize the OpenVINO Runtime Core
core = Core()
# read the network and corresponding weights from file
model = core.read_model(model=model_path, weights=model_weights_path)
# load the model on the CPU (you can use GPU as well)
compiled_model = core.compile_model(model=model, device_name="CPU")

# get input and output names of nodes
input_keys = list(compiled_model.inputs)
output_keys = list(compiled_model.outputs)

# get network input size
input_size = compiled_model.input(0).shape[1]

input_keys holds the input nodes of the network and output_keys holds its output nodes. For this BERT-large-like model, there are four inputs and two outputs.

[i.any_name for i in input_keys], [o.any_name for o in output_keys]
(['input_ids', 'attention_mask', 'token_type_ids', 'position_ids'],
 ['output_s', 'output_e'])

Processing

NLP models usually take a list of tokens as standard input. A token is a word or subword converted to an integer index. To provide the proper input, we need the vocabulary for this mapping. We also define some special tokens, such as separator and padding tokens, and a function to load content from provided URLs.

# path to vocabulary file
vocab_file_path = "data/vocab.txt"

# create dictionary with words and their indices
vocab = tokens.load_vocab_file(vocab_file_path)

# define special tokens
cls_token = vocab["[CLS]"]
pad_token = vocab["[PAD]"]
sep_token = vocab["[SEP]"]


# function to load text from given urls
def load_context(sources):
    input_urls = []
    paragraphs = []
    for source in sources:
        result = parse.urlparse(source)
        if all([result.scheme, result.netloc]):
            input_urls.append(source)
        else:
            paragraphs.append(source)

    paragraphs.extend(reader.get_paragraphs(input_urls))
    # produce one big context string
    return "\n".join(paragraphs)

Preprocessing

The input size in this case is 384 tokens long. The main input (input_ids) to the BERT model consists of two parts: question tokens and context tokens, separated by special tokens. If question + context is shorter than 384 tokens, padding tokens are added. If question + context is longer than 384 tokens, the context must be split into parts, and the question with each part of the context must be fed to the network separately. We use overlapping, so neighboring parts of the context overlap by half of the context part size (if the context part equals 300 tokens, neighboring context parts overlap by 150 tokens). We also need to provide the following inputs, as sketched below:

- attention_mask - a sequence of integer values representing the mask of valid values in the input
- token_type_ids - a sequence of integer values representing the segmentation of input_ids into question and context
- position_ids - a sequence of integer values from 0 to 383 representing the position index for each input token

To learn more about the inputs, refer to the model documentation in Open Model Zoo.
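
For illustration, here is how the pieces line up for a toy two-token question and three-token context (a sketch only; the real input is always padded to 384 tokens):

# input_ids      = [CLS]  q1  q2  [SEP]  c1  c2  c3  [SEP]  [PAD]  [PAD]
# attention_mask =   1     1   1    1     1   1   1    1      0      0
# token_type_ids =   0     0   0    0     1   1   1    1      0      0
# position_ids   =   0     1   2    3     4   5   6    7      8      9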

# generator of a sequence of inputs
def prepare_input(question_tokens, context_tokens):
    # length of question in tokens
    question_len = len(question_tokens)
    # context part size
    context_len = input_size - question_len - 3

    if context_len < 16:
        raise RuntimeError("Question is too long in comparison to input size. No space for context")

    # iterate over context windows that overlap by half of the window size
    for start in range(0, max(1, len(context_tokens) - context_len), context_len // 2):
        # part of context
        part_context_tokens = context_tokens[start:start + context_len]
        # input: question and context separated by special tokens
        input_ids = [cls_token] + question_tokens + [sep_token] + part_context_tokens + [sep_token]
        # 1 for any index if there is no padding token, 0 otherwise
        attention_mask = [1] * len(input_ids)
        # 0 for question tokens, 1 for context part
        token_type_ids = [0] * (question_len + 2) + [1] * (len(part_context_tokens) + 1)

        # add padding at the end
        (input_ids, attention_mask, token_type_ids), pad_number = pad(input_ids=input_ids,
                                                                      attention_mask=attention_mask,
                                                                      token_type_ids=token_type_ids)

        # create input to feed the model
        input_dict = {
            "input_ids": np.array([input_ids], dtype=np.int32),
            "attention_mask": np.array([attention_mask], dtype=np.int32),
            "token_type_ids": np.array([token_type_ids], dtype=np.int32),
        }

        # some models require additional position_ids
        if "position_ids" in [i_key.any_name for i_key in input_keys]:
            position_ids = np.arange(len(input_ids))
            input_dict["position_ids"] = np.array([position_ids], dtype=np.int32)

        yield input_dict, pad_number, start


# function to add padding
def pad(input_ids, attention_mask, token_type_ids):
    # how many padding tokens
    diff_input_size = input_size - len(input_ids)

    if diff_input_size > 0:
        # add padding to all inputs
        input_ids = input_ids + [pad_token] * diff_input_size
        attention_mask = attention_mask + [0] * diff_input_size
        token_type_ids = token_type_ids + [0] * diff_input_size

    return (input_ids, attention_mask, token_type_ids), diff_input_size
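
To make the windowing concrete, here is the overlap arithmetic with toy numbers (a sketch assuming the 384-token input size from above):

# toy numbers to visualize the overlapping context windows
question_len = 10
context_len = 384 - question_len - 3    # 371 context tokens fit into each window
total_context = 1000                    # pretend the full context has 1000 tokens
starts = list(range(0, max(1, total_context - context_len), context_len // 2))
print(starts)  # [0, 185, 370, 555] - each window starts halfway into the previous one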

Postprocessing

The network outputs raw scores (logits). We apply the softmax function to turn them into a probability distribution. Then, we look for the best answer in the current part of the context (the one with the highest score) and return the score and the context range for the answer.
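
For example, softmax simply exponentiates the logits and normalizes them so that they sum to 1 (toy values; this is what get_score below does):

logits = np.array([2.0, 1.0, 0.1])
probs = np.exp(logits) / np.exp(logits).sum()
print(probs, probs.sum())  # a probability distribution summing to 1.0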

# based on https://github.com/openvinotoolkit/open_model_zoo/blob/bf03f505a650bafe8da03d2747a8b55c5cb2ef16/demos/common/python/openvino/model_zoo/model_api/models/bert.py#L163
def postprocess(output_start, output_end, question_tokens, context_tokens_start_end, padding, start_idx):

    def get_score(logits):
        out = np.exp(logits)
        return out / out.sum(axis=-1)

    # get start-end scores for context
    score_start = get_score(output_start)
    score_end = get_score(output_end)

    # index of first context token in tensor
    context_start_idx = len(question_tokens) + 2
    # index of last+1 context token in tensor
    context_end_idx = input_size - padding - 1

    # find product of all start-end combinations to find the best one
    max_score, max_start, max_end = find_best_answer_window(start_score=score_start,
                                                            end_score=score_end,
                                                            context_start_idx=context_start_idx,
                                                            context_end_idx=context_end_idx)

    # convert to context text start-end index
    max_start = context_tokens_start_end[max_start + start_idx][0]
    max_end = context_tokens_start_end[max_end + start_idx][1]

    return max_score, max_start, max_end


# based on https://github.com/openvinotoolkit/open_model_zoo/blob/bf03f505a650bafe8da03d2747a8b55c5cb2ef16/demos/common/python/openvino/model_zoo/model_api/models/bert.py#L188
def find_best_answer_window(start_score, end_score, context_start_idx, context_end_idx):
    context_len = context_end_idx - context_start_idx
    score_mat = np.matmul(
        start_score[context_start_idx:context_end_idx].reshape((context_len, 1)),
        end_score[context_start_idx:context_end_idx].reshape((1, context_len)),
    )
    # reset candidates with end before start
    score_mat = np.triu(score_mat)
    # reset candidates longer than 16 tokens
    score_mat = np.tril(score_mat, 16)
    # find the best start-end pair
    max_s, max_e = divmod(score_mat.flatten().argmax(), score_mat.shape[1])
    max_score = score_mat[max_s, max_e]

    return max_score, max_s, max_e
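
As a quick sanity check, we can call find_best_answer_window on toy, already-normalized scores (all values are made up for illustration). With 8 tokens where the context occupies tensor indices 2-6, the best span should pair the start peak with the end peak:

start_score = np.array([0.0, 0.0, 0.1, 0.6, 0.1, 0.1, 0.1, 0.0])
end_score = np.array([0.0, 0.0, 0.1, 0.1, 0.6, 0.1, 0.1, 0.0])
score, s, e = find_best_answer_window(start_score, end_score,
                                      context_start_idx=2, context_end_idx=7)
print(score, s, e)  # 0.36 1 2 - the span covers context tokens 1..2 (tensor indices 3..4)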

First, we create lists of tokens from the context and the question. Then, we look for the best answer by trying different parts of the context. The best answer is the one with the highest score.

def get_best_answer(question, context):
    # convert context string to tokens
    context_tokens, context_tokens_start_end = tokens.text_to_tokens(text=context.lower(),
                                                                     vocab=vocab)
    # convert question string to tokens
    question_tokens, _ = tokens.text_to_tokens(text=question.lower(), vocab=vocab)

    results = []
    # iterate through different parts of context
    for network_input, padding, start_idx in prepare_input(question_tokens=question_tokens,
                                                           context_tokens=context_tokens):
        # get output layers
        output_start_key = compiled_model.output("output_s")
        output_end_key = compiled_model.output("output_e")

        # openvino inference
        result = compiled_model(network_input)
        # postprocess the result getting the score and context range for the answer
        score_start_end = postprocess(output_start=result[output_start_key][0],
                                      output_end=result[output_end_key][0],
                                      question_tokens=question_tokens,
                                      context_tokens_start_end=context_tokens_start_end,
                                      padding=padding,
                                      start_idx=start_idx)
        results.append(score_start_end)

    # find the highest score
    answer = max(results, key=operator.itemgetter(0))
    # return the matching part of the context together with its score
    return context[answer[1]:answer[2]], answer[0]
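
You can also call get_best_answer directly on a short local string for a standalone test (the exact answer and score depend on the model):

answer, score = get_best_answer(
    question="What is OpenVINO?",
    context="OpenVINO is an open-source toolkit for optimizing deep learning inference.",
)
print(answer, score)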

Main Processing Function

Run question answering on a specific knowledge base (websites) and iterate through the questions.

def run_question_answering(sources, example_question=None):
    print(f"Context: {sources}", flush=True)
    context = load_context(sources)

    if len(context) == 0:
        print("Error: Empty context or outside paragraphs")
        return

    if example_question is not None:
        start_time = time.perf_counter()
        answer, score = get_best_answer(question=example_question, context=context)
        end_time = time.perf_counter()

        print(f"Question: {example_question}")
        print(f"Answer: {answer}")
        print(f"Score: {score:.2f}")
        print(f"Time: {end_time - start_time:.2f}s")
    else:
        while True:
            question = input()
            # if no question - break
            if question == "":
                break

            # measure processing time
            start_time = time.perf_counter()
            answer, score = get_best_answer(question=question, context=context)
            end_time = time.perf_counter()

            print(f"Question: {question}")
            print(f"Answer: {answer}")
            print(f"Score: {score:.2f}")
            print(f"Time: {end_time - start_time:.2f}s")

Run

Run on local paragraphs

Change sources to your own text to answer your own questions. You can use as many sources as you want. Usually, you need to wait a few seconds for the answer, but the longer the context, the longer the waiting time. The model is very limited and sensitive to the input; the answer can depend on whether there is a question mark at the end. The model will try to answer any of your questions even if there is no good answer in the context, so in that case, you may see random results.

Sample source: a computational complexity theory paragraph (from the Wikipedia article on computational complexity theory)

Sample questions:

- What is the term for a task that generally lends itself to being solved by a computer?
- By what main attribute are computational problems classified utilizing computational complexity theory?
- What branch of theoretical computer science deals with broadly classifying computational problems by difficulty and class of relationship?

If you want to stop the processing, just enter an empty string.

Note: First, run the code below and then put your questions in the box.

sources = ["Computational complexity theory is a branch of the theory of computation in theoretical computer "
           "science that focuses on classifying computational problems according to their inherent difficulty, "
           "and relating those classes to each other. A computational problem is understood to be a task that "
           "is in principle amenable to being solved by a computer, which is equivalent to stating that the "
           "problem may be solved by mechanical application of mathematical steps, such as an algorithm."]

run_question_answering(sources, example_question="What is the term for a task that generally lends itself to being solved by a computer?")
Context: ['Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.']
Question: What is the term for a task that generally lends itself to being solved by a computer?
Answer: computational problem
Score: 0.48
Time: 0.10s

Run on websites

You can also provide URLs. Note that the context (knowledge base) is built from website paragraphs. If some information lies outside the paragraphs, the algorithm will not be able to find it.

Sample source: the OpenVINO Wikipedia article

Sample questions:

- What does OpenVINO mean?
- What is the license for OpenVINO?
- Where can you deploy OpenVINO code?

If you want to stop the processing, just enter an empty string.

Note: First, run the code below and then put your questions in the box.

sources = ["https://en.wikipedia.org/wiki/OpenVINO"]

run_question_answering(sources, example_question="What does OpenVINO mean?")
Context: ['https://en.wikipedia.org/wiki/OpenVINO']
Question: What does OpenVINO mean?
Answer: Open Visual Inference and Neural network Optimization
Score: 0.94
Time: 0.17s