This class is an interface to an asynchronous inference request. It provides methods to set input and output blobs, run inference synchronously or asynchronously, and query results, state, and profiling information.
Methods
InferRequest& operator = (const InferRequest& other)
Default copy assignment operator.
Parameters:
other - Another InferRequest object to copy from.
Returns:
A reference to the current object
InferRequest& operator = (InferRequest&& other)
Default move assignment operator.
Parameters:
other - Another InferRequest object to move from.
Returns:
A reference to the current object
void SetBlob(const std::string& name, const Blob::Ptr& data)
Sets input/output data for inference.
Note: Memory allocation does not happen.
Parameters:
name - Name of the input or output blob.
data - A reference to the input or output blob. The type of the blob must match the network input precision and size.
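For example, a pre-allocated buffer can be wrapped in a Blob and bound to the request. This is a minimal sketch; the blob name "input", the tensor shape, and the helper function are assumptions for illustration, not part of the API:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // Hypothetical helper: binds a caller-owned float buffer to an input named "input".
    void bindInput(InferRequest& request, float* inputData) {
        // Wrap the existing buffer in a Blob; SetBlob stores a reference to it,
        // so no memory allocation or copy happens here.
        Blob::Ptr input = make_shared_blob<float>(
            TensorDesc(Precision::FP32, {1, 3, 224, 224}, Layout::NCHW), inputData);
        request.SetBlob("input", input);
    }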
Blob::Ptr GetBlob(const std::string& name)
Gets input/output data for inference.
Note: Memory allocation does not happen.
Parameters:
name - Name of the Blob to get.
Returns:
A shared pointer to the Blob with the given name. If the blob is not found, an exception is thrown.
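A sketch of reading inference results through GetBlob, assuming an output blob named "output" that holds FP32 data (both assumptions for illustration):

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // Hypothetical helper: returns the first value of the output blob "output".
    float readFirstOutputValue(InferRequest& request) {
        Blob::Ptr output = request.GetBlob("output");  // throws if no blob with this name exists
        // Map the blob's memory for read-only access.
        MemoryBlob::CPtr moutput = as<MemoryBlob>(output);
        auto holder = moutput->rmap();
        const float* data = holder.as<const float*>();
        return data[0];
    }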
void SetBlob(
const std::string& name,
const Blob::Ptr& data,
const PreProcessInfo& info
)
Sets a blob with pre-processing information.
Note: Returns an error if the data blob is an output blob.
Parameters:
name - Name of the input blob.
data - A reference to the input blob. The type of the Blob must correspond to the network input precision and size.
info - Pre-processing information for the blob.
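A sketch of passing an arbitrarily shaped image blob together with pre-processing settings, so the plugin resizes and converts it on the fly; the blob name "input" and the chosen settings are assumptions:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // Hypothetical helper: attach pre-processing to an input blob.
    void setPreprocessedInput(InferRequest& request, const Blob::Ptr& image) {
        PreProcessInfo info;
        info.setResizeAlgorithm(ResizeAlgorithm::RESIZE_BILINEAR);  // resize to the network input size
        info.setColorFormat(ColorFormat::RGB);                      // treat the data as RGB
        request.SetBlob("input", image, info);
    }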
const PreProcessInfo& GetPreProcess(const std::string& name) const
Gets pre-processing information for the input data.
Parameters:
name - Name of the input blob.
Returns:
A constant reference to the pre-processing information for the blob with the given name
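A short sketch of inspecting the configured pre-processing for an input (the name "input" is an assumption):

    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    // Hypothetical helper: report whether bilinear resize is configured for "input".
    void printResizeSetting(InferRequest& request) {
        const PreProcessInfo& info = request.GetPreProcess("input");
        if (info.getResizeAlgorithm() == ResizeAlgorithm::RESIZE_BILINEAR) {
            std::cout << "Bilinear resize is enabled for this input" << std::endl;
        }
    }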
void Infer()
Infers specified input(s) in synchronous mode.
Note: Blocks all methods of InferRequest while the request is ongoing (running or waiting in the queue).
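A minimal synchronous-inference sketch, assuming the input blobs were already set and the output blob is named "output" (an assumed name):

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // Sketch: run inference synchronously and fetch the result.
    void runSync(InferRequest& request) {
        request.Infer();  // blocks the calling thread until the result is ready
        Blob::Ptr result = request.GetBlob("output");  // "output" is an assumed output name
        // ... post-process the result ...
    }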
void Cancel()
Cancels inference request.
std::map<std::string, InferenceEngineProfileInfo> GetPerformanceCounts() const
Queries performance measures per layer to identify the most time-consuming layers.
Note: Not all plugins provide meaningful data.
Returns:
A map of layer names to profiling information for each layer
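A sketch of printing per-layer timings. Profiling data is typically only available if performance counting was enabled when the network was loaded (for example via the KEY_PERF_COUNT configuration option) and the plugin supports it:

    #include <inference_engine.hpp>
    #include <iostream>
    #include <map>

    using namespace InferenceEngine;

    // Sketch: print the real execution time of every executed layer.
    void printPerfCounts(InferRequest& request) {
        std::map<std::string, InferenceEngineProfileInfo> perf = request.GetPerformanceCounts();
        for (const auto& entry : perf) {
            const InferenceEngineProfileInfo& info = entry.second;
            if (info.status == InferenceEngineProfileInfo::EXECUTED) {
                std::cout << entry.first << ": " << info.realTime_uSec << " us" << std::endl;
            }
        }
    }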
void SetInput(const BlobMap& inputs)
Sets input data to infer.
Note: Memory allocation does not happen.
Parameters:
inputs - A reference to a map of input blobs accessed by input names. The type of Blob must correspond to the network input precision and size.
void SetOutput(const BlobMap& results)
Sets data that will contain the result of the inference.
Note: Memory allocation does not happen.
Parameters:
results - A reference to a map of result blobs accessed by output names. The type of Blob must correspond to the network output precision and size.
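A combined sketch for SetInput and SetOutput, handing whole maps of pre-created blobs to the request; the names "input" and "output" and the helper are assumptions:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // Hypothetical helper: bind all inputs and outputs at once; no allocation or copy happens.
    void bindAll(InferRequest& request, const Blob::Ptr& in, const Blob::Ptr& out) {
        BlobMap inputs  = {{"input", in}};
        BlobMap outputs = {{"output", out}};
        request.SetInput(inputs);
        request.SetOutput(outputs);
    }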
void SetBatch(const int batch)
Sets a new batch size when dynamic batching is enabled in the executable network that created this request.
Parameters:
batch - New batch size to be used by all subsequent inference calls for this request.
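A sketch of using SetBatch. It assumes dynamic batching is supported by the target device and enabled through the KEY_DYN_BATCH_ENABLED configuration option when the network is loaded; the device name and batch sizes are placeholders:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // Sketch: create a request with dynamic batching and limit the effective batch size.
    InferRequest createDynBatchRequest(Core& core, CNNNetwork& network) {
        network.setBatchSize(8);  // upper bound for the dynamic batch
        ExecutableNetwork exec = core.LoadNetwork(
            network, "CPU", {{PluginConfigParams::KEY_DYN_BATCH_ENABLED, PluginConfigParams::YES}});
        InferRequest request = exec.CreateInferRequest();
        request.SetBatch(2);  // following inference calls process only the first 2 items of the batch
        return request;
    }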
void StartAsync()
Starts inference of the specified input(s) in asynchronous mode.
Note: It returns immediately; inference also starts immediately.
StatusCode Wait(int64_t millis_timeout = RESULT_READY)
Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first.
There are special cases when millis_timeout is equal to a value of the WaitMode enum:
STATUS_ONLY - immediately returns the inference status without blocking or interrupting the current thread.
RESULT_READY - waits until the inference result becomes available.
Parameters:
millis_timeout - Maximum duration in milliseconds to block for.
Returns:
A status code of the operation
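A sketch combining StartAsync and Wait; the output name is an assumption:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // Sketch: start inference asynchronously, overlap it with other work, then wait.
    void runAsync(InferRequest& request) {
        request.StartAsync();  // returns immediately; inference runs in the background

        // ... do other work on this thread while inference is running ...

        // RESULT_READY blocks until the result becomes available.
        StatusCode status = request.Wait(InferRequest::WaitMode::RESULT_READY);
        if (status == StatusCode::OK) {
            Blob::Ptr result = request.GetBlob("output");  // assumed output name
            // ... post-process the result ...
        }
    }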
template <typename F>
void SetCompletionCallback(F callbackToSet)
Sets a callback function that will be called on success or failure of an asynchronous request.
Parameters:
callbackToSet - A callback object that will be called when inference finishes.
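A sketch of registering a completion callback before starting asynchronous execution; a no-argument lambda is used here, and the output name is an assumption. The request must stay alive until the callback has run:

    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    // Sketch: post-process results from a completion callback instead of blocking in Wait().
    void runWithCallback(InferRequest& request) {
        request.SetCompletionCallback([&request]() {
            // Called when inference finishes, successfully or with an error.
            std::cout << "Inference finished" << std::endl;
            Blob::Ptr result = request.GetBlob("output");  // assumed output name
            // ... post-process the result ...
        });
        request.StartAsync();
    }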
std::vector<VariableState> QueryState()
Gets the state control interface for the given infer request.
Note: State control is essential for recurrent networks.
Returns:
A vector of VariableState objects
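A sketch of resetting the memory states of a stateful (for example recurrent) network before a new sequence is processed:

    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    // Sketch: iterate over all variable states and reset them to their initial values.
    void resetStates(InferRequest& request) {
        for (VariableState& state : request.QueryState()) {
            std::cout << "Resetting state: " << state.GetName() << std::endl;
            state.Reset();
        }
    }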
operator std::shared_ptr< IInferRequest > ()
Converts to an IInferRequest shared pointer that can be used directly in CreateInferRequest functions.
Returns:
A shared pointer to IInferRequest interface
bool operator ! () const
Checks if current InferRequest object is not initialized.
Returns:
True if the current InferRequest object is not initialized; false otherwise
operator bool () const
Checks if current InferRequest object is initialized.
Returns:
True if the current InferRequest object is initialized; false otherwise
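A small sketch of using the boolean conversion to guard against an uninitialized request wrapper:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // Sketch: only run inference if the request actually wraps an implementation.
    void safeInfer(InferRequest& request) {
        if (!request) {
            return;  // the wrapper holds no implementation
        }
        request.Infer();
    }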
bool operator != (const InferRequest&) const
Compares whether this request wraps the same impl underneath.
Returns:
True if the current InferRequest object does not wrap the same impl as the operator's argument
bool operator == (const InferRequest&) const
Compares whether this request wraps the same impl underneath.
Returns:
True if the current InferRequest object wraps the same impl as the operator's argument