class InferenceEngine::CNNNetwork¶
Overview¶
This class contains all the information about the Neural Network and the related binary information. More…
#include <ie_cnn_network.h>
class CNNNetwork
{
public:
// construction
CNNNetwork();
CNNNetwork(std::shared_ptr<ICNNNetwork> network);
CNNNetwork(
const std::shared_ptr<ngraph::Function>& network,
const std::vector<std::shared_ptr<IExtension>>& exts = {}
);
// methods
OutputsDataMap getOutputsInfo() const;
InputsDataMap getInputsInfo() const;
size_t layerCount() const;
const std::string& getName() const;
void setBatchSize(const size_t size);
size_t getBatchSize() const;
operator ICNNNetwork::Ptr ();
operator ICNNNetwork & ();
operator const ICNNNetwork & () const;
std::shared_ptr<ngraph::Function> getFunction();
std::shared_ptr<const ngraph::Function> getFunction() const;
void addOutput(const std::string& layerName, size_t outputIndex = 0);
ICNNNetwork::InputShapes getInputShapes() const;
void reshape(const ICNNNetwork::InputShapes& inputShapes);
void serialize(const std::string& xmlPath, const std::string& binPath = {}) const;
void serialize(std::ostream& xmlBuf, std::ostream& binBuf) const;
void serialize(std::ostream& xmlBuf, Blob::Ptr& binBlob) const;
std::string getOVNameForTensor(const std::string& orig_name) const;
};
Detailed Documentation¶
This class contains all the information about the Neural Network and the related binary information.
Construction¶
CNNNetwork()
A default constructor.
CNNNetwork(std::shared_ptr<ICNNNetwork> network)
Allows helper class to manage lifetime of network object.
Deprecated Don’t use this constructor. It will be removed soon.
Parameters:
network | Pointer to the network object
CNNNetwork(
const std::shared_ptr<ngraph::Function>& network,
const std::vector<std::shared_ptr<IExtension>>& exts = {}
)
A constructor from an ngraph::Function object. This constructor wraps the existing ngraph::Function; if you want to avoid modification of the original Function, create a copy first.
Parameters:
network | Pointer to the ngraph::Function object
exts | Vector of pointers to IE extension objects
Methods¶
OutputsDataMap getOutputsInfo() const
Gets the network output Data node information. The received info is stored in the returned InferenceEngine::OutputsDataMap object.
Applicable to networks with both single and multiple outputs.
This method needs to be called to find out the OpenVINO output names, so they can be used later when calling InferenceEngine::InferRequest::GetBlob or InferenceEngine::InferRequest::SetBlob.
If you want to use framework names, use the InferenceEngine::CNNNetwork::getOVNameForTensor method to map framework names to OpenVINO names.
Returns:
The InferenceEngine::OutputsDataMap object.
InputsDataMap getInputsInfo() const
Gets the network input Data node information. The received info is stored in the returned InputsDataMap object.
Applicable to networks with both single and multiple inputs. This method needs to be called to find out the OpenVINO input names, so they can be used later when calling InferenceEngine::InferRequest::SetBlob.
If you want to use framework names, use the InferenceEngine::CNNNetwork::getOVNameForTensor method to map framework names to OpenVINO names.
Returns:
The InferenceEngine::InputsDataMap object.
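A minimal sketch of discovering input and output names; "model.xml" is a placeholder path for your IR file:

#include <inference_engine.hpp>
#include <iostream>

InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

// The map keys are the OpenVINO names to use with SetBlob/GetBlob.
for (const auto& input : network.getInputsInfo())
    std::cout << "Input: " << input.first << std::endl;
for (const auto& output : network.getOutputsInfo())
    std::cout << "Output: " << output.first << std::endl;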
size_t layerCount() const
Returns the number of layers in the network as an integer value.
Returns:
The number of layers as an integer value
const std::string& getName() const
Returns the network name.
Returns:
Network name
void setBatchSize(const size_t size)
Changes the inference batch size.
This method has several limitations, and its use is not recommended. Instead, set the batch in the input shape and call InferenceEngine::CNNNetwork::reshape, as shown in the sketch below.
The current implementation sets the batch size as the first dimension of all layers in the network. Before calling it, make sure that all your layers have the batch in the first dimension; otherwise the method works incorrectly. This limitation is resolved by the shape inference feature via the InferenceEngine::CNNNetwork::reshape method. For more details, refer to the Shape Inference section of the documentation.
Parameters:
size | Size of batch to set
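As the note above recommends, a batch change is better expressed through reshape. A minimal sketch, assuming an existing CNNNetwork instance named network whose inputs all have the batch in the first dimension:

// Set a new batch of 8 via reshape rather than setBatchSize.
auto shapes = network.getInputShapes();  // input name -> SizeVector
for (auto& shape : shapes)
    shape.second[0] = 8;                 // batch is assumed to be dim 0
network.reshape(shapes);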
size_t getBatchSize() const
Gets the inference batch size.
Returns:
The size of batch as a size_t value
operator ICNNNetwork::Ptr ()
An overloaded cast operator to get a shared pointer to the current network.
Deprecated InferenceEngine::ICNNNetwork interface is deprecated
Returns:
A shared pointer of the current network
operator ICNNNetwork & ()
An overloaded cast operator to get a reference to the current network.
Deprecated InferenceEngine::ICNNNetwork interface is deprecated
Returns:
A reference to the current network
operator const ICNNNetwork & () const
An overloaded cast operator to get a const reference to the current network.
Deprecated InferenceEngine::ICNNNetwork interface is deprecated
Returns:
A const reference to the current network
std::shared_ptr<ngraph::Function> getFunction()
Returns the nGraph function.
Returns:
The nGraph function
std::shared_ptr<const ngraph::Function> getFunction() const
Returns the constant nGraph function.
Returns:
The constant nGraph function
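A minimal inspection sketch, assuming an existing CNNNetwork instance named network; the null check is defensive, since networks without an nGraph representation return an empty pointer:

// Inspect the underlying nGraph function, if present.
if (auto function = network.getFunction()) {
    std::cout << "Function: " << function->get_friendly_name()
              << ", ops: " << function->get_ops().size() << std::endl;
}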
void addOutput(const std::string& layerName, size_t outputIndex = 0)
Adds a layer's output to the list of network outputs.
Parameters:
layerName | Name of the layer
outputIndex | Index of the output
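A minimal sketch; "conv5_3" is a hypothetical layer name, so substitute one from your own model:

// Register an intermediate layer's output as a network output.
network.addOutput("conv5_3", 0);  // "conv5_3" is hypothetical
// The added output then appears in getOutputsInfo() and its blob can be
// fetched with InferenceEngine::InferRequest::GetBlob after inference.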
ICNNNetwork::InputShapes getInputShapes() const
Helper method to collect all input shapes with the names of the corresponding Data objects.
Returns:
Map of pairs: input name and its dimension.
void reshape(const ICNNNetwork::InputShapes& inputShapes)
Runs shape inference with new input shapes for the network.
Parameters:
inputShapes | A map of pairs: name of the corresponding data and its dimension.
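A minimal sketch, assuming a single four-dimensional (NCHW) input on an existing network instance; the new spatial size is illustrative only:

// Resize the spatial dimensions of the first input and re-infer shapes.
auto shapes = network.getInputShapes();
auto& dims = shapes.begin()->second;  // SizeVector of the first input
if (dims.size() == 4) {
    dims[2] = 448;  // height
    dims[3] = 448;  // width
}
network.reshape(shapes);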
void serialize(const std::string& xmlPath, const std::string& binPath = {}) const
Serializes the network to IR and weights files.
Parameters:
xmlPath | Path to the output IR file.
binPath | Path to the output weights file. The parameter is skipped in case of executable graph info serialization.
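A minimal sketch; the output paths are placeholders:

// Write the IR topology and weights files next to each other.
network.serialize("model_saved.xml", "model_saved.bin");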
void serialize(std::ostream& xmlBuf, std::ostream& binBuf) const
Serializes the network to IR and weights streams.
Parameters:
xmlBuf | Output IR stream.
binBuf | Output weights stream.
void serialize(std::ostream& xmlBuf, Blob::Ptr& binBlob) const
Serializes the network to an IR stream and a weights Blob::Ptr.
Parameters:
xmlBuf | Output IR stream.
binBlob | Output weights Blob::Ptr.
std::string getOVNameForTensor(const std::string& orig_name) const
Maps a framework tensor name to an OpenVINO name.
Parameters:
orig_name | Framework tensor name
Returns:
OpenVINO name
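A minimal sketch; the framework tensor name "prob:0" is hypothetical:

// Map a framework tensor name (hypothetical) to its OpenVINO name.
std::string ov_name = network.getOVNameForTensor("prob:0");
// Use ov_name with InferenceEngine::InferRequest::GetBlob / SetBlob
// instead of the framework name.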