class ngraph::pass::low_precision::MarkupCanBeQuantized
Overview
The MarkupCanBeQuantized transformation marks Convolution, ConvolutionBackpropData, GroupConvolution, and Concat operations as quantizable or not. If an operation cannot be quantized, a PrecisionsAttribute instance with an empty precisions list is created for it. More…
#include <markup_can_be_quantized.hpp>
class MarkupCanBeQuantized: public ov::pass::ModelPass
{
public:
    // construction
    MarkupCanBeQuantized(const std::vector<ngraph::element::Type> defaultPrecisions = { ngraph::element::u8, ngraph::element::i8 });

    // methods
    OPENVINO_RTTI("MarkupCanBeQuantized", "0");
    bool run_on_model(const std::shared_ptr<ngraph::Function>& m);
};
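The pass is normally executed together with the other low-precision markup passes, but it can also be registered on its own. The following is a minimal sketch of running it through ov::pass::Manager with the default precision list; the include path for the header is an assumption and may differ depending on how the low-precision transformations library is consumed in your build.

#include <openvino/core/model.hpp>
#include <openvino/pass/manager.hpp>
#include <low_precision/markup_can_be_quantized.hpp>

void markup_can_be_quantized(const std::shared_ptr<ov::Model>& model)
{
    ov::pass::Manager manager;
    // Register the markup pass with the default precision list { u8, i8 }.
    manager.register_pass<ngraph::pass::low_precision::MarkupCanBeQuantized>();
    // After the run, operations that cannot be quantized carry a
    // PrecisionsAttribute with an empty precisions list.
    manager.run_passes(model);
}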
Inherited Members
public:
    // typedefs
    typedef DiscreteTypeInfo type_info_t;

    // methods
    bool get_property(const PassPropertyMask& prop_mask) const;
    void set_name(const std::string& name);
    std::string get_name() const;
    void set_callback(const param_callback& callback);
    virtual void set_pass_config(const std::shared_ptr<PassConfig>& pass_config);
    std::shared_ptr<PassConfig> get_pass_config();
    bool m_transformation_callback(const std::shared_ptr<const Node>& node);
    bool transformation_callback(const std::shared_ptr<const Node>& node);
    virtual const type_info_t& get_type_info() const = 0;
    OPENVINO_RTTI("ov::pass::ModelPass");
    virtual bool run_on_function(std::shared_ptr<ov::Model> m);
    virtual bool run_on_model(const std::shared_ptr<ov::Model>& m);
Detailed Documentation
The MarkupCanBeQuantized transformation marks Convolution, ConvolutionBackpropData, GroupConvolution, and Concat operations as quantizable or not. If an operation cannot be quantized, a PrecisionsAttribute instance with an empty precisions list is created for it.
For more details about the transformation, refer to the MarkupCanBeQuantized page in the Inference Engine Developer Guide.
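The constructor also accepts a custom precision list. The sketch below is an illustrative (not recommended or canonical) configuration that restricts the markup to unsigned 8-bit precision and invokes the pass directly through run_on_model instead of a pass manager; the include paths are assumptions.

#include <openvino/core/model.hpp>
#include <low_precision/markup_can_be_quantized.hpp>

void markup_u8_only(const std::shared_ptr<ov::Model>& model)
{
    // Hypothetical configuration: treat only u8 as a supported quantized precision.
    ngraph::pass::low_precision::MarkupCanBeQuantized pass({ ngraph::element::u8 });
    // Run the markup directly on the model.
    pass.run_on_model(model);
}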