furiosa.quantizer.frontend.onnx.quantizer package

Submodules

furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper module

class furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper.FuseClipper(*args, **kwds)

Bases: furiosa.quantizer.interfaces.transformer.Transformer

transform(model: onnx.onnx_ml_pb2.ModelProto) → onnx.onnx_ml_pb2.ModelProto
class furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper.Pattern_1(model)

Bases: furiosa.quantizer.frontend.onnx.transformer.ONNXTransformer

transform

prev -> Conv -> Relu -> next

to

prev -> Conv -> next

make_new_node(matched_nodes)
pattern_matching(base_node)
pattern_to_match = ['Conv', 'Relu']
class furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper.Pattern_2(model)

Bases: furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper.Pattern_1

transform

prev -> Conv -> Clip -> next

to

prev -> Conv -> next

pattern_to_match = ['Conv', 'Clip']
class furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper.Pattern_3(model)

Bases: furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper.Pattern_1

transform

prev -> Add -> Relu -> next

to

prev -> Add -> next

make_new_node(matched_nodes)
pattern_to_match = ['Add', 'Relu']
class furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper.Pattern_4(model)

Bases: furiosa.quantizer.frontend.onnx.quantizer.fuse_clipper.Pattern_3

transform

prev -> Add -> Clip -> next

to

prev -> Add -> next

pattern_to_match = ['Add', 'Clip']
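The four patterns above all apply the same mechanism: find a Conv/Add node immediately followed by a clipper (Relu or Clip), then splice the clipper out while keeping downstream consumers connected. A minimal, self-contained sketch of that mechanism in plain Python (toy node dicts, not the actual ONNXTransformer API, which operates on onnx.ModelProto graphs):

```python
# Clipper patterns fused by the transformers above.
PATTERNS = [("Conv", "Relu"), ("Conv", "Clip"), ("Add", "Relu"), ("Add", "Clip")]

def fuse_clippers(nodes):
    """Remove a clipper node that directly follows Conv/Add, rewiring outputs."""
    fused = []
    i = 0
    while i < len(nodes):
        node = nodes[i]
        nxt = nodes[i + 1] if i + 1 < len(nodes) else None
        if nxt and (node["op"], nxt["op"]) in PATTERNS and nxt["input"] == node["output"]:
            # Keep the producer, but let it emit the clipper's output name
            # so downstream consumers stay connected.
            fused.append({"op": node["op"], "input": node["input"], "output": nxt["output"]})
            i += 2  # skip the clipper
        else:
            fused.append(node)
            i += 1
    return fused

nodes = [
    {"op": "Conv", "input": "x", "output": "conv_out"},
    {"op": "Relu", "input": "conv_out", "output": "relu_out"},
    {"op": "Add", "input": "relu_out", "output": "add_out"},
]
print([n["op"] for n in fuse_clippers(nodes)])  # ['Conv', 'Add']
```

In the real transformers the clipping bound is not lost: it is folded into the quantization range of the producer's output, which is why the fusion is only valid inside a quantization pipeline.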

furiosa.quantizer.frontend.onnx.quantizer.quantizer module

class furiosa.quantizer.frontend.onnx.quantizer.quantizer.DFGImportable(model, raw_data)

Bases: object

remove_quantizelinear_operator_with_initializer()
transform()
transform_to_integer_arithmetic_operator()
class furiosa.quantizer.frontend.onnx.quantizer.quantizer.FuriosaONNXQuantizer(model: onnx.onnx_ml_pb2.ModelProto, per_channel: bool, static: bool, mode: furiosa.quantizer.frontend.onnx.quantizer.utils.QuantizationMode, dynamic_ranges: Dict[str, Tuple[float, float]], raw_data=True)

Bases: object

build_quantized_model()
check_model()
make_quant_dequant_node(node_input)
pre_optimize()
quantize() → onnx.onnx_ml_pb2.ModelProto
quantize_model()
class furiosa.quantizer.frontend.onnx.quantizer.quantizer.ONNXRuntimeExecutable(model, raw_data)

Bases: furiosa.quantizer.frontend.onnx.quantizer.quantizer.DFGImportable

transform()

furiosa.quantizer.frontend.onnx.quantizer.utils module

class furiosa.quantizer.frontend.onnx.quantizer.utils.QuantizationMode

Bases: object

dfg = 0
fake = 1
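The two modes differ in the model they emit: dfg targets import into Furiosa's integer pipeline, while fake keeps float operators and simulates quantization with quantize–dequantize pairs. The arithmetic that fake quantization simulates is the affine mapping r = S(q - z); a sketch of the round trip in plain Python (assumed semantics for illustration, not the quantizer's actual code):

```python
def quantize(r, scale, zero_point, qmin=-128, qmax=127):
    """Map a real value to an int8 code: q = clamp(round(r / S) + z)."""
    q = round(r / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate real value: r = S * (q - z)."""
    return scale * (q - zero_point)

scale, zp = 0.05, 0
q = quantize(1.337, scale, zp)        # 27
r_hat = dequantize(q, scale, zp)      # ~1.35: the "fake quantized" value
```

The gap between r and r_hat is the quantization error that running the fake-quantized model in onnxruntime lets you measure before committing to integer execution.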
furiosa.quantizer.frontend.onnx.quantizer.utils.activation_scale_zeropoint(rmin, rmax, activation_qtype)
furiosa.quantizer.frontend.onnx.quantizer.utils.append_suffix(name: str, suffix: List[str]) → List[str]

Helper function to append suffixes to the given name.
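Its presumed behavior, judging from the signature and docstring (a sketch, not the library source):

```python
def append_suffix(name, suffixes):
    """Append each suffix to the name, yielding one derived name per suffix."""
    return [name + s for s in suffixes]

# Hypothetical use: deriving per-tensor quantization parameter names.
append_suffix("conv0_weight", ["_scale", "_zero_point"])
# ['conv0_weight_scale', 'conv0_weight_zero_point']
```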

furiosa.quantizer.frontend.onnx.quantizer.utils.asymmetric_scale_zeropoint(rmin, rmax, activation_qtype)

source: onnxruntime quantization tools
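An asymmetric scheme of the kind onnxruntime's quantization tools use picks scale and zero point so that [rmin, rmax], widened to include 0, maps onto the full quantized range. A self-contained sketch under those assumptions:

```python
def asymmetric_scale_zeropoint(rmin, rmax, qmin=0, qmax=255):
    """Compute (scale, zero_point) mapping [rmin, rmax] onto [qmin, qmax].

    The range is widened to include 0.0 so that a real zero is exactly
    representable, which keeps zero-padding exact after quantization.
    """
    rmin = min(rmin, 0.0)
    rmax = max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale) if scale else qmin
    return scale, zero_point

s, z = asymmetric_scale_zeropoint(-1.0, 3.0)  # uint8 activation range
# s == 4/255, z == round(1.0 / s) == 64
```

With these parameters, a real 0.0 quantizes exactly to the zero point z, and the whole observed activation range is representable without clipping.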

furiosa.quantizer.frontend.onnx.quantizer.utils.calculate_activation_quant_params(dynamic_ranges: Dict, node_list: List[onnx.onnx_ml_pb2.NodeProto], activation_qtype: onnx.onnx_ml_pb2.TensorProto, value_info: Dict) → Dict
furiosa.quantizer.frontend.onnx.quantizer.utils.calculate_weight_quant_params(data: numpy.array, weight_qtype: onnx.onnx_ml_pb2.TensorProto, name: str) → Tuple[int, float]
Parameters
  • data – data to quantize

  • weight_qtype – quantization data type of weight

  • name – name of tensor to quantize

Returns

quantized weights, zero point, scale

To pack weights, we compute a linear transformation
  • when the data type is uint8, from [rmin, rmax] -> [0, 2^b - 1], and

  • when the data type is int8, from [-m, m] -> [-(2^{b-1}-1), 2^{b-1}-1], where

    m = max(abs(rmin), abs(rmax))

and add the necessary intermediate nodes to transform the quantized weight back to the full-precision weight using the equation r = S(q - z), where

r: real original value
q: quantized value
S: scale
z: zero point

source: onnxruntime quantization tools
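The two mappings above can be made concrete with a short worked instance for b = 8 (a plain-Python sketch of the described scheme, not the library function, which operates on numpy arrays):

```python
def weight_quant_params(data, qtype="int8"):
    """Return (zero_point, scale) for a list of float weights, b = 8."""
    rmin, rmax = min(data), max(data)
    if qtype == "uint8":
        # Asymmetric: [rmin, rmax] -> [0, 255]
        scale = (rmax - rmin) / 255.0
        zero_point = round(-rmin / scale)
    else:
        # Symmetric int8: [-m, m] -> [-127, 127], zero point pinned at 0
        m = max(abs(rmin), abs(rmax))
        scale = m / 127.0
        zero_point = 0
    return zero_point, scale

zp, s = weight_quant_params([-0.5, 0.1, 0.254])  # int8: m = 0.5
# zp == 0, s == 0.5 / 127
```

Pinning the int8 zero point at 0 is what makes the scheme symmetric: dequantization reduces to r = S * q, which simplifies the integer arithmetic for weights.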

furiosa.quantizer.frontend.onnx.quantizer.utils.get_input_tensors(model: onnx.onnx_ml_pb2.ModelProto) → List[Tuple[str, List[int], str]]
furiosa.quantizer.frontend.onnx.quantizer.utils.get_qrange(qtype)

source: onnxruntime quantization tools

furiosa.quantizer.frontend.onnx.quantizer.utils.get_vi_dtype(vi)

Returns the data type of the given value_info.

Parameters

vi – graph.value_info

Returns

graph.value_info.type.tensor_type.elem_type

furiosa.quantizer.frontend.onnx.quantizer.utils.is_float_tensor(vi)

Module contents