furiosa.optimizer.frontend.onnx.transformer package

Submodules

furiosa.optimizer.frontend.onnx.transformer.convert_negative_pads_to_slice module

class furiosa.optimizer.frontend.onnx.transformer.convert_negative_pads_to_slice.ConvertNegativePadsToSlice

Bases: furiosa.optimizer.interfaces.transformer.Transformer

transform(model: onnx.onnx_ml_pb2.ModelProto) onnx.onnx_ml_pb2.ModelProto
class furiosa.optimizer.frontend.onnx.transformer.convert_negative_pads_to_slice.Pattern_1(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

prev --> Pad --> next

to

prev --> Slice --> Pad --> next

if
  1. Pad’s pads input (pad.input[1]) is an initializer

  2. at least one of the pads values is negative

  3. the sum of the absolute values of the negative pads in axis i does not exceed the corresponding input_shape[i] (otherwise, the model is invalid)

  4. the absolute value of each negative pads value in axis i is less than the corresponding input_shape[i] (otherwise, it leads to an invalid model or ‘nan’ output)

(An equivalent slicing computation is sketched after this class entry.)

make_new_init_and_vi(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
make_new_node(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
pattern_condition_checker(nodes_to_check: Iterable[onnx.onnx_ml_pb2.NodeProto]) bool
pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['Pad']
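The rewrite relies on the fact that a negative entry in Pad’s pads crops the corresponding side of the input, which is exactly what a Slice expresses. Below is a minimal numpy sketch of that equivalence; it illustrates the arithmetic only, it is not the transformer’s implementation, and the tensor values and pads are made up:

    import numpy as np

    # ONNX pads layout: [x1_begin, x2_begin, ..., x1_end, x2_end, ...]
    x = np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4)
    pads = np.array([0, 0, -1, 1, 0, 0, 2, -1])  # negative entries crop the input

    rank = x.ndim
    begins, ends = pads[:rank], pads[rank:]

    # Slice step: drop the portions removed by the negative pads.
    starts = np.where(begins < 0, -begins, 0)
    stops = np.array(x.shape) - np.where(ends < 0, -ends, 0)
    sliced = x[tuple(slice(int(b), int(e)) for b, e in zip(starts, stops))]

    # Pad step: apply only the remaining non-negative pads.
    nonneg_pads = np.stack([np.maximum(begins, 0), np.maximum(ends, 0)], axis=1)
    rewritten = np.pad(sliced, nonneg_pads, mode="constant")

    # The result has the shape the original Pad with negative pads would produce:
    # dim_i + begin_i + end_i along each axis.
    assert rewritten.shape == tuple(int(d + b + e) for d, b, e in zip(x.shape, begins, ends))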
class furiosa.optimizer.frontend.onnx.transformer.convert_negative_pads_to_slice.Pattern_2(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

prev --> Pad --> next

to

prev --> next

if
  1. Pad’s pads values are all zero

pattern_condition_checker(nodes_to_check: Iterable[onnx.onnx_ml_pb2.NodeProto]) bool
pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['Pad']

furiosa.optimizer.frontend.onnx.transformer.convert_prelu_to_relu module

class furiosa.optimizer.frontend.onnx.transformer.convert_prelu_to_relu.ConvertPReluToRelu

Bases: furiosa.optimizer.interfaces.transformer.Transformer

transform(model: onnx.onnx_ml_pb2.ModelProto) onnx.onnx_ml_pb2.ModelProto
class furiosa.optimizer.frontend.onnx.transformer.convert_prelu_to_relu.Pattern_1(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

PRelu(x) = slope * x if x < 0, x if x >=0

into

(1 - slope) * Relu(x) + slope * x

if
  1. PRelu’s input[1] is an initializer

make_new_init(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
make_new_node(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
make_new_vi(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
pattern_condition_checker(nodes_to_check: Iterable[onnx.onnx_ml_pb2.NodeProto]) bool
pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['PRelu']
class furiosa.optimizer.frontend.onnx.transformer.convert_prelu_to_relu.Pattern_2(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

PRelu(x) = slope * x if x < 0, x if x >=0

into

(1 - slope) * Relu(x) + slope * x

if
  1. PRelu’s input[1] is not an initializer

make_new_init(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
make_new_node(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
make_new_vi(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
pattern_condition_checker(nodes_to_check: Iterable[onnx.onnx_ml_pb2.NodeProto]) bool
pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['PRelu']
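Both patterns rely on the same algebraic identity: (1 - slope) * Relu(x) + slope * x equals PRelu(x), because the first term contributes (1 - slope) * x only where x >= 0 and vanishes elsewhere. A small numpy check of the identity (illustrative shapes and values, not the transformer’s implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal((2, 8, 4, 4)).astype(np.float32)
    slope = rng.random((8, 1, 1)).astype(np.float32)  # per-channel slope, broadcast over H and W

    prelu = np.where(x < 0, slope * x, x)
    rewritten = (1 - slope) * np.maximum(x, 0) + slope * x

    assert np.allclose(prelu, rewritten, atol=1e-6)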

furiosa.optimizer.frontend.onnx.transformer.fuse_batchnorm module

class furiosa.optimizer.frontend.onnx.transformer.fuse_batchnorm.FuseBatchNorm

Bases: furiosa.optimizer.interfaces.transformer.Transformer

transform(model: onnx.onnx_ml_pb2.ModelProto) onnx.onnx_ml_pb2.ModelProto
class furiosa.optimizer.frontend.onnx.transformer.fuse_batchnorm.Pattern_1(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

prev --> Conv --> BatchNormalization --> next

to

prev --> Conv --> next

pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['Conv', 'BatchNormalization']
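The fusion folds the BatchNormalization scale and shift into the convolution’s weight and bias. Below is a numpy sketch of that folding, checked on a 1x1 convolution so the convolution reduces to a channel matmul; it illustrates the arithmetic only and is not the transformer’s implementation (Pattern_2 performs the analogous folding for ConvTranspose):

    import numpy as np

    rng = np.random.default_rng(0)
    cin, cout = 3, 4
    x = rng.standard_normal((1, cin, 5, 5)).astype(np.float32)
    W = rng.standard_normal((cout, cin, 1, 1)).astype(np.float32)   # Conv weight
    b = rng.standard_normal(cout).astype(np.float32)                # Conv bias
    gamma, beta = rng.standard_normal(cout), rng.standard_normal(cout)
    mean, var, eps = rng.standard_normal(cout), rng.random(cout) + 0.1, 1e-5

    def conv1x1(x, W, b):
        # A 1x1 convolution is a matmul over the channel axis.
        return np.einsum("oi,nihw->nohw", W[:, :, 0, 0], x) + b[:, None, None]

    def batchnorm(y):
        s = gamma / np.sqrt(var + eps)
        return s[:, None, None] * (y - mean[:, None, None]) + beta[:, None, None]

    # Fold the BatchNormalization into the convolution's weight and bias.
    s = gamma / np.sqrt(var + eps)
    W_folded = W * s[:, None, None, None]
    b_folded = (b - mean) * s + beta

    assert np.allclose(batchnorm(conv1x1(x, W, b)), conv1x1(x, W_folded, b_folded), atol=1e-4)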
class furiosa.optimizer.frontend.onnx.transformer.fuse_batchnorm.Pattern_2(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

prev --> ConvTranspose --> BatchNormalization --> next

to

prev --> ConvTranspose --> next

pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['ConvTranspose', 'BatchNormalization']
class furiosa.optimizer.frontend.onnx.transformer.fuse_batchnorm.Pattern_3(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

prev --> Conv --> Mul --> Add --> next

to

prev --> Conv --> next

if
  1. Mul has only one initializer

  2. Add has only one initializer

pattern_condition_checker(nodes_to_check: List[onnx.onnx_ml_pb2.NodeProto]) bool
pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['Conv', 'Mul', 'Add']
class furiosa.optimizer.frontend.onnx.transformer.fuse_batchnorm.Pattern_4(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

prev --> BatchNormalization --> next

to

prev --> Mul --> Add --> next

if prev.op_type != Conv

make_new_init(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.TensorProto]
make_new_vi(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.ValueInfoProto]
pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['BatchNormalization']
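When the producer is not a Conv, the BatchNormalization cannot be folded into a preceding weight, so it is decomposed into an equivalent per-channel Mul and Add instead. A numpy sketch of that decomposition (illustrative values; the actual initializer shapes are chosen by make_new_init):

    import numpy as np

    rng = np.random.default_rng(0)
    c = 4
    x = rng.standard_normal((1, c, 8, 8)).astype(np.float32)
    gamma, beta = rng.standard_normal(c), rng.standard_normal(c)
    mean, var, eps = rng.standard_normal(c), rng.random(c) + 0.1, 1e-5

    # BatchNormalization(x) = gamma * (x - mean) / sqrt(var + eps) + beta
    scale = gamma / np.sqrt(var + eps)   # Mul initializer
    shift = beta - scale * mean          # Add initializer

    bn = scale[:, None, None] * (x - mean[:, None, None]) + beta[:, None, None]
    mul_add = scale[:, None, None] * x + shift[:, None, None]

    assert np.allclose(bn, mul_add, atol=1e-5)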

furiosa.optimizer.frontend.onnx.transformer.fuse_gather_matmul module

class furiosa.optimizer.frontend.onnx.transformer.fuse_gather_matmul.FuseGatherMatMul

Bases: furiosa.optimizer.interfaces.transformer.Transformer

transform(model: onnx.onnx_ml_pb2.ModelProto) onnx.onnx_ml_pb2.ModelProto
class furiosa.optimizer.frontend.onnx.transformer.fuse_gather_matmul.Pattern_1(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

prev --> Gather --> MatMul --> next

to

prev --> Gather --> next

if
  1. MatMul has exactly one initializer

  2. Gather.data is defined in graph.initializer

  3. MatMul weight’s data_type == onnx.TensorProto.FLOAT

  4. rank(MatMul weight) == 2

  5. Gather.data’s data_type == onnx.TensorProto.FLOAT

  6. rank(Gather.data) == 2

  7. (Gather.axis == 0 and MatMul.input[1] is an initializer) or (Gather.axis == 1 and MatMul.input[0] is an initializer)

(An equivalent numpy computation is sketched after this class entry.)

make_new_init(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.TensorProto]
static make_new_node(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
pattern_condition_checker(nodes_to_check: Iterable[onnx.onnx_ml_pb2.NodeProto]) bool
pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['Gather', 'MatMul']
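For the Gather.axis == 0 case (e.g. an embedding lookup followed by a projection), gathering rows and then multiplying by the weight equals multiplying the whole table first and gathering from the product, which is what the fusion exploits; the Gather.axis == 1 case with the initializer on MatMul.input[0] is analogous with columns. A numpy sketch with made-up shapes:

    import numpy as np

    rng = np.random.default_rng(0)
    table = rng.standard_normal((10, 6)).astype(np.float32)   # Gather.data (initializer)
    weight = rng.standard_normal((6, 3)).astype(np.float32)   # MatMul weight (initializer)
    indices = np.array([1, 4, 4, 7])

    before = table[indices] @ weight   # Gather --> MatMul
    after = (table @ weight)[indices]  # fused: Gather on the pre-multiplied table

    assert np.allclose(before, after, atol=1e-5)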

furiosa.optimizer.frontend.onnx.transformer.infer_squeeze_axes module

class furiosa.optimizer.frontend.onnx.transformer.infer_squeeze_axes.InferSqueezeAxes

Bases: furiosa.optimizer.interfaces.transformer.Transformer

transform(model: onnx.onnx_ml_pb2.ModelProto) onnx.onnx_ml_pb2.ModelProto
class furiosa.optimizer.frontend.onnx.transformer.infer_squeeze_axes.Pattern_1(model, name_nodes=True)

Bases: furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer

transform

prev --> Squeeze (axes attribute is None) --> next

to

prev --> Squeeze (axes attribute is filled using the input’s value info) --> next

if
  1. the model’s opset < 13

  2. the axes attribute of Squeeze does not exist

  3. Squeeze.input[0] has shape info (graph input or shape-inferred value info)

(A sketch of the axes inference follows this class entry.)

make_new_node(matched_nodes: Iterable[onnx.onnx_ml_pb2.NodeProto]) List[onnx.onnx_ml_pb2.NodeProto]
pattern_condition_checker(nodes_to_check: Iterable[onnx.onnx_ml_pb2.NodeProto]) bool
pattern_matching(base_node: onnx.onnx_ml_pb2.NodeProto) Iterable[str]
pattern_to_match = ['Squeeze']
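For opset < 13, Squeeze takes axes as an attribute, and omitting it means “squeeze every dimension of size 1”. To preserve that semantics, the filled-in axes must be exactly the positions whose dimension equals 1 in the input’s shape, which the transformer reads from value info. A minimal sketch with a made-up shape:

    # Shape taken from Squeeze.input[0]'s value info (example shape).
    input_shape = [1, 3, 1, 224]
    axes = [i for i, dim in enumerate(input_shape) if dim == 1]
    assert axes == [0, 2]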

furiosa.optimizer.frontend.onnx.transformer.polish_model module

class furiosa.optimizer.frontend.onnx.transformer.polish_model.PolishModel(input_shapes: Optional[Dict[str, List[int]]] = None)

Bases: furiosa.optimizer.interfaces.transformer.Transformer[onnx.onnx_ml_pb2.ModelProto]

Essential graph transformers/optimizers

transform(model: onnx.onnx_ml_pb2.ModelProto) onnx.onnx_ml_pb2.ModelProto
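A typical invocation, assuming a model stored at model.onnx whose graph input is named input (both names are placeholders):

    import onnx

    from furiosa.optimizer.frontend.onnx.transformer.polish_model import PolishModel

    model = onnx.load("model.onnx")  # placeholder path
    polished = PolishModel(input_shapes={"input": [1, 3, 224, 224]}).transform(model)
    onnx.save(polished, "model_polished.onnx")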

furiosa.optimizer.frontend.onnx.transformer.utils module

furiosa.optimizer.frontend.onnx.transformer.utils.check_value_info(model: onnx.onnx_ml_pb2.ModelProto) None
furiosa.optimizer.frontend.onnx.transformer.utils.eliminate_initializer_from_graph_input(model: onnx.onnx_ml_pb2.ModelProto) onnx.onnx_ml_pb2.ModelProto
furiosa.optimizer.frontend.onnx.transformer.utils.eliminate_unused_initializer(model)

This function eliminates every initializer that is not used as any node’s input, regardless of which graph field it is defined in.

furiosa.optimizer.frontend.onnx.transformer.utils.eliminate_unused_input(model)
furiosa.optimizer.frontend.onnx.transformer.utils.eliminate_unused_output(model)
furiosa.optimizer.frontend.onnx.transformer.utils.eliminate_unused_protos(model)
furiosa.optimizer.frontend.onnx.transformer.utils.eliminate_unused_value_info(model)
furiosa.optimizer.frontend.onnx.transformer.utils.fix_batch_size_as_one(model)

Fix batch_size to 1 if dim_param is given.

furiosa.optimizer.frontend.onnx.transformer.utils.fixed_point(x: furiosa.optimizer.frontend.onnx.transformer.utils.T, functions: Iterable[Callable[[furiosa.optimizer.frontend.onnx.transformer.utils.T], furiosa.optimizer.frontend.onnx.transformer.utils.T]]) furiosa.optimizer.frontend.onnx.transformer.utils.T
furiosa.optimizer.frontend.onnx.transformer.utils.get_attribute(attrs: Iterable[onnx.onnx_ml_pb2.AttributeProto], attr_name: str, default: Optional[Any] = None) Any
furiosa.optimizer.frontend.onnx.transformer.utils.get_node_attributes(node: onnx.onnx_ml_pb2.NodeProto) Dict[str, Any]
furiosa.optimizer.frontend.onnx.transformer.utils.get_node_input_names(model)
furiosa.optimizer.frontend.onnx.transformer.utils.get_node_output_names(model)
furiosa.optimizer.frontend.onnx.transformer.utils.is_op_type(op_type: str, target_op_types: Iterable[str]) bool
furiosa.optimizer.frontend.onnx.transformer.utils.make_initializer_name_unique(model)
furiosa.optimizer.frontend.onnx.transformer.utils.make_unhashables_unique(values)
furiosa.optimizer.frontend.onnx.transformer.utils.name_nodes(model)
furiosa.optimizer.frontend.onnx.transformer.utils.rebuild_model(model: onnx.onnx_ml_pb2.ModelProto, new_nodes: List[onnx.onnx_ml_pb2.NodeProto], eliminate: bool = True, renaming: bool = True)

Module contents

class furiosa.optimizer.frontend.onnx.transformer.ONNXTransformer(model, name_nodes=True)

Bases: object

bridge_disconnected_nodes(node_0: onnx.onnx_ml_pb2.NodeProto, next_nodes: List[onnx.onnx_ml_pb2.NodeProto], new_input)
For a graph changed, for example,

before) prev --> node_1 --> node_0 --> next
after)  prev --> node_1 --> ( ) -/-> next

This function bridges node_1 and next as follows:

prev --> node_1 --> next

by assigning next.input[y] = node_1.output[x].
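A tiny sketch of that re-wiring on hand-made NodeProtos (op types and tensor names are hypothetical), showing only the assignment described above:

    from onnx import helper

    node_1 = helper.make_node("Relu", inputs=["prev_out"], outputs=["node_1_out"])
    node_0 = helper.make_node("Identity", inputs=["node_1_out"], outputs=["node_0_out"])
    next_node = helper.make_node("Sigmoid", inputs=["node_0_out"], outputs=["next_out"])

    # With node_0 dropped from the graph, 'next' is reconnected to node_1's output.
    next_node.input[0] = node_1.output[0]
    assert list(next_node.input) == ["node_1_out"]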

build_optimized_model(model, check=True)
check_runnable = True
copy_value_info(name)
find_next_node(node: onnx.onnx_ml_pb2.NodeProto) List[onnx.onnx_ml_pb2.NodeProto]
find_prev_node(node_input: str) onnx.onnx_ml_pb2.NodeProto
get_data_node_input(node)
get_init_node_input(node)
get_initializer_array(node_input)
get_map_values(field)
get_value_info_dtype(tensor_name: str) int
get_value_info_shape(tensor_name: str) List[int]
is_same_shape(input_1, input_2)
pattern_matcher(node, pattern_to_match: List[str])
pattern_matching(base_node)
pop_multiple_optimizer_map(nodes: List[onnx.onnx_ml_pb2.NodeProto])
pop_single_optimizer_map(node: onnx.onnx_ml_pb2.NodeProto)
transform()
transform_to_eliminate(nodes_to_remove: List[onnx.onnx_ml_pb2.NodeProto], new_input)

This function eliminates the designated nodes and bridges their previous and next nodes.

For example, if [B, C] is given to be removed from A - B - C - D, the nodes [B, C] are removed and A is connected to D, yielding A - D.

transform_to_fuse(nodes_to_remove: List[onnx.onnx_ml_pb2.NodeProto], nodes_to_add: Optional[List[onnx.onnx_ml_pb2.NodeProto]] = None, inits_to_add: Optional[List[onnx.onnx_ml_pb2.TensorProto]] = None, vis_to_add: Optional[List[onnx.onnx_ml_pb2.ValueInfoProto]] = None)
traverse_prev_node(producer_map_key: str, target_op_types: List[str])
update_graph_fields(model)
update_multiple_initializer_map(initializers: List[onnx.onnx_ml_pb2.TensorProto])
update_multiple_optimizer_map(nodes: List[onnx.onnx_ml_pb2.NodeProto], dest_name)
update_multiple_value_info_map(value_infos: List[onnx.onnx_ml_pb2.ValueInfoProto])
update_single_initializer_map(initializer: onnx.onnx_ml_pb2.TensorProto)
update_single_optimizer_map(node: onnx.onnx_ml_pb2.NodeProto, dest_name)
update_single_value_info_map(value_info: onnx.onnx_ml_pb2.ValueInfoProto)