EfficientNetV2-S
EfficientNetV2-S is the smallest and most efficient model in the EfficientNetV2 family. Introduced in the paper "EfficientNetV2: Smaller Models and Faster Training", it achieves state-of-the-art performance on image classification tasks while training much faster and being up to 6.8 times smaller than previous state-of-the-art models. It combines techniques such as the Swish activation function, Squeeze-and-Excitation blocks, and efficient channel attention to optimize performance and efficiency.
Overall
- Framework: PyTorch
- Model format: ONNX
- Model task: Image classification
- Source: torchvision
Usages
from furiosa.models.vision import EfficientNetV2s
from furiosa.runtime.sync import create_runner
image = "tests/assets/cat.jpg"
effnetv2s = EfficientNetV2s()
with create_runner(effnetv2s.model_source()) as runner:
    inputs, _ = effnetv2s.preprocess(image)
    outputs = runner.run(inputs)
    effnetv2s.postprocess(outputs)
Inputs
The input is a 3-channel image of 384x384 (height, width).
- Data Type:
numpy.float32
- Tensor Shape:
[1, 3, 384, 384]
- Memory Format: NCHW, where:
- N - batch size
- C - number of channels
- H - image height
- W - image width
- Color Order: BGR
- Optimal Batch Size (minimum: 1): <= 8
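As an illustrative sketch of the layout above (numpy-only, with a dummy array standing in for a real decoded BGR image; this is not the library's preprocess implementation):

```python
import numpy as np

# Dummy 384x384 BGR image in HWC layout, standing in for a decoded image
# (e.g. the result of cv2.imread).
hwc_bgr = np.zeros((384, 384, 3), dtype=np.uint8)

# Rearrange to the NCHW float32 tensor described above: channels first,
# plus a leading batch dimension of 1.
nchw = hwc_bgr.transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)
print(nchw.shape, nchw.dtype)  # (1, 3, 384, 384) float32
```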
Outputs
The output is a numpy.float32 tensor with the shape [1,], containing a class id. postprocess() transforms the class id into a label string.
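Conceptually, the postprocessing step reads the single class id from the output tensor and maps it to a label string. A minimal numpy sketch (using a tiny stand-in label table; the real model maps ids over the full ImageNet label set):

```python
import numpy as np

# Fake model output: a float32 tensor of shape [1,] holding a class id,
# matching the Outputs description above. 281 is used here for illustration.
outputs = [np.array([281.0], dtype=np.float32)]

# Stand-in label table; the real postprocess() uses the ImageNet labels.
labels = {281: "tabby, tabby cat"}

class_id = int(outputs[0][0])
print(labels[class_id])  # tabby, tabby cat
```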
Pre/Postprocessing
The furiosa.models.vision.EfficientNetV2s class provides preprocess and postprocess methods that convert input images to input tensors and model outputs to labels, respectively. You can find examples at EfficientNetV2-S Usage.
furiosa.models.vision.EfficientNetV2s.preprocess
Read and preprocess an image located at image_path.
Parameters:
Name | Type | Description | Default
---|---|---|---
image | Union[str, Path, ArrayLike] | A path of an image. | required
with_scaling | bool | Whether to apply model-specific techniques that involve scaling the model's input and converting its data type to float32. Refer to the code to gain a precise understanding of the techniques used. | False
Returns:
Type | Description
---|---
Tuple[ndarray, None] | The first element of the tuple is a numpy array that meets the input requirements of the model. The second element is unused in this model and has no value. For more information about the output numpy array, please refer to Inputs.
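To illustrate the documented return contract, here is a hypothetical stand-in (not the library's code) that mimics the (ndarray, None) tuple; the uint8 dtype for the unscaled path is an assumption, not taken from the library:

```python
import numpy as np
from typing import Tuple

# Hypothetical stand-in for EfficientNetV2s.preprocess, illustrating only the
# documented return contract: a (ndarray, None) tuple whose first element
# matches the model's input shape. The uint8 dtype when with_scaling=False
# is an assumption for illustration.
def preprocess_sketch(image_path: str, with_scaling: bool = False) -> Tuple[np.ndarray, None]:
    dtype = np.float32 if with_scaling else np.uint8
    tensor = np.zeros((1, 3, 384, 384), dtype=dtype)
    return tensor, None

inputs, ctx = preprocess_sketch("tests/assets/cat.jpg", with_scaling=True)
print(inputs.shape, inputs.dtype, ctx)  # (1, 3, 384, 384) float32 None
```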
furiosa.models.vision.EfficientNetV2s.postprocess
Convert the outputs of a model to a label string, such as car and cat.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_outputs | Sequence[ArrayLike] | The outputs of the model. For more information about the model's output, please refer to Outputs. | required
Returns:
Name | Type | Description
---|---|---
str | str | A classified label, e.g., "tabby, tabby cat".
Notes on the source field of this model
SDK version 0.9.0 introduced a significant change: Furiosa's quantization tool now adopts DFG (Data Flow Graph) as its output format instead of ONNX. DFG is an IR of FuriosaAI that supports more diverse quantization schemes than ONNX and is more specialized for FuriosaAI's Warboy.
The EfficientNetV2-S we offer has been quantized with furiosa-sdk 0.9.0 and is thus formatted in DFG. The ONNX file in the source field is the original f32 model, not yet quantized.
If you need a different batch size or want to start from scratch, you can either start from the DFG or use the original ONNX file (and repeat the quantization process).