YOLOv7w6Pose

YOLOv7w6 Pose Estimation Model. You can find more details at https://github.com/WongKinYiu/yolov7#pose-estimation.

Overall

Usage

import cv2

from furiosa.models.vision import YOLOv7w6Pose
from furiosa.runtime.sync import create_runner

yolo_pose = YOLOv7w6Pose()

with create_runner(yolo_pose.model_source()) as runner:
    image = cv2.imread("tests/assets/yolov5-test.jpg")
    inputs, contexts = yolo_pose.preprocess([image])
    output = runner.run(inputs)
    yolo_pose.postprocess(output, contexts=contexts)
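
postprocess() returns one list of PoseEstimationResult objects per input image (see Pre/Postprocessing below). For example, the return value of the call above can be captured and inspected:

results = yolo_pose.postprocess(output, contexts=contexts)
print(f"detected {len(results[0])} pose(s) in the first image")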

Inputs

The input is a 3-channel image of 384x640 (height, width).

  • Data Type: numpy.uint8
  • Tensor Shape: [1, 3, 384, 640]
  • Memory Format: NCHW, where
    • N - batch size
    • C - number of channels
    • H - image height
    • W - image width
  • Color Order: RGB
  • Optimal Batch Size (minimum: 1): <= 4
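
As a sanity check, the first element returned by preprocess should already match this spec. A minimal sketch, assuming the default with_scaling=False (so the dtype stays uint8) and a single input image:

import numpy as np

from furiosa.models.vision import YOLOv7w6Pose

yolo_pose = YOLOv7w6Pose()
inputs, _ = yolo_pose.preprocess(["tests/assets/yolov5-test.jpg"])
assert inputs.shape == (1, 3, 384, 640)  # N, C, H, W
assert inputs.dtype == np.uint8          # with_scaling=False keeps uint8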

Outputs

The outputs are 8 numpy.float32 tensors of the following shapes. You can refer to the postprocess() function to learn how to decode boxes, keypoints, and confidence scores.

Tensor  Shape             Data Type  Memory Format
0       (1, 18, 48, 80)   float32    NCHW
1       (1, 153, 48, 80)  float32    NCHW
2       (1, 18, 24, 40)   float32    NCHW
3       (1, 153, 24, 40)  float32    NCHW
4       (1, 18, 12, 20)   float32    NCHW
5       (1, 153, 12, 20)  float32    NCHW
6       (1, 18, 6, 10)    float32    NCHW
7       (1, 153, 6, 10)   float32    NCHW
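
The four (18-channel, 153-channel) pairs correspond to the four detection scales (strides 8, 16, 32, and 64 on the 384x640 input). The channel counts are consistent with 3 anchors x (4 box + 1 objectness + 1 class) values for the 18-channel tensors and 3 anchors x 17 keypoints x 3 values (x, y, confidence) for the 153-channel tensors, though postprocess() is the authoritative decoder. A minimal sketch that labels each raw output, reusing output from the Usage example above:

for i, tensor in enumerate(output):
    head = "detection" if tensor.shape[1] == 18 else "keypoint"
    stride = 384 // tensor.shape[2]
    print(f"output {i}: {tensor.shape} ({head} head, stride {stride})")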

Pre/Postprocessing

furiosa.models.vision.YOLOv7w6Pose class provides preprocess and postprocess methods. preprocess method converts input images to input tensors, and postprocess method converts model output tensors to a list of PoseEstimationResult. You can find examples at YOLOv7w6Pose Usage.

furiosa.models.vision.YOLOv7w6Pose.preprocess

Preprocess input images into a batch of input tensors.

Parameters:

images (Sequence[Union[str, ndarray]], required)
    Image file paths or color images as numpy arrays; together the images form a
    batch with (NHWC: Batch, Height, Width, Channel) dimensions.

with_scaling (bool, default: False)
    Whether to apply model-specific techniques that involve scaling the model's
    input and converting its data type to float32. Refer to the code to gain a
    precise understanding of the techniques used.

Returns:

Tuple[ndarray, List[Dict[str, Any]]]
    A pair of pre-processed images and per-image context. The first element is a
    stacked numpy array containing a batch of images. To learn more about the
    outputs of preprocess (i.e., model inputs), please refer to YOLOv7w6Pose
    Inputs.

    The second element is a list of dict objects, one per original image. The
    'scale' key holds the rescale ratio per width (= target/width) and height
    (= target/height), and the 'pad' key holds the number of padded pixels per
    width and height. This list must be passed to postprocess() as the contexts
    argument so that predicted coordinates can be mapped from the model's input
    coordinates back to the original image's coordinates.
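
A minimal sketch of inspecting the second element, assuming the 'scale' and 'pad' keys described above (the exact value layout may differ; refer to the source):

inputs, contexts = yolo_pose.preprocess(["tests/assets/yolov5-test.jpg"])
print(contexts[0]["scale"])  # rescale ratio(s): target/width, target/height
print(contexts[0]["pad"])    # padded pixels per width and height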

furiosa.models.vision.YOLOv7w6Pose.postprocess

Postprocess output tensors into a list of PoseEstimationResult instances. Each PoseEstimationResult contains information about the overall pose, including a bounding box, a confidence score, and keypoint details such as nose, eyes, shoulders, etc. Please refer to the following for more details.

Keypoint

The Keypoint class represents a keypoint detected by the YOLOv7w6 Pose Estimation model. It contains the following attributes:

Attribute   Description
x           The x-coordinate of the keypoint as a floating-point number.
y           The y-coordinate of the keypoint as a floating-point number.
confidence  Confidence score associated with the keypoint as a floating-point number.

See the source code for more details.

Source code in furiosa/models/vision/yolov7_w6_pose/postprocess.py
class Keypoint(BaseModel):
    x: float
    y: float
    confidence: float

PoseEstimationResult

The PoseEstimationResult class represents the overall result of the YOLOv7w6 Pose Estimation model. It includes the following attributes:

Attribute       Description
bounding_box    A list of four floating-point numbers representing the bounding box coordinates of the detected pose.
confidence      Confidence score associated with the overall pose estimation as a floating-point number.
nose            Keypoint instance for the nose.
left_eye        Keypoint instance for the left eye.
right_eye       Keypoint instance for the right eye.
left_ear        Keypoint instance for the left ear.
right_ear       Keypoint instance for the right ear.
left_shoulder   Keypoint instance for the left shoulder.
right_shoulder  Keypoint instance for the right shoulder.
left_elbow      Keypoint instance for the left elbow.
right_elbow     Keypoint instance for the right elbow.
left_wrist      Keypoint instance for the left wrist.
right_wrist     Keypoint instance for the right wrist.
left_hip        Keypoint instance for the left hip.
right_hip       Keypoint instance for the right hip.
left_knee       Keypoint instance for the left knee.
right_knee      Keypoint instance for the right knee.
left_ankle      Keypoint instance for the left ankle.
right_ankle     Keypoint instance for the right ankle.

See the source code for more details.

Source code in furiosa/models/vision/yolov7_w6_pose/postprocess.py
class PoseEstimationResult(BaseModel):
    bounding_box: List[float]
    confidence: float

    nose: Keypoint
    left_eye: Keypoint
    right_eye: Keypoint
    left_ear: Keypoint
    right_ear: Keypoint
    left_shoulder: Keypoint
    right_shoulder: Keypoint
    left_elbow: Keypoint
    right_elbow: Keypoint
    left_wrist: Keypoint
    right_wrist: Keypoint
    left_hip: Keypoint
    right_hip: Keypoint
    left_knee: Keypoint
    right_knee: Keypoint
    left_ankle: Keypoint
    right_ankle: Keypoint

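Since these are pydantic-style models, individual detections can be read attribute by attribute. A minimal sketch of consuming the results from the Usage example above (the [x1, y1, x2, y2] box ordering is an assumption, not confirmed by the source):

results = yolo_pose.postprocess(output, contexts=contexts)
for pose in results[0]:  # poses detected in the first image
    x1, y1, x2, y2 = pose.bounding_box  # assumed (x1, y1, x2, y2) ordering
    print(f"person: score={pose.confidence:.2f}, box=({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f})")
    print(f"  nose at ({pose.nose.x:.0f}, {pose.nose.y:.0f}), conf={pose.nose.confidence:.2f}")
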
Furthermore, for convenience, the YOLOv7w6Pose model includes an example visualize function. The image at the top of this document was generated using this utility function. The code used is as follows:

Usage

import cv2

from furiosa.models.vision import YOLOv7w6Pose
from furiosa.runtime.sync import create_runner

yolo_pose = YOLOv7w6Pose()

with create_runner(yolo_pose.model_source()) as runner:
    image = cv2.imread("tests/assets/pose_demo.jpg")
    inputs, contexts = yolo_pose.preprocess([image])
    output = runner.run(inputs)
    results = yolo_pose.postprocess(output, contexts=contexts)
    yolo_pose.visualize(image, results[0])
    cv2.imwrite("./pose_result.jpg", image)

furiosa.models.vision.YOLOv7w6Pose.visualize

This visualize function is an example of how to visualize the output of the model. It draws a skeleton of the human body on the input image, modifying the image in place.

Parameters:

image (ndarray, required)
    An input image.

results (List[PoseEstimationResult], required)
    A list of PoseEstimationResult objects.
Source code in furiosa/models/vision/yolov7_w6_pose/__init__.py
@staticmethod
def visualize(image: np.ndarray, results: List[PoseEstimationResult]):
    """This visualize function is an example of how to visualize the output of the model.
    It draws a skeleton of the human body on the input image in an in-place manner.

    Args:
        image: an input image
        results: a list of PoseEstimationResult objects
    """

    keypoints = [
        "nose",
        "left_eye",
        "right_eye",
        "left_ear",
        "right_ear",
        "left_shoulder",
        "right_shoulder",
        "left_elbow",
        "right_elbow",
        "left_wrist",
        "right_wrist",
        "left_hip",
        "right_hip",
        "left_knee",
        "right_knee",
        "left_ankle",
        "right_ankle",
    ]
    palette = np.array(
        [
            [255, 128, 0],
            [255, 153, 51],
            [255, 178, 102],
            [230, 230, 0],
            [255, 153, 255],
            [153, 204, 255],
            [255, 102, 255],
            [255, 51, 255],
            [102, 178, 255],
            [51, 153, 255],
            [255, 153, 153],
            [255, 102, 102],
            [255, 51, 51],
            [153, 255, 153],
            [102, 255, 102],
            [51, 255, 51],
            [0, 255, 0],
            [0, 0, 255],
            [255, 0, 0],
            [255, 255, 255],
        ],
        np.int32,
    )

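    # Limb connections as 1-based indices into the `keypoints` list above
    # (hence the `- 1` when indexing below), following the usual COCO skeleton layout.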
    skeletons = [
        [16, 14],
        [14, 12],
        [17, 15],
        [15, 13],
        [12, 13],
        [6, 12],
        [7, 13],
        [6, 7],
        [6, 8],
        [7, 9],
        [8, 10],
        [9, 11],
        [2, 3],
        [1, 2],
        [1, 3],
        [2, 4],
        [3, 5],
        [4, 6],
        [5, 7],
    ]

    pose_limb_color = palette[[9, 9, 9, 9, 7, 7, 7, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 16, 16]]
    pose_kpt_color = palette[[16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9]]

    def to_int_position(keypoint):
        return tuple(map(int, [keypoint.x, keypoint.y]))

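    # Heuristic: coordinates that are negative or that land exactly on a
    # multiple of 640 (presumably clipped to the padded input border) are
    # treated as undetected keypoints and skipped.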
    def is_valid_keypoint(keypoint):
        return (
            keypoint.x % 640 != 0
            and keypoint.y % 640 != 0
            and keypoint.x >= 0
            and keypoint.y >= 0
        )

    for result in results:
        for color, keypoint_name in zip(pose_kpt_color, keypoints):
            point = getattr(result, keypoint_name)
            if is_valid_keypoint(point):
                cv2.circle(
                    image,
                    to_int_position(point),
                    radius=3,
                    color=color.tolist(),
                    thickness=-1,
                )

        for color, skeleton in zip(pose_limb_color, skeletons):
            pos1 = getattr(result, keypoints[skeleton[0] - 1])
            pos2 = getattr(result, keypoints[skeleton[1] - 1])
            if is_valid_keypoint(pos1) and is_valid_keypoint(pos2):
                cv2.line(
                    image,
                    to_int_position(pos1),
                    to_int_position(pos2),
                    color.tolist(),
                    thickness=2,
                )
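
The color palette and the per-limb/per-keypoint color assignments appear to follow the plotting utilities in the original YOLOv7 repository, so skeletons rendered with this helper should resemble the reference implementation's pose demo output.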