License Plate Detection

🚗 License Plate Detection lets you detect and locate license plates in images using a specialized pipeline built on the YOLOv9 object detection model.

The LicensePlateDetector pipeline is specialized for license plate detection: it uses the YOLOv9 object detection model to locate license plates in images.

Bases: YoloV9ObjectDetector

Specialized detector for license plates using the YOLOv9 model. Inherits from YoloV9ObjectDetector and sets up license-plate-specific configuration.

Parameters:

  • detection_model (PlateDetectorModel): Detection model to use, see PlateDetectorModel. Required.
  • conf_thresh (float): Confidence threshold for filtering predictions. Default: 0.25.
  • providers (Sequence[str | tuple[str, dict]] | None): Optional sequence of providers in order of decreasing precedence. If not specified, all available providers are used. Default: None.
  • sess_options (SessionOptions): Advanced session options for ONNX Runtime. Default: None.
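
The snippet below is a minimal construction sketch, not taken verbatim from the project docs: it assumes ONNX Runtime is installed, uses the standard CUDAExecutionProvider/CPUExecutionProvider names, and picks one model name from PlateDetectorModel.

```python
import onnxruntime as ort

from open_image_models import LicensePlateDetector

# Prefer GPU execution when available, falling back to CPU.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]

# Optional: tune the ONNX Runtime session.
sess_options = ort.SessionOptions()
sess_options.intra_op_num_threads = 4

lp_detector = LicensePlateDetector(
    detection_model="yolo-v9-t-384-license-plate-end2end",
    conf_thresh=0.4,  # keep only reasonably confident detections
    providers=providers,
    sess_options=sess_options,
)
```
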
Source code in open_image_models/detection/pipeline/license_plate.py
def __init__(
    self,
    detection_model: PlateDetectorModel,
    conf_thresh: float = 0.25,
    providers: Sequence[str | tuple[str, dict]] | None = None,
    sess_options: ort.SessionOptions = None,
) -> None:
    """
    Initializes the LicensePlateDetector with the specified detection model and inference device.

    Args:
        detection_model: Detection model to use, see `PlateDetectorModel`.
        conf_thresh: Confidence threshold for filtering predictions.
        providers: Optional sequence of providers in order of decreasing precedence. If not specified, all available
            providers are used.
        sess_options: Advanced session options for ONNX Runtime.
    """
    # Download model if needed
    detector_model_path = download_model(detection_model)
    super().__init__(
        model_path=detector_model_path,
        conf_thresh=conf_thresh,
        class_labels=["License Plate"],
        providers=providers,
        sess_options=sess_options,
    )
    LOGGER.info("Initialized LicensePlateDetector with model %s", detector_model_path)

predict

predict(images: ndarray) -> list[DetectionResult]
predict(
    images: list[ndarray],
) -> list[list[DetectionResult]]
predict(images: str) -> list[DetectionResult]
predict(images: list[str]) -> list[list[DetectionResult]]
predict(images: PathLike[str]) -> list[DetectionResult]
predict(
    images: list[PathLike[str]],
) -> list[list[DetectionResult]]
predict(
    images: Any,
) -> list[DetectionResult] | list[list[DetectionResult]]

Perform license plate detection on one or multiple images.

This method is a specialized version of the YoloV9ObjectDetector.predict method, focusing on detecting license plates in images.

Parameters:

  • images (Any): A single image as a numpy array, a single image path as a string, a list of images as numpy arrays, or a list of image file paths. Required.

Returns:

  • list[DetectionResult] | list[list[DetectionResult]]: A list of DetectionResult for a single image input, or a list of lists of DetectionResult for multiple images.

Example usage:

from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")
lp_detector.predict("path/to/license_plate_image.jpg")

Raises:

Type Description
ValueError

If the image could not be loaded or processed.

Source code in open_image_models/detection/pipeline/license_plate.py
def predict(self, images: Any) -> list[DetectionResult] | list[list[DetectionResult]]:
    """
    Perform license plate detection on one or multiple images.

    This method is a specialized version of the `YoloV9ObjectDetector.predict` method,
    focusing on detecting license plates in images.

    Args:
        images: A single image as a numpy array, a single image path as a string, a list of images as numpy arrays,
                or a list of image file paths.

    Returns:
        A list of `DetectionResult` for a single image input, or a list of lists of `DetectionResult` for multiple
            images.

    Example usage:

    ```python
    from open_image_models import LicensePlateDetector

    lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")
    lp_detector.predict("path/to/license_plate_image.jpg")
    ```

    Raises:
        ValueError: If the image could not be loaded or processed.
    """
    return super().predict(images)
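
As a hedged usage sketch of the batch overloads above, passing a list of image paths should yield one list of DetectionResult per image; the file names here are placeholders.

```python
from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")

# Batch input: one list of detections per image (placeholder paths).
image_paths = ["car_1.jpg", "car_2.jpg"]
results_per_image = lp_detector.predict(image_paths)

for path, detections in zip(image_paths, results_per_image):
    for det in detections:
        print(path, det.label, det.confidence, det.bounding_box.to_xywh())
```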

Core API Documentation

The core module provides base classes and protocols for object detection models, including essential data structures like BoundingBox and DetectionResult.

🔧 Core Components

The following components are used across detection pipelines and models:

  • BoundingBox: Represents a bounding box for detected objects.
  • DetectionResult: Stores label, confidence, and bounding box for a detection.
  • ObjectDetector: Protocol defining essential methods like predict, show_benchmark, and display_predictions.

BoundingBox dataclass

BoundingBox(x1: int, y1: int, x2: int, y2: int)

Represents a bounding box with top-left and bottom-right coordinates.

x1 instance-attribute

x1: int

X-coordinate of the top-left corner

y1 instance-attribute

y1: int

Y-coordinate of the top-left corner

x2 instance-attribute

x2: int

X-coordinate of the bottom-right corner

y2 instance-attribute

y2: int

Y-coordinate of the bottom-right corner

width property

width: int

Returns the width of the bounding box.

height property

height: int

Returns the height of the bounding box.

area property

area: int

Returns the area of the bounding box.

center property

center: tuple[float, float]

Returns the (x, y) coordinates of the center of the bounding box.
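
A small sketch of the derived properties, assuming BoundingBox is importable from the source file shown below (open_image_models.detection.core.base) and that width and height are computed as x2 - x1 and y2 - y1:

```python
from open_image_models.detection.core.base import BoundingBox

box = BoundingBox(x1=10, y1=20, x2=110, y2=70)

print(box.width)   # 100
print(box.height)  # 50
print(box.area)    # 5000
print(box.center)  # (60.0, 45.0)
```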

intersection

intersection(other: BoundingBox) -> Optional[BoundingBox]

Returns the intersection of this bounding box with another bounding box. If they do not intersect, returns None.

Source code in open_image_models/detection/core/base.py
def intersection(self, other: "BoundingBox") -> Optional["BoundingBox"]:
    """
    Returns the intersection of this bounding box with another bounding box. If they do not intersect, returns None.
    """
    x1 = max(self.x1, other.x1)
    y1 = max(self.y1, other.y1)
    x2 = min(self.x2, other.x2)
    y2 = min(self.y2, other.y2)

    if x2 > x1 and y2 > y1:
        return BoundingBox(x1, y1, x2, y2)

    return None

iou

iou(other: BoundingBox) -> float

Computes the Intersection-over-Union (IoU) between this bounding box and another bounding box.

Source code in open_image_models/detection/core/base.py
def iou(self, other: "BoundingBox") -> float:
    """
    Computes the Intersection-over-Union (IoU) between this bounding box and another bounding box.
    """
    inter = self.intersection(other)

    if inter is None:
        return 0.0

    inter_area = inter.area
    union_area = self.area + other.area - inter_area
    return inter_area / union_area if union_area > 0 else 0.0
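
A worked IoU sketch using the two methods above; the intermediate numbers follow from the source shown, assuming area is (x2 - x1) * (y2 - y1):

```python
from open_image_models.detection.core.base import BoundingBox

a = BoundingBox(x1=0, y1=0, x2=100, y2=100)
b = BoundingBox(x1=50, y1=50, x2=150, y2=150)

print(a.intersection(b))  # BoundingBox(x1=50, y1=50, x2=100, y2=100)
print(a.iou(b))           # 2500 / (10000 + 10000 - 2500) = ~0.1429

# Non-overlapping boxes: intersection is None and IoU is 0.0.
c = BoundingBox(x1=200, y1=200, x2=300, y2=300)
print(a.intersection(c))  # None
print(a.iou(c))           # 0.0
```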

to_xywh

to_xywh() -> tuple[int, int, int, int]

Converts bounding box to (x, y, width, height) format, where (x, y) is the top-left corner.

:returns: A tuple containing the top-left x and y coordinates, width, and height of the bounding box.

Source code in open_image_models/detection/core/base.py
def to_xywh(self) -> tuple[int, int, int, int]:
    """
    Converts bounding box to (x, y, width, height) format, where (x, y) is the top-left corner.

    :returns: A tuple containing the top-left x and y coordinates, width, and height of the bounding box.
    """
    return self.x1, self.y1, self.width, self.height

clamp

clamp(max_width: int, max_height: int) -> BoundingBox

Returns a new BoundingBox with coordinates clamped within the range [0, max_width] and [0, max_height].

:param max_width: The maximum width.
:param max_height: The maximum height.
:return: A new, clamped BoundingBox.

Source code in open_image_models/detection/core/base.py
def clamp(self, max_width: int, max_height: int) -> "BoundingBox":
    """
    Returns a new `BoundingBox` with coordinates clamped within the range [0, max_width] and [0, max_height].

    :param max_width: The maximum width.
    :param max_height: The maximum height.
    :return: A new, clamped `BoundingBox`.
    """
    return BoundingBox(
        x1=max(0, min(self.x1, max_width)),
        y1=max(0, min(self.y1, max_height)),
        x2=max(0, min(self.x2, max_width)),
        y2=max(0, min(self.y2, max_height)),
    )

is_valid

is_valid(frame_width: int, frame_height: int) -> bool

Checks if the bounding box is valid by ensuring that:

  1. The coordinates are in the correct order (x1 < x2 and y1 < y2).
  2. The bounding box lies entirely within the frame boundaries.

:param frame_width: The width of the frame.
:param frame_height: The height of the frame.
:return: True if the bounding box is valid, False otherwise.

Source code in open_image_models/detection/core/base.py
def is_valid(self, frame_width: int, frame_height: int) -> bool:
    """
    Checks if the bounding box is valid by ensuring that:

    1. The coordinates are in the correct order (x1 < x2 and y1 < y2).
    2. The bounding box lies entirely within the frame boundaries.

    :param frame_width: The width of the frame.
    :param frame_height: The height of the frame.
    :return: True if the bounding box is valid, False otherwise.
    """
    return 0 <= self.x1 < self.x2 <= frame_width and 0 <= self.y1 < self.y2 <= frame_height
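
A short sketch combining clamp and is_valid for a box that spills past a 640x480 frame; the concrete numbers follow from the source above:

```python
from open_image_models.detection.core.base import BoundingBox

frame_width, frame_height = 640, 480
box = BoundingBox(x1=-10, y1=400, x2=700, y2=500)

print(box.is_valid(frame_width, frame_height))  # False: extends outside the frame

clamped = box.clamp(frame_width, frame_height)  # BoundingBox(x1=0, y1=400, x2=640, y2=480)
print(clamped.is_valid(frame_width, frame_height))  # True
```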

DetectionResult dataclass

DetectionResult(
    label: str, confidence: float, bounding_box: BoundingBox
)

Represents the result of an object detection.

label instance-attribute

label: str

Detected object label

confidence instance-attribute

confidence: float

Confidence score of the detection

bounding_box instance-attribute

bounding_box: BoundingBox

Bounding box of the detected object

from_detection_data classmethod

from_detection_data(
    bbox_data: tuple[int, int, int, int],
    confidence: float,
    class_id: str,
) -> DetectionResult

Creates a DetectionResult instance from bounding box data, confidence, and a class label.

:param bbox_data: A tuple containing bounding box coordinates (x1, y1, x2, y2).
:param confidence: The detection confidence score.
:param class_id: The detected class label as a string.
:return: A DetectionResult instance.

Source code in open_image_models/detection/core/base.py
@classmethod
def from_detection_data(
    cls,
    bbox_data: tuple[int, int, int, int],
    confidence: float,
    class_id: str,
) -> "DetectionResult":
    """
    Creates a `DetectionResult` instance from bounding box data, confidence, and a class label.

    :param bbox_data: A tuple containing bounding box coordinates (x1, y1, x2, y2).
    :param confidence: The detection confidence score.
    :param class_id: The detected class label as a string.
    :return: A `DetectionResult` instance.
    """
    bounding_box = BoundingBox(*bbox_data)
    return cls(class_id, confidence, bounding_box)
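
A brief sketch of turning raw detector output into a DetectionResult via the classmethod above; the coordinates and score are made-up illustration values:

```python
from open_image_models.detection.core.base import DetectionResult

det = DetectionResult.from_detection_data(
    bbox_data=(120, 240, 360, 300),  # (x1, y1, x2, y2)
    confidence=0.91,
    class_id="License Plate",
)
print(det.label, det.confidence, det.bounding_box)
```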

ObjectDetector

Bases: Protocol

predict

predict(
    images: Any,
) -> list[DetectionResult] | list[list[DetectionResult]]

Perform object detection on one or multiple images.

Parameters:

  • images (Any): A single image as a numpy array, a single image path as a string, a list of images as numpy arrays, or a list of image file paths. Required.

Returns:

  • list[DetectionResult] | list[list[DetectionResult]]: A list of DetectionResult for a single image input, or a list of lists of DetectionResult for multiple images.

Source code in open_image_models/detection/core/base.py
def predict(self, images: Any) -> list[DetectionResult] | list[list[DetectionResult]]:
    """
    Perform object detection on one or multiple images.

    Args:
        images: A single image as a numpy array, a single image path as a string, a list of images as numpy arrays,
                or a list of image file paths.

    Returns:
        A list of DetectionResult for a single image input,
        or a list of lists of DetectionResult for multiple images.
    """

show_benchmark

show_benchmark(num_runs: int = 10) -> None

Display the benchmark results of the model with a single random image.

Parameters:

  • num_runs (int): Number of times to run inference on the image for averaging. Default: 10.

Displays:

  Model information and benchmark results in a formatted table.

Source code in open_image_models/detection/core/base.py
def show_benchmark(self, num_runs: int = 10) -> None:
    """
    Display the benchmark results of the model with a single random image.

    Args:
        num_runs: Number of times to run inference on the image for averaging.

    Displays:
        Model information and benchmark results in a formatted table.
    """

display_predictions

display_predictions(image: ndarray) -> ndarray

Run object detection on the input image and display the predictions on the image.

Parameters:

  • image (ndarray): An input image as a numpy array. Required.

Returns:

  • ndarray: The image with bounding boxes and labels drawn on it.

Source code in open_image_models/detection/core/base.py
def display_predictions(self, image: np.ndarray) -> np.ndarray:
    """
    Run object detection on the input image and display the predictions on the image.

    Args:
        image: An input image as a numpy array.

    Returns:
        The image with bounding boxes and labels drawn on it.
    """

Open Image Models HUB.

PlateDetectorModel module-attribute

PlateDetectorModel = Literal[
    "yolo-v9-s-608-license-plate-end2end",
    "yolo-v9-t-640-license-plate-end2end",
    "yolo-v9-t-512-license-plate-end2end",
    "yolo-v9-t-416-license-plate-end2end",
    "yolo-v9-t-384-license-plate-end2end",
    "yolo-v9-t-256-license-plate-end2end",
]

Available ONNX models for license plate detection.
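
Because PlateDetectorModel is a Literal type, the available model names can be enumerated at runtime with typing.get_args. The import path used below is an assumption; adjust it to wherever PlateDetectorModel is exposed in your install.

```python
from typing import get_args

from open_image_models import LicensePlateDetector
# Assumed import path for PlateDetectorModel; adjust to your installation.
from open_image_models.detection.pipeline.license_plate import PlateDetectorModel

# List every available detector variant.
for model_name in get_args(PlateDetectorModel):
    print(model_name)

# Larger input resolutions (e.g. 608) generally trade speed for accuracy.
lp_detector = LicensePlateDetector(detection_model="yolo-v9-s-608-license-plate-end2end")
```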