
🛠 Pipelines Overview

License Plate Detection

🚗 License Plate Detection lets you detect and locate license plates in images through a specialized pipeline built on the YOLOv9 model.

The LicensePlateDetector class wraps the YOLOv9 object detector with a license-plate-specific configuration, so a single call is enough to run plate detection.
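
A minimal end-to-end sketch is shown below; the image path and the use of OpenCV to load it are illustrative assumptions, while the model name is one of the `PlateDetectorModel` literals documented further down this page.

```python
import cv2

from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")

# Hypothetical input image; `predict` also accepts a path string or a list of images
frame = cv2.imread("assets/car.jpg")

detections = lp_detector.predict(frame)
for detection in detections:
    box = detection.bounding_box
    print(f"{detection.label} ({detection.confidence:.2f}): "
          f"({box.x1}, {box.y1}) -> ({box.x2}, {box.y2})")
```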

Bases: YoloV9ObjectDetector

Specialized detector for license plates using YoloV9 model. Inherits from YoloV9ObjectDetector and sets up license plate specific configuration.

Source code in open_image_models/detection/pipeline/license_plate.py
class LicensePlateDetector(YoloV9ObjectDetector):
    """
    Specialized detector for license plates using YoloV9 model.
    Inherits from YoloV9ObjectDetector and sets up license plate specific configuration.
    """

    def __init__(
        self,
        detection_model: PlateDetectorModel,
        conf_thresh: float = 0.25,
        providers: Sequence[str | tuple[str, dict]] | None = None,
        sess_options: ort.SessionOptions = None,
    ) -> None:
        """
        Initializes the LicensePlateDetector with the specified detection model and inference device.

        Args:
            detection_model: Detection model to use, see `PlateDetectorModel`.
            conf_thresh: Confidence threshold for filtering predictions.
            providers: Optional sequence of providers in order of decreasing precedence. If not specified, all available
                providers are used.
            sess_options: Advanced session options for ONNX Runtime.
        """
        # Download model if needed
        detector_model_path = download_model(detection_model)
        super().__init__(
            model_path=detector_model_path,
            conf_thresh=conf_thresh,
            class_labels=["License Plate"],
            providers=providers,
            sess_options=sess_options,
        )
        LOGGER.info("Initialized LicensePlateDetector with model %s", detector_model_path)

    # pylint: disable=duplicate-code
    @overload
    def predict(self, images: np.ndarray) -> list[DetectionResult]: ...

    @overload
    def predict(self, images: list[np.ndarray]) -> list[list[DetectionResult]]: ...

    @overload
    def predict(self, images: str) -> list[DetectionResult]: ...

    @overload
    def predict(self, images: list[str]) -> list[list[DetectionResult]]: ...

    @overload
    def predict(self, images: os.PathLike[str]) -> list[DetectionResult]: ...

    @overload
    def predict(self, images: list[os.PathLike[str]]) -> list[list[DetectionResult]]: ...

    def predict(self, images: Any) -> list[DetectionResult] | list[list[DetectionResult]]:
        """
        Perform license plate detection on one or multiple images.

        This method is a specialized version of the `YoloV9ObjectDetector.predict` method,
        focusing on detecting license plates in images.

        Args:
            images: A single image as a numpy array, a single image path as a string, a list of images as numpy arrays,
                    or a list of image file paths.

        Returns:
            A list of `DetectionResult` for a single image input, or a list of lists of `DetectionResult` for multiple
                images.

        Example usage:

        ```python
        from open_image_models import LicensePlateDetector

        lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")
        lp_detector.predict("path/to/license_plate_image.jpg")
        ```

        Raises:
            ValueError: If the image could not be loaded or processed.
        """
        return super().predict(images)

__init__(detection_model, conf_thresh=0.25, providers=None, sess_options=None)

Initializes the LicensePlateDetector with the specified detection model and inference device.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `detection_model` | `PlateDetectorModel` | Detection model to use, see `PlateDetectorModel`. | *required* |
| `conf_thresh` | `float` | Confidence threshold for filtering predictions. | `0.25` |
| `providers` | `Sequence[str \| tuple[str, dict]] \| None` | Optional sequence of providers in order of decreasing precedence. If not specified, all available providers are used. | `None` |
| `sess_options` | `SessionOptions` | Advanced session options for ONNX Runtime. | `None` |
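
As a sketch of how `providers` and `sess_options` can be combined to steer inference, the snippet below uses stock ONNX Runtime settings; the provider names and thread count are generic ONNX Runtime options, not specific to this library.

```python
import onnxruntime as ort

from open_image_models import LicensePlateDetector

# Plain ONNX Runtime session options; the thread count is only an example value
sess_options = ort.SessionOptions()
sess_options.intra_op_num_threads = 4

# Prefer CUDA when available and fall back to CPU (standard ONNX Runtime provider names)
lp_detector = LicensePlateDetector(
    detection_model="yolo-v9-t-640-license-plate-end2end",
    conf_thresh=0.4,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    sess_options=sess_options,
)
```
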
Source code in open_image_models/detection/pipeline/license_plate.py
def __init__(
    self,
    detection_model: PlateDetectorModel,
    conf_thresh: float = 0.25,
    providers: Sequence[str | tuple[str, dict]] | None = None,
    sess_options: ort.SessionOptions = None,
) -> None:
    """
    Initializes the LicensePlateDetector with the specified detection model and inference device.

    Args:
        detection_model: Detection model to use, see `PlateDetectorModel`.
        conf_thresh: Confidence threshold for filtering predictions.
        providers: Optional sequence of providers in order of decreasing precedence. If not specified, all available
            providers are used.
        sess_options: Advanced session options for ONNX Runtime.
    """
    # Download model if needed
    detector_model_path = download_model(detection_model)
    super().__init__(
        model_path=detector_model_path,
        conf_thresh=conf_thresh,
        class_labels=["License Plate"],
        providers=providers,
        sess_options=sess_options,
    )
    LOGGER.info("Initialized LicensePlateDetector with model %s", detector_model_path)

predict(images)

Perform license plate detection on one or multiple images.

This method is a specialized version of the YoloV9ObjectDetector.predict method, focusing on detecting license plates in images.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `images` | `Any` | A single image as a numpy array, a single image path as a string, a list of images as numpy arrays, or a list of image file paths. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `list[DetectionResult] \| list[list[DetectionResult]]` | A list of `DetectionResult` for a single image input, or a list of lists of `DetectionResult` for multiple images. |

Example usage:

```python
from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")
lp_detector.predict("path/to/license_plate_image.jpg")
```

Raises:

| Type | Description |
|------|-------------|
| `ValueError` | If the image could not be loaded or processed. |

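The method also accepts a batch; here is a small sketch assuming the listed image files exist:

```python
from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")

# Hypothetical paths; a list input returns one list of DetectionResult per image
image_paths = ["cars/front.jpg", "cars/rear.jpg"]
results = lp_detector.predict(image_paths)

for path, detections in zip(image_paths, results):
    print(f"{path}: {len(detections)} plate(s) found")
```
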
Source code in open_image_models/detection/pipeline/license_plate.py
def predict(self, images: Any) -> list[DetectionResult] | list[list[DetectionResult]]:
    """
    Perform license plate detection on one or multiple images.

    This method is a specialized version of the `YoloV9ObjectDetector.predict` method,
    focusing on detecting license plates in images.

    Args:
        images: A single image as a numpy array, a single image path as a string, a list of images as numpy arrays,
                or a list of image file paths.

    Returns:
        A list of `DetectionResult` for a single image input, or a list of lists of `DetectionResult` for multiple
            images.

    Example usage:

    ```python
    from open_image_models import LicensePlateDetector

    lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")
    lp_detector.predict("path/to/license_plate_image.jpg")
    ```

    Raises:
        ValueError: If the image could not be loaded or processed.
    """
    return super().predict(images)

Core API Documentation

The core module provides base classes and protocols for object detection models, including essential data structures like BoundingBox and DetectionResult.

🔧 Core Components

The following components are used across detection pipelines and models:

  • BoundingBox: Represents a bounding box for detected objects.
  • DetectionResult: Stores label, confidence, and bounding box for a detection.
  • ObjectDetector: Protocol defining essential methods like predict, show_benchmark, and display_predictions.

BoundingBox dataclass

Represents a bounding box with top-left and bottom-right coordinates.

Source code in open_image_models/detection/core/base.py
@dataclass(frozen=True)
class BoundingBox:
    """
    Represents a bounding box with top-left and bottom-right coordinates.
    """

    x1: int
    """X-coordinate of the top-left corner"""
    y1: int
    """Y-coordinate of the top-left corner"""
    x2: int
    """X-coordinate of the bottom-right corner"""
    y2: int
    """Y-coordinate of the bottom-right corner"""

x1: int instance-attribute

X-coordinate of the top-left corner

x2: int instance-attribute

X-coordinate of the bottom-right corner

y1: int instance-attribute

Y-coordinate of the top-left corner

y2: int instance-attribute

Y-coordinate of the bottom-right corner

DetectionResult dataclass

Represents the result of an object detection.

Source code in open_image_models/detection/core/base.py
@dataclass(frozen=True)
class DetectionResult:
    """
    Represents the result of an object detection.
    """

    label: str
    """Detected object label"""
    confidence: float
    """Confidence score of the detection"""
    bounding_box: BoundingBox
    """Bounding box of the detected object"""

bounding_box: BoundingBox instance-attribute

Bounding box of the detected object

confidence: float instance-attribute

Confidence score of the detection

label: str instance-attribute

Detected object label
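
As a quick sketch of how these frozen dataclasses fit together (the coordinates and score below are made up):

```python
from open_image_models.detection.core.base import BoundingBox, DetectionResult

# Made-up values purely to illustrate the data structures
box = BoundingBox(x1=120, y1=80, x2=340, y2=150)
result = DetectionResult(label="License Plate", confidence=0.91, bounding_box=box)

width, height = box.x2 - box.x1, box.y2 - box.y1
print(f"{result.label}: {result.confidence:.2f}, {width}x{height} px")
```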

ObjectDetector

Bases: Protocol

Source code in open_image_models/detection/core/base.py
class ObjectDetector(Protocol):
    def predict(self, images: Any) -> list[DetectionResult] | list[list[DetectionResult]]:
        """
        Perform object detection on one or multiple images.

        Args:
            images: A single image as a numpy array, a single image path as a string, a list of images as numpy arrays,
                    or a list of image file paths.

        Returns:
            A list of DetectionResult for a single image input,
            or a list of lists of DetectionResult for multiple images.
        """

    def show_benchmark(self, num_runs: int = 10) -> None:
        """
        Display the benchmark results of the model with a single random image.

        Args:
            num_runs: Number of times to run inference on the image for averaging.

        Displays:
            Model information and benchmark results in a formatted table.
        """

    def display_predictions(self, image: np.ndarray) -> np.ndarray:
        """
        Run object detection on the input image and display the predictions on the image.

        Args:
            image: An input image as a numpy array.

        Returns:
            The image with bounding boxes and labels drawn on it.
        """

display_predictions(image)

Run object detection on the input image and display the predictions on the image.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `image` | `ndarray` | An input image as a numpy array. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `ndarray` | The image with bounding boxes and labels drawn on it. |
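
A short sketch of using this method through the license plate pipeline (which inherits it via `YoloV9ObjectDetector`); reading and writing the image with OpenCV is an assumption about the caller's setup:

```python
import cv2

from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")

# Hypothetical input image; the returned array has the detections drawn on it
frame = cv2.imread("assets/car.jpg")
annotated = lp_detector.display_predictions(frame)
cv2.imwrite("assets/car_annotated.jpg", annotated)
```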

Source code in open_image_models/detection/core/base.py
def display_predictions(self, image: np.ndarray) -> np.ndarray:
    """
    Run object detection on the input image and display the predictions on the image.

    Args:
        image: An input image as a numpy array.

    Returns:
        The image with bounding boxes and labels drawn on it.
    """

predict(images)

Perform object detection on one or multiple images.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `images` | `Any` | A single image as a numpy array, a single image path as a string, a list of images as numpy arrays, or a list of image file paths. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `list[DetectionResult] \| list[list[DetectionResult]]` | A list of `DetectionResult` for a single image input, or a list of lists of `DetectionResult` for multiple images. |

Source code in open_image_models/detection/core/base.py
def predict(self, images: Any) -> list[DetectionResult] | list[list[DetectionResult]]:
    """
    Perform object detection on one or multiple images.

    Args:
        images: A single image as a numpy array, a single image path as a string, a list of images as numpy arrays,
                or a list of image file paths.

    Returns:
        A list of DetectionResult for a single image input,
        or a list of lists of DetectionResult for multiple images.
    """

show_benchmark(num_runs=10)

Display the benchmark results of the model with a single random image.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `num_runs` | `int` | Number of times to run inference on the image for averaging. | `10` |

Displays:

Model information and benchmark results in a formatted table.
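
A one-line usage sketch, assuming a detector such as `LicensePlateDetector` that implements this protocol:

```python
from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")

# Runs inference 50 times on a random image and prints the averaged results as a table
lp_detector.show_benchmark(num_runs=50)
```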

Source code in open_image_models/detection/core/base.py
def show_benchmark(self, num_runs: int = 10) -> None:
    """
    Display the benchmark results of the model with a single random image.

    Args:
        num_runs: Number of times to run inference on the image for averaging.

    Displays:
        Model information and benchmark results in a formatted table.
    """

Open Image Models HUB

The hub module lists the pre-trained ONNX models that the detection pipelines download and load.

PlateDetectorModel = Literal['yolo-v9-t-640-license-plate-end2end', 'yolo-v9-t-512-license-plate-end2end', 'yolo-v9-t-384-license-plate-end2end', 'yolo-v9-t-256-license-plate-end2end'] module-attribute

Available ONNX models for license plate detection.
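
A short sketch of choosing between the variants; reading the 256/384/512/640 suffixes as input resolutions (and thus a speed/accuracy trade-off) is an assumption based on the naming:

```python
from open_image_models import LicensePlateDetector

# Any of the PlateDetectorModel literals above can be passed as `detection_model`
fast_detector = LicensePlateDetector(detection_model="yolo-v9-t-256-license-plate-end2end")
accurate_detector = LicensePlateDetector(detection_model="yolo-v9-t-640-license-plate-end2end")
```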