Reference¶
This page shows the public API of FastALPR.
At a Glance¶
- Use `ALPR.predict()` to get structured ALPR results
- Use `ALPR.draw_predictions()` to get an annotated image and the same ALPR results
- `BoundingBox` and `DetectionResult` come from open-image-models
Imports¶
Common Inputs¶
- A NumPy image in BGR format
- A string path to an image file
Common Returns¶
- `ALPR.predict(...)` returns `list[ALPRResult]`
- `ALPR.draw_predictions(...)` returns `DrawPredictionsResult`
`ALPRResult` contains:

- `detection`: box, label, and detection confidence
- `ocr`: recognized text and OCR confidence, or `None`
`DrawPredictionsResult` contains:

- `image`: the image with boxes and text drawn on it
- `results`: the same ALPR results used for drawing
Available Models¶
See the available detection models in open-image-models and OCR models in fast-plate-ocr.
Main Class¶
ALPR ¶
ALPR(
detector: BaseDetector | None = None,
ocr: BaseOCR | None = None,
detector_model: PlateDetectorModel = "yolo-v9-t-384-license-plate-end2end",
detector_conf_thresh: float = 0.4,
detector_providers: Sequence[str | tuple[str, dict]]
| None = None,
detector_sess_options: SessionOptions = None,
ocr_model: OcrModel | None = "cct-xs-v2-global-model",
ocr_device: Literal["cuda", "cpu", "auto"] = "auto",
ocr_providers: Sequence[str | tuple[str, dict]]
| None = None,
ocr_sess_options: SessionOptions | None = None,
ocr_model_path: str | PathLike | None = None,
ocr_config_path: str | PathLike | None = None,
ocr_force_download: bool = False,
)
Automatic License Plate Recognition (ALPR) system class.
This class combines a detector and an OCR model to recognize license plates in images.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `detector` | `BaseDetector \| None` | An instance of `BaseDetector`. If `None`, the `DefaultDetector` is used. | `None` |
| `ocr` | `BaseOCR \| None` | An instance of `BaseOCR`. If `None`, the `DefaultOCR` is used. | `None` |
| `detector_model` | `PlateDetectorModel` | The name of the detector model or a `PlateDetectorModel` enum instance. | `'yolo-v9-t-384-license-plate-end2end'` |
| `detector_conf_thresh` | `float` | Confidence threshold for the detector. | `0.4` |
| `detector_providers` | `Sequence[str \| tuple[str, dict]] \| None` | Execution providers for the detector. | `None` |
| `detector_sess_options` | `SessionOptions` | Session options for the detector. | `None` |
| `ocr_model` | `OcrModel \| None` | The name of the OCR model from the model hub. Can be `None` when a custom model is supplied via `ocr_model_path`. | `'cct-xs-v2-global-model'` |
| `ocr_device` | `Literal['cuda', 'cpu', 'auto']` | The device to run the OCR model on (`"cuda"`, `"cpu"`, or `"auto"`). | `'auto'` |
| `ocr_providers` | `Sequence[str \| tuple[str, dict]] \| None` | Execution providers for the OCR. If `None`, the default providers are used. | `None` |
| `ocr_sess_options` | `SessionOptions \| None` | Session options for the OCR. If `None`, default session options are used. | `None` |
| `ocr_model_path` | `str \| PathLike \| None` | Custom model path for the OCR. If `None`, the model is downloaded from the hub or cache. | `None` |
| `ocr_config_path` | `str \| PathLike \| None` | Custom config path for the OCR. If `None`, the default configuration is used. | `None` |
| `ocr_force_download` | `bool` | Whether to force download the OCR model. | `False` |
Source code in fast_alpr/alpr.py
Functions¶
predict ¶
predict(frame: ndarray | str) -> list[ALPRResult]
Run plate detection and OCR on an image.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `frame` | `ndarray \| str` | Unprocessed frame (colors in BGR order) or image path. | *required* |
Returns:
| Type | Description |
|---|---|
| `list[ALPRResult]` | A list of `ALPRResult` objects, one for each detected plate. |
Source code in fast_alpr/alpr.py
draw_predictions ¶
draw_predictions(
frame: ndarray | str,
) -> DrawPredictionsResult
Draw detections and OCR results on an image.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `frame` | `ndarray \| str` | The original frame or image path. | *required* |
Returns:
| Type | Description |
|---|---|
| `DrawPredictionsResult` | A `DrawPredictionsResult` with the annotated image and the ALPR results. |
Source code in fast_alpr/alpr.py
Result Types¶
ALPRResult dataclass ¶
ALPRResult(
detection: DetectionResult, ocr: OcrResult | None
)
Detection and OCR output for one license plate.
Attributes:
| Name | Type | Description |
|---|---|---|
| `detection` | `DetectionResult` | Detector output for the plate. |
| `ocr` | `OcrResult \| None` | OCR output for the plate, or `None` if OCR does not return a result. |
DrawPredictionsResult dataclass ¶
DrawPredictionsResult(
image: ndarray, results: list[ALPRResult]
)
Return value from draw_predictions.
Attributes:
| Name | Type | Description |
|---|---|---|
| `image` | `ndarray` | The input image with boxes and text drawn on it. |
| `results` | `list[ALPRResult]` | The ALPR results used to draw the annotations. |
OcrResult dataclass ¶
OcrResult(
text: str,
confidence: float | list[float],
region: str | None = None,
region_confidence: float | None = None,
)
OCR output for one cropped plate image.
Attributes:
| Name | Type | Description |
|---|---|---|
| `text` | `str` | Recognized plate text. |
| `confidence` | `float \| list[float]` | OCR confidence as one value or one value per character. |
| `region` | `str \| None` | Optional region or country prediction. |
| `region_confidence` | `float \| None` | Confidence for the region prediction. |
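The result dataclasses are plain attribute containers, so downstream code can consume them directly. A small helper (`summarize` is illustrative, not part of the API) that turns a result list into display strings, handling both the scalar and per-character forms of `OcrResult.confidence`:

```python
def summarize(results) -> list[str]:
    """Format a list of ALPRResult objects as "TEXT (confidence)" strings."""
    lines = []
    for r in results:
        if r.ocr is None:  # detector found a plate but OCR returned nothing
            lines.append("<no OCR>")
            continue
        conf = r.ocr.confidence
        if isinstance(conf, list):  # per-character confidences: average them
            conf = sum(conf) / len(conf)
        lines.append(f"{r.ocr.text} ({conf:.2f})")
    return lines
```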
Interfaces¶
BaseDetector ¶
Bases: ABC
Functions¶
predict abstractmethod ¶
predict(frame: ndarray) -> list[DetectionResult]
BaseOCR ¶
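Custom components plug in through these interfaces. The sketch below uses local stand-ins (the real `BaseDetector` lives in `fast_alpr`, and its `predict` returns `DetectionResult` objects from open-image-models; the dict return here is a simplification):

```python
from abc import ABC, abstractmethod

import numpy as np


class BaseDetector(ABC):
    """Stand-in mirroring the documented interface."""

    @abstractmethod
    def predict(self, frame: np.ndarray) -> list:
        """Return one detection per plate found in a BGR frame."""


class FullFrameDetector(BaseDetector):
    """Toy detector that reports the whole frame as a single plate."""

    def predict(self, frame: np.ndarray) -> list:
        h, w = frame.shape[:2]
        return [{"bbox": (0, 0, w, h), "confidence": 1.0}]
```

An instance of such a subclass would be passed as the `detector` argument of `ALPR`.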
External Types¶
See BoundingBox
and DetectionResult.