Exporting a Trained OCR Model¶
The export command converts a trained .keras model into alternative formats like ONNX, TFLite, or CoreML, enabling
deployment to different platforms and devices.
Within the fast-plate-ocr ecosystem, only ONNX inference is supported, but you are free to export trained models and use them with any of the other export formats!
Export to ONNX¶
Basic Usage¶
fast-plate-ocr export \
    --model trained_models/best.keras \
    --plate-config-file config/latin_plates.yaml \
    --format onnx
Use the exported ONNX with fast-plate-ocr¶
If you want to use the exported model again with LicensePlateRecognizer, keep the default ONNX export settings.
Then load it like this:
from fast_plate_ocr import LicensePlateRecognizer

plate_recognizer = LicensePlateRecognizer(
    onnx_model_path="path/to/trained_model/best.onnx",
    plate_config_path="path/to/trained_model/plate_config.yaml",
)
print(plate_recognizer.run("test_plate.png"))
Use the plate_config.yaml from the same trained model that produced the ONNX file.
Channels first AND input dtype float32¶
By default, ONNX models are exported with a channels-last layout and a uint8 input dtype. In some cases you may want channels first (BxCxHxW) and a float32 input dtype instead; this is useful for RKNN. See this issue for context:
fast-plate-ocr/issues/46.
fast-plate-ocr export \
    --model trained_models/best.keras \
    --plate-config-file config/latin_plates.yaml \
    --format onnx \
    --onnx-data-format channels_first \
    --onnx-input-dtype float32
Warning
A channels_first / float32 ONNX export is useful for other runtimes, but it is not the right format for
LicensePlateRecognizer. For fast-plate-ocr inference, use the default ONNX export settings.
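If you feed a channels_first / float32 export from your own preprocessing code, the input batch must be transposed and cast before being handed to the runtime. A minimal NumPy sketch of that conversion (the 1×70×140×1 grayscale shape is a hypothetical example; take the real height, width, and channel count from your plate config):

```python
import numpy as np

# Hypothetical plate image batch in the default export layout:
# channels last (BxHxWxC) with uint8 pixels.
batch_nhwc = np.zeros((1, 70, 140, 1), dtype=np.uint8)

# Convert to what a channels_first / float32 export expects:
# channels first (BxCxHxW) with float32 pixels.
batch_nchw = np.transpose(batch_nhwc, (0, 3, 1, 2)).astype(np.float32)

print(batch_nchw.shape)  # (1, 1, 70, 140)
print(batch_nchw.dtype)  # float32
```

The same transpose applies regardless of which runtime (e.g. RKNN) ultimately consumes the tensor.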
Model shape compatibility
Some formats (like TFLite) only support fixed batch sizes, whereas ONNX allows dynamic batching. The export script handles these differences automatically.
Export to TFLite¶
TensorFlow Lite is ideal for deploying models to mobile and edge devices.
fast-plate-ocr export \
    --model trained_models/best.keras \
    --plate-config-file config/latin_plates.yaml \
    --format tflite
TFLite batch dim
TFLite does not support dynamic batch sizes, so the input is fixed to batch_size=1.
Export to CoreML¶
fast-plate-ocr export \
    --model trained_models/best.keras \
    --plate-config-file config/latin_plates.yaml \
    --format coreml
This will produce a .mlpackage file, compatible with CoreML and Xcode deployments.