Compare commits

...

24 Commits
v1.1.0 ... main

Author SHA1 Message Date
cdbc322fb3
update readme 2025-07-11 16:45:12 +02:00
dc74b776b4
optimized build script 2025-07-11 16:30:37 +02:00
da0094ec9b
added support for multiple plates 2025-07-11 15:58:24 +02:00
10345b4dd9
update alvr sdk 2025-07-11 11:19:52 +02:00
145452b0ae
revert shared libraries stripping, messes with tensorflow and use tensorflow v1 instead of v2 on cpu (smaller size) 2024-08-05 11:32:18 +02:00
59a54945ff
improve setup script, reduce libs .so size (strip) 2024-08-04 20:36:34 +02:00
87e2d6dd7b
add grid UI for wanted_cells 2024-08-04 19:49:41 +02:00
e026c61ea3
add logo 2024-08-04 19:10:36 +02:00
64791828d1
add grey logo 2024-08-04 18:46:55 +02:00
b15b05c157
update readme 2024-08-04 18:46:29 +02:00
b8c04b133d
update readme 2024-08-04 18:44:03 +02:00
7d31c06a9f
update readme 2024-08-04 18:43:40 +02:00
8bc5187d5e
update readme 2024-08-04 18:41:20 +02:00
224d566d65
update readme 2024-08-04 13:23:52 +02:00
8d5c55fb88
some refactor 2024-08-04 13:09:19 +02:00
9cf457511e
some refactor 2024-08-04 12:58:59 +02:00
0d7351d651
add grid_debug endpoint, previews, try/except to avoid errors 2024-08-04 12:31:51 +02:00
6976f690ff
improve structure/readability, add config file 2024-08-01 12:03:57 +02:00
d81d955a8b
fix recursion error mess 2024-08-01 11:47:01 +02:00
a4d297fc33
improve engine reload system 2024-07-30 11:31:12 +02:00
cd0caed325
restart engine after 6 hours 2024-07-26 10:26:09 +02:00
e63ff4a220
update readme 2024-07-22 22:52:24 +02:00
cb0c4145de
update readme 2024-07-22 22:49:22 +02:00
4e9e956589
add grid size parameter 2024-07-22 22:38:54 +02:00
12 changed files with 855 additions and 252 deletions

(Two new binary image files added, 15 KiB and 16 KiB; previews not shown.)

.git-assets/logo.webp (new binary file, 4.2 KiB, not shown)

(Modified binary image file, 25 KiB before and after, not shown.)
README.md

@ -1,55 +1,172 @@
# Easy Local ALPR (Automatic License Plate Recognition)
![logo](.git-assets/logo.webp)
![ALPR](.git-assets/preview-webui.webp)
This project is a simple local ALPR (Automatic License Plate Recognition) server that uses the [ultimateALPR-SDK](https://github.com/DoubangoTelecom/ultimateALPR-SDK) to
process images and return the license plate information found in the image while focusing on being:
- **Fast** *(~100ms per image on decent CPU)*
- **Lightweight** *(~100MB of RAM)*
- **Easy to use** *(REST API)*
- **Easy to setup** *(one command setup)*
- **Offline** *(no internet connection required)*
This script is a REST API server, built with Flask, that uses the [ultimateALPR-SDK](https://github.com/DoubangoTelecom/ultimateALPR-SDK) to process images and return the license plate information found in them.
It is intended as a faster local alternative to the large and resource-heavy [CodeProject AI](https://www.codeproject.com/AI/docs) software.
> [!IMPORTANT]
> The ultimateALPR SDK is a lightweight and much faster alternative (on CPU and GPU) to the CodeProject AI software, but it has **a few limitations** with its free version:
> - The last character of the license plate is masked with an asterisk
> - The SDK supposedly has a limit on requests per program execution *(never encountered yet)*, **but I have implemented a workaround for this by restarting the SDK after 3000 requests just in case.**
> This project relies on the [ultimateALPR-SDK](https://github.com/DoubangoTelecom/ultimateALPR-SDK), which is a commercial product but has a free version with a few limitations.
> For any commercial use, you will need to take a look at their licensing terms.
> **I am not affiliated with ultimateALPR-SDK in any way, and I am not responsible for any misuse of the software.**
> [!NOTE]
> The [ultimateALPR-SDK](https://github.com/DoubangoTelecom/ultimateALPR-SDK) is a lightweight and much faster alternative (on CPU and GPU) to existing solutions, but it has **one important restriction** with its free version:
> - The last character of the license plate is masked with an asterisk *(e.g. ``ABC1234`` -> ``ABC123*``)*
## Installation
Simply download the latest release from the [releases page](./releases) and run the executable.
The following platforms are currently supported:
- **Linux** (x86_64)
## Usage
The server listens on port 5000 and has one endpoint: /v1/image/alpr. The endpoint accepts POST requests with an
image file in the 'upload' field. The image is processed using the ultimateALPR SDK and the license plate
information is returned in JSON format. The response follows the CodeProject AI ALPR API format, so it can be used
as a drop-in replacement for the [CodeProject AI ALPR API](https://www.codeproject.com/AI/docs/api/api_reference.html#license-plate-reader).
The server listens on port 5000 and has a few endpoints documented below, the most important one being [``/v1/image/alpr``](#v1visionalpr).
### /v1/vision/alpr
> POST: http://localhost:5000/v1/vision/alpr
**Description**
This endpoint processes an image and returns the license plate information (if any) found in the image.
**Parameters**
- upload: (File) The image file to process *(see [Pillow.Image.open()](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.open) for supported formats; almost any image format is supported)*
- grid_size: (Integer, optional) Size of the grid to divide the image into, retrying on each cell when no match has been
found on the whole image; must be ``>=2`` *(default: 0, disabled)* **[(more info)](#more-information-about-the-grid-parameter)**
- wanted_cells: (String, optional) The cells you want to process *(default: all cells)* **[(see here for more details)](#v1visionalpr_grid_debug)**
- format: ``1,2,3,4,...`` *(comma separated list of integers, max: ``grid_size^2``)*
- *Example for a grid_size of 3:*
```
1 | 2 | 3
4 | 5 | 6
7 | 8 | 9
```
- whole_image_fallback: (Boolean, optional) Only applies when ``grid_size`` is 2 or greater.
If set to true, the server will first try to detect a plate in the cells given by ``wanted_cells`` and, if no plate is found, it will then try to detect the plate on the whole image.
If set to false, the server will only try to detect the plate in the specified cells. *(default: true)*
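For example, here is a minimal client sketch using Python's ``requests`` library *(the file name ``driveway.jpg`` is a placeholder; note that this section documents ``/v1/vision/alpr`` while the Flask route registered in ``alpr_api.py`` is ``/v1/image/alpr``, so adjust the URL to whichever path your build exposes)*:
```python
# Minimal sketch: send one image to a locally running server and print the plates found.
import requests

with open("driveway.jpg", "rb") as f:  # placeholder image file
    response = requests.post(
        "http://localhost:5000/v1/image/alpr",
        files={"upload": f},  # the image goes in the 'upload' field
    )

result = response.json()
print(f"Took {result['duration']} ms")
for prediction in result.get("predictions", []):
    print(prediction["plate"], prediction["confidence"])
```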
**Response**
```jsonc
{
  "duration": (Float),      // The time taken to process the image in milliseconds.
  "plates": (String[]),     // An array of plate strings found in the image.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min bounds of the plate, the plate image, the plate chars and the confidence.
}
```
*For reference, the response schema documented before this change (CodeProject-compatible) was:*
```jsonc
{
  "success": (Boolean),      // True if successful.
  "message": (String),       // A summary of the inference operation.
  "error": (String),         // (Optional) A description of the error if success was false.
  "predictions": (Object[]), // An array of objects with the x_max, x_min, y_max, y_min bounds of the plate, label, the plate chars and confidence.
  "processMs": (Integer)     // The time (ms) to process the image (includes inference and image manipulation operations).
}
```
**Example**
```json
{
"duration": 142.02,
"plates": [
"XX12345*",
"YY5432*"
],
"predictions": [
{
"confidence": 0.9009034,
"image": "data:image/png;base64,xxxxx==",
"plate": "XX12345*",
"x_max": 680,
"x_min": 610,
"y_max": 386,
"y_min": 355
},
{
"confidence": 0.8930383999999999,
"image": "data:image/png;base64,xxxxx==",
"plate": "YY5432*",
"x_max": 680,
"x_min": 483,
"y_max": 706,
"y_min": 624
}
]
}
```
### /v1/vision/alpr_grid_debug
> POST: http://localhost:5000/v1/vision/alpr_grid_debug
**Description**
This endpoint displays the grid and each cell's number on the image.
It is intended to be used for debugging purposes to see which cells are being processed.
**Parameters**
*same as [v1/vision/alpr](#v1visionalpr)*
**Response**
```jsonc
{
  "image": (Base64) // The image with the grid and cell numbers drawn on it.
}
```
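For example, a small sketch that requests the overlay and saves it to disk so you can pick ``wanted_cells`` *(assumes a locally running server and a placeholder image file; as above, the route registered in ``alpr_api.py`` is ``/v1/image/alpr_grid_debug``)*:
```python
# Sketch: fetch the grid overlay for a 3x3 grid and write it to disk.
import base64
import requests

with open("driveway.jpg", "rb") as f:  # placeholder image file
    response = requests.post(
        "http://localhost:5000/v1/image/alpr_grid_debug",
        files={"upload": f},
        data={"grid_size": 3},
    )

data_url = response.json()["image"]  # "data:image/png;base64,...."
image_bytes = base64.b64decode(data_url.split(",", 1)[1])  # strip the data-URL prefix
# The current code encodes the overlay as WEBP despite the "png" label in the data URL.
with open("grid_overlay.webp", "wb") as out:
    out.write(image_bytes)
```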
## More information about the grid parameter
Sometimes, the ALPR software cannot find any plate because the image is too big or the plate is too small in the image.
To solve this problem, you can make use of the ``grid_size`` parameter in each of your requests.
If you set the ``grid_size`` parameter to 2 or greater, the server will divide the image into a grid of cells
and retry the ALPR software on each cell.
You can speed up the processing time by specifying the ``wanted_cells`` parameter. This parameter allows you to specify
which cells you want to run plate detection on.
This can be useful if you know the plates can only be in certain areas of the image.
> [!TIP]
> You can use the [``/v1/vision/alpr_grid_debug`` endpoint](#v1visionalpr_grid_debug) to see the grid and cell numbers
> overlaid on your image.
> You can then specify the ``wanted_cells`` parameter to only process the cells you want.
**If you do not wish to use the grid, set the ``grid_size`` parameter to 0 or leave it empty *(and leave the ``wanted_cells`` parameter empty)*.**
### Example
Let's say your driveway camera looks something like this:
![Driveway camera](.git-assets/example_grid.webp)
If you set the ``grid_size`` parameter to 2, the image will be divided into a 2x2 grid like this:
![Driveway camera grid](.git-assets/example_grid_2.webp)
You can see that cells 1 and 2 are empty and cells 3 and 4 might contain license plates.
You can then set the ``wanted_cells`` parameter to ``3,4`` to only process cells 3 and 4, reducing the processing time
as only half the image will be processed.
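Continuing the driveway example, a hedged sketch of the corresponding request *(placeholder file name, local server assumed)*:
```python
# Sketch: 2x2 grid, only cells 3 and 4 processed, whole-image fallback left enabled.
import requests

with open("driveway.jpg", "rb") as f:  # placeholder image file
    response = requests.post(
        "http://localhost:5000/v1/image/alpr",
        files={"upload": f},
        data={
            "grid_size": 2,
            "wanted_cells": "3,4",
            "whole_image_fallback": "true",
        },
    )

print(response.json().get("plates", []))
```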
## Included models in built executable
When using the built executable, only the **latin** charset models are bundled by default. If you want to use a
different charset, you need to set the charset in the JSON_CONFIG variable and rebuild the executable with the
corresponding models found [here](https://github.com/DoubangoTelecom/ultimateALPR-SDK/tree/master/assets).

To build the executable, you can use the ``build_alpr_api.sh`` script, which will create an executable
named ``alpr_api`` in the ``dist`` folder.
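As a rough sketch *(not the shipped defaults)*, these are the ``JSON_CONFIG`` keys involved; any charset other than ``latin`` is an assumption here and needs the matching model files copied into the assets folder:
```python
# Hypothetical excerpt of the JSON_CONFIG dict in alpr_api.py before rebuilding.
JSON_CONFIG = {
    # ... all other options unchanged ...
    "charset": "korean",        # the prebuilt executable bundles only the "latin" models
    "assets_folder": "assets",  # must contain the models for the chosen charset
}
```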
## Setup development environment
### Use automatic setup script
You can use the ``build_and_setup_ultimatealvr.sh`` script to automatically install the necessary packages, build the ultimateALPR SDK wheel, and copy the assets and the libs.
> [!IMPORTANT]
> Make sure to install the package python3-dev (APT) / python3-devel (RPM) before running the build and setup script.
The end structure should look like this:
```bash
.
├── alpr_api.py
@ -64,16 +181,23 @@ The end structure should look like this:
```
### Important notes
When running, building or developing the script, make sure to set the ``LD_LIBRARY_PATH`` environment variable to the libs folder
*(limitation of the ultimateALPR SDK)*.
```bash
export LD_LIBRARY_PATH=libs:$LD_LIBRARY_PATH
```
### Error handling
#### GLIBC_ABI_DT_RELR not found
If you encounter an error like this:
```bash
/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_ABI_DT_RELR' not found
```
Then make sure your GLIBC version is >= 2.36

alpr_api.py

@ -1,16 +1,25 @@
import base64
import io
import json
import logging
import os
import sys
import threading
import time
from time import sleep
import traceback
import ultimateAlprSdk
from PIL import Image
from PIL import Image, ImageDraw, ImageFont
from flask import Flask, request, jsonify, render_template
# Setup logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
counter_lock = threading.Lock()
counter = 0
bundle_dir = getattr(sys, '_MEIPASS', os.path.abspath(os.path.dirname(__file__)))
boot_time = time.time()
"""
Hi there!
@ -21,32 +30,46 @@ information. The server is created using Flask and the ultimateALPR SDK is used
See the README.md file for more information on how to run this script.
"""
# Defines the default JSON configuration. More information at https://www.doubango.org/SDKs/anpr/docs/Configuration_options.html
JSON_CONFIG = {
"debug_level": "info",
"debug_write_input_image_enabled": False,
"debug_internal_data_path": ".",
# Load configuration
CONFIG_PATH = os.path.join(bundle_dir,
'config.json') # TODO: store config file outside of bundle (to remove need for compilation by users)
if os.path.exists(CONFIG_PATH):
with open(CONFIG_PATH, 'r') as config_file:
JSON_CONFIG = json.load(config_file)
else:
JSON_CONFIG = {
"assets_folder": os.path.join(bundle_dir, "assets"),
"charset": "latin",
"car_noplate_detect_enabled": False,
"ienv_enabled": True, # night vision enhancements
"openvino_enabled": True,
"openvino_device": "CPU",
"npu_enabled": False,
"klass_lpci_enabled": False, # License Plate Country Identification
"klass_vcr_enabled": False, # Vehicle Color Recognition (paid)
"klass_vmmr_enabled": False, # Vehicle Make and Model Recognition
"klass_vbsr_enabled": False, # Vehicle Body Style Recognition (paid)
"license_token_file": "",
"license_token_data": "",
"num_threads": -1,
"gpgpu_enabled": True,
"max_latency": -1,
"klass_vcr_gamma": 1.5,
"detect_roi": [0, 0, 0, 0],
"detect_minscore": 0.35,
"car_noplate_detect_min_score": 0.8,
"pyramidal_search_enabled": False,
"pyramidal_search_sensitivity": 0.38, # default 0.28
"pyramidal_search_minscore": 0.8,
"pyramidal_search_min_image_size_inpixels": 800,
"recogn_rectify_enabled": True, # heavy on cpu
"recogn_minscore": 0.4,
"recogn_score_type": "min"
}
"debug_level": "fatal",
"debug_write_input_image_enabled": False,
"debug_internal_data_path": ".",
"num_threads": -1,
"gpgpu_enabled": True,
"max_latency": -1,
"klass_vcr_gamma": 1.5,
"detect_roi": [0, 0, 0, 0],
"detect_minscore": 0.35,
"car_noplate_detect_min_score": 0.8,
"pyramidal_search_enabled": False,
"pyramidal_search_sensitivity": 0.38,
"pyramidal_search_minscore": 0.8,
"pyramidal_search_min_image_size_inpixels": 800,
"recogn_rectify_enabled": True,
"recogn_minscore": 0.4,
"recogn_score_type": "min"
}
IMAGE_TYPES_MAPPING = {
'RGB': ultimateAlprSdk.ULTALPR_SDK_IMAGE_TYPE_RGB24,
@ -54,34 +77,42 @@ IMAGE_TYPES_MAPPING = {
'L': ultimateAlprSdk.ULTALPR_SDK_IMAGE_TYPE_Y
}
config = json.dumps(JSON_CONFIG)
def start_backend_loop():
global boot_time, counter
while True:
load_engine()
# loop for about an hour or 3000 requests then reload the engine (fix for trial license)
while counter < 3000 and time.time() - boot_time < 3600:
# every 120 sec
if int(time.time()) % 120 == 0:
if not is_engine_loaded():
unload_engine()
load_engine()
time.sleep(1)
unload_engine()
# Reset counter and boot_time to restart the loop
with counter_lock:
counter = 0
boot_time = time.time()
def is_engine_loaded():
# hacky way to check if the engine is loaded cause the SDK doesn't provide a method for it
return ultimateAlprSdk.UltAlprSdkEngine_requestRuntimeLicenseKey().isOK()
def load_engine():
JSON_CONFIG["assets_folder"] = os.path.join(bundle_dir, "assets")
JSON_CONFIG.update({
"charset": "latin",
"car_noplate_detect_enabled": False,
"ienv_enabled": False,
"openvino_enabled": True,
"openvino_device": "CPU",
"npu_enabled": False,
"klass_lpci_enabled": False,
"klass_vcr_enabled": False,
"klass_vmmr_enabled": False,
"klass_vbsr_enabled": False,
"license_token_file": "",
"license_token_data": ""
})
result = ultimateAlprSdk.UltAlprSdkEngine_init(json.dumps(JSON_CONFIG))
result = ultimateAlprSdk.UltAlprSdkEngine_init(config)
if not result.isOK():
raise RuntimeError("Init failed: %s" % result.phrase())
while counter < 3000:
sleep(1)
unload_engine()
load_engine()
def unload_engine():
result = ultimateAlprSdk.UltAlprSdkEngine_deInit()
@ -91,27 +122,20 @@ def unload_engine():
def process_image(image: Image) -> str:
global counter
counter += 1
with counter_lock:
counter += 1
width, height = image.size
if image.mode in IMAGE_TYPES_MAPPING:
image_type = IMAGE_TYPES_MAPPING[image.mode]
else:
raise ValueError("Invalid mode: %s" % image.mode)
image_type = IMAGE_TYPES_MAPPING.get(image.mode, None)
if image_type is None:
raise ValueError(f"Invalid mode: {image.mode}")
result = ultimateAlprSdk.UltAlprSdkEngine_process(
image_type,
image.tobytes(),
width,
height,
0, # stride
1 # exifOrientation
image_type, image.tobytes(), width, height, 0, 1
)
if not result.isOK():
raise RuntimeError("Process failed: %s" % result.phrase())
else:
return result.json()
raise RuntimeError(f"Process failed: {result.phrase()}")
return result.json()
def create_rest_server_flask():
@ -119,30 +143,102 @@ def create_rest_server_flask():
@app.route('/v1/image/alpr', methods=['POST'])
def alpr():
"""
The function receives an image and processes it using the ultimateALPR SDK.
Parameters:
- upload: The image to be processed
- grid_size: The number of cells to split the image into (e.g. 3)
- wanted_cells: The cells to process in the grid separated by commas (e.g. 1,2,3,4) (max: grid_size²)
- whole_image_fallback: If set to true, the whole image will be processed if no plates are found in the specified cells. (default: true)
"""
interference = time.time()
whole_image_fallback = request.form.get('whole_image_fallback', 'true').lower() == 'true'
if 'upload' not in request.files:
return jsonify({'error': 'No image found'})
try:
if 'upload' not in request.files:
return jsonify({'error': 'No image found'}), 400
image = request.files['upload']
if image.filename == '':
return jsonify({'error': 'No selected file'})
grid_size = int(request.form.get('grid_size', 1))
wanted_cells = _get_wanted_cells_from_request(request, grid_size)
image = Image.open(image)
result = process_image(image)
result = convert_to_cpai_compatible(result)
image_file = request.files['upload']
if image_file.filename == '':
return jsonify({'error': 'No selected file'}), 400
if not result['predictions']:
print("No plate found in the image, attempting to split the image")
image = _load_image_from_request(request)
predictions_found = find_best_plate_with_split(image)
result = {
'predictions': [],
'plates': [],
'duration': 0
}
if predictions_found:
result['predictions'].append(max(predictions_found, key=lambda x: x['confidence']))
if grid_size < 2:
logger.debug("Grid size < 2, processing the whole image")
response = process_image(image)
result.update(_parse_result_from_ultimatealpr(response))
else:
logger.debug(f"Grid size: {grid_size}, processing specified cells: {wanted_cells}")
predictions_found = _find_best_plate_with_grid_split(image, grid_size, wanted_cells)
result['predictions'].extend(predictions_found)
result['processMs'] = round((time.time() - interference) * 1000, 2)
result['inferenceMs'] = result['processMs']
return jsonify(result)
if not result['predictions']:
if grid_size >= 2 and whole_image_fallback:
logger.debug("No plates found in the specified cells, trying whole image as last resort")
response = process_image(image)
result.update(_parse_result_from_ultimatealpr(response))
if result['predictions'] and len(result['predictions']) > 0:
all_plates = []
for plate in result['predictions']:
all_plates.append(plate.get('plate'))
isolated_plate_image = isolate_plate_in_image(image, plate)
plate['image'] = f"data:image/png;base64,{image_to_base64(isolated_plate_image, compress=True)}"
result['plates'] = all_plates
duration = round((time.time() - interference) * 1000, 2)
result.update({'duration': duration})
return jsonify(result)
except Exception as e:
logger.error(f"Error processing image: {e}")
logger.error(traceback.format_exc())
return jsonify({'error': 'Error processing image'}), 500
@app.route('/v1/image/alpr_grid_debug', methods=['POST'])
def alpr_grid_debug():
"""
The function receives an image and returns it with the grid overlayed on it (for debugging purposes).
Parameters:
- upload: The image to be processed
- grid_size: The number of cells to split the image into (e.g. 3)
- wanted_cells: The cells to process in the grid separated by commas (e.g. 1,2,3,4) (max: grid_size²)
Returns:
- The image with the grid overlayed on it
"""
try:
if 'upload' not in request.files:
return jsonify({'error': 'No image found'}), 400
grid_size = int(request.form.get('grid_size', 3))
wanted_cells = _get_wanted_cells_from_request(request, grid_size)
image_file = request.files['upload']
if image_file.filename == '':
return jsonify({'error': 'No selected file'}), 400
image = _load_image_from_request(request)
image = draw_grid_and_cell_numbers_on_image(image, grid_size, wanted_cells)
image_base64 = image_to_base64(image, compress=True)
return jsonify({"image": f"data:image/png;base64,{image_base64}"})
except Exception as e:
logger.error(f"Error processing image: {e}")
logger.error(traceback.format_exc())
return jsonify({'error': 'Error processing image'}), 500
@app.route('/')
def index():
@ -151,63 +247,85 @@ def create_rest_server_flask():
return app
def convert_to_cpai_compatible(result):
result = json.loads(result)
def _get_wanted_cells_from_request(request, grid_size) -> list:
"""
Helper function to extract wanted cells from the request.
If no cells are specified, it returns all cells in the grid.
"""
wanted_cells = request.form.get('wanted_cells')
if wanted_cells:
wanted_cells = [int(cell) for cell in wanted_cells.split(',')]
else:
wanted_cells = list(range(1, grid_size * grid_size + 1))
if not all(1 <= cell <= grid_size * grid_size for cell in wanted_cells):
raise ValueError("Invalid cell numbers provided.")
return wanted_cells
def _load_image_from_request(request) -> Image:
"""
Helper function to load an image from the request.
It expects the image to be in the 'upload' field of the request.
"""
if 'upload' not in request.files:
raise ValueError("No image found in request.")
image_file = request.files['upload']
if image_file.filename == '':
raise ValueError("No selected file.")
try:
image = Image.open(image_file)
return correct_image_orientation(image)
except Exception as e:
raise ValueError(f"Error loading image: {e}")
def _parse_result_from_ultimatealpr(result) -> dict:
result = json.loads(result)
response = {
'success': "true",
'processMs': result['duration'],
'inferenceMs': result['duration'],
'predictions': [],
'message': '',
'moduleId': 'ALPR',
'moduleName': 'License Plate Reader',
'code': 200,
'command': 'alpr',
'requestId': 'null',
'inferenceDevice': 'none',
'analysisRoundTripMs': 0,
'processedBy': 'none',
'timestamp': ''
}
if 'plates' in result:
plates = result['plates']
for plate in plates:
warpedBox = plate['warpedBox']
x_coords = warpedBox[0::2]
y_coords = warpedBox[1::2]
x_min = min(x_coords)
x_max = max(x_coords)
y_min = min(y_coords)
y_max = max(y_coords)
response['predictions'].append({
'confidence': plate['confidences'][0] / 100,
'label': "Plate: " + plate['text'],
'plate': plate['text'],
'x_min': x_min,
'x_max': x_max,
'y_min': y_min,
'y_max': y_max
})
for plate in result.get('plates', []):
warpedBox = plate['warpedBox']
x_coords = warpedBox[0::2]
y_coords = warpedBox[1::2]
x_min, x_max = min(x_coords), max(x_coords)
y_min, y_max = min(y_coords), max(y_coords)
response['predictions'].append({
'confidence': plate['confidences'][0] / 100,
'plate': plate['text'],
'x_min': x_min,
'x_max': x_max,
'y_min': y_min,
'y_max': y_max
})
return response
def find_best_plate_with_split(image, split_size=4, wanted_cells=None):
if wanted_cells is None:
wanted_cells = [5, 6, 7, 9, 10, 11, 14, 15] # TODO: use params not specifc to my use case
def _find_best_plate_with_grid_split(image: Image, grid_size: int = 3, wanted_cells: list = None,
stop_at_first_match: bool = False) -> list:
"""
Splits the image into a grid and processes each cell to find the best plate.
Returns a list of predictions found in the specified cells.
"""
if grid_size < 2:
logger.debug("Grid size < 2, skipping split")
return []
predictions_found = []
width, height = image.size
cell_width = width // split_size
cell_height = height // split_size
cell_width = width // grid_size
cell_height = height // grid_size
for cell_index in range(1, split_size * split_size + 1):
row = (cell_index - 1) // split_size
col = (cell_index - 1) % split_size
for cell_index in range(1, grid_size * grid_size + 1):
row = (cell_index - 1) // grid_size
col = (cell_index - 1) % grid_size
left = col * cell_width
upper = row * cell_height
right = left + cell_width
@ -215,34 +333,123 @@ def find_best_plate_with_split(image, split_size=4, wanted_cells=None):
if cell_index in wanted_cells:
cell_image = image.crop((left, upper, right, lower))
result_cell = json.loads(process_image(cell_image))
result = process_image(cell_image)
logger.info(f"Processed image with result (grid): {result}")
result_cell = json.loads(result)
if 'plates' in result_cell:
for plate in result_cell['plates']:
warpedBox = plate['warpedBox']
x_coords = warpedBox[0::2]
y_coords = warpedBox[1::2]
x_min = min(x_coords) + left
x_max = max(x_coords) + left
y_min = min(y_coords) + upper
y_max = max(y_coords) + upper
for plate in result_cell.get('plates', []):
warpedBox = plate['warpedBox']
x_coords = warpedBox[0::2]
y_coords = warpedBox[1::2]
x_min = min(x_coords) + left
x_max = max(x_coords) + left
y_min = min(y_coords) + upper
y_max = max(y_coords) + upper
predictions_found.append({
'confidence': plate['confidences'][0] / 100,
'label': "Plate: " + plate['text'],
'plate': plate['text'],
'x_min': x_min,
'x_max': x_max,
'y_min': y_min,
'y_max': y_max
})
predictions_found.append({
'confidence': plate['confidences'][0] / 100,
'plate': plate['text'],
'x_min': x_min,
'x_max': x_max,
'y_min': y_min,
'y_max': y_max
})
if stop_at_first_match:
logger.debug(f"Found plate in cell {cell_index}: {plate['text']}")
return predictions_found
return predictions_found
def draw_grid_and_cell_numbers_on_image(image: Image, grid_size: int = 3, wanted_cells: list = None) -> Image:
"""
Draws a grid on the image and numbers the cells.
"""
if grid_size < 1:
grid_size = 1
if wanted_cells is None:
wanted_cells = list(range(1, grid_size * grid_size + 1))
width, height = image.size
cell_width = width // grid_size
cell_height = height // grid_size
draw = ImageDraw.Draw(image)
font = ImageFont.truetype(os.path.join(bundle_dir, 'assets', 'fonts', 'GlNummernschildEng-XgWd.ttf'),
image.size[0] // 10)
for cell_index in range(1, grid_size * grid_size + 1):
row = (cell_index - 1) // grid_size
col = (cell_index - 1) % grid_size
left = col * cell_width
upper = row * cell_height
right = left + cell_width
lower = upper + cell_height
if cell_index in wanted_cells:
draw.rectangle([left, upper, right, lower], outline="red", width=4)
draw.text((left + 5, upper + 5), str(cell_index), fill="red", font=font)
return image
def isolate_plate_in_image(image: Image, plate: dict, offset=10) -> Image:
"""
Isolates the plate area in the image and returns a cropped and resized image.
"""
x_min, x_max = plate.get('x_min'), plate.get('x_max')
y_min, y_max = plate.get('y_min'), plate.get('y_max')
cropped_image = image.crop((max(0, x_min - offset), max(0, y_min - offset), min(image.size[0], x_max + offset),
min(image.size[1], y_max + offset)))
resized_image = cropped_image.resize((int(cropped_image.size[0] * 3), int(cropped_image.size[1] * 3)),
resample=Image.Resampling.LANCZOS)
return resized_image
def image_to_base64(img: Image, compress=False) -> str:
"""Convert a Pillow image to a base64-encoded string."""
buffered = io.BytesIO()
if compress:
img = img.resize((img.size[0] // 2, img.size[1] // 2))
img.save(buffered, format="WEBP", quality=35, lossless=False)
else:
img.save(buffered, format="WEBP")
return base64.b64encode(buffered.getvalue()).decode('utf-8')
from PIL import Image, ExifTags
def correct_image_orientation(img):
try:
exif = img._getexif()
if exif is not None:
orientation_key = next(
(k for k, v in ExifTags.TAGS.items() if v == 'Orientation'), None)
if orientation_key is not None:
orientation = exif.get(orientation_key)
if orientation == 3:
img = img.rotate(180, expand=True)
elif orientation == 6:
img = img.rotate(270, expand=True)
elif orientation == 8:
img = img.rotate(90, expand=True)
except Exception as e:
print("EXIF orientation correction failed:", e)
return img
if __name__ == '__main__':
engine = threading.Thread(target=load_engine, daemon=True)
engine.start()
engine_thread = threading.Thread(target=start_backend_loop, daemon=True)
engine_thread.start()
app = create_rest_server_flask()
app.run(host='0.0.0.0', port=5000)

build_alpr_api.sh

@ -1 +1,14 @@
pyinstaller --noconfirm --onefile --console --add-data libs:. --add-data assets:assets --add-data static:static --add-data templates:templates --name easy-local-alpr-1.1.0-openvinocpu_linux_x86_64 "alpr_api.py"
#!/bin/bash
VERSION=1.6.0
rm -rf buildenv build dist *.spec
python3.10 -m venv buildenv
source buildenv/bin/activate
python3.10 -m pip install --upgrade pip pyinstaller
python3.10 -m pip install ./wheel/ultimateAlprSdk-3.14.1-cp310-cp310-linux_x86_64.whl
pip install -r requirements.txt
pyinstaller --noconfirm --onefile --console --add-data libs:. --add-data assets:assets --add-data static:static --add-data templates:templates --name easy-local-alpr-$VERSION-openvinocpu_linux_x86_64 "alpr_api.py"
deactivate
rm -rf buildenv

build_and_setup_ultimatealvr.sh

@ -1,5 +1,7 @@
#!/bin/bash
deactivate 2>/dev/null
# Function to create virtual environment, install the wheel, and copy assets and libs
install_and_setup() {
echo "Creating virtual environment at the root..."
@ -55,10 +57,10 @@ prompt_auto_setup() {
esac
}
# Directories
# Variables
ROOT_DIR=$(pwd)
BUILD_DIR="$ROOT_DIR/tmp-build-env"
SDK_ZIP_URL="https://github.com/DoubangoTelecom/ultimateALPR-SDK/archive/8130c76140fe8edc60fe20f875796121a8d22fed.zip"
SDK_ZIP_URL="https://github.com/DoubangoTelecom/ultimateALPR-SDK/archive/febe9921e7dd37e64901d84cad01d51eca6c6a71.zip" # 3.14.1
SDK_ZIP="$BUILD_DIR/temp-sdk.zip"
SDK_DIR="$BUILD_DIR/temp-sdk"
BIN_DIR="$SDK_DIR/binaries/linux/x86_64"
@ -69,7 +71,12 @@ cd "$BUILD_DIR" || exit
# Clone SDK
echo "Downloading SDK..."
wget "$SDK_ZIP_URL" -O "$SDK_ZIP" >/dev/null 2>&1
if [ -f "$SDK_ZIP" ]; then
echo "SDK zip already exists."
rm -R "$SDK_DIR"
else
wget "$SDK_ZIP_URL" -O "$SDK_ZIP" >/dev/null 2>&1
fi
if [ $? -ne 0 ]; then
echo "Failed to download SDK."
exit 1
@ -121,26 +128,30 @@ read -r -p "Do you want TensorFlow for CPU or GPU? (cpu/gpu): " tf_choice
mkdir -p "$BIN_DIR/tensorflow"
if [ "$tf_choice" == "gpu" ]; then
echo "Downloading TensorFlow GPU..."
wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-2.6.0.tar.gz >/dev/null 2>&1
wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-2.6.0.tar.gz >/dev/null 2>&1 # Use 2.6 for newer GPU support
if [ $? -ne 0 ]; then
echo "Failed to download TensorFlow GPU."
exit 1
fi
echo "Extracting TensorFlow GPU..."
tar -xf libtensorflow-gpu-linux-x86_64-2.6.0.tar.gz -C "$BIN_DIR/tensorflow" >/dev/null 2>&1
mv "$BIN_DIR/tensorflow/lib/libtensorflow.so.1" "$BIN_DIR/libs/libtensorflow.so.1"
mv "$BIN_DIR/tensorflow/lib/libtensorflow_framework.so.2.6.0" "$BIN_DIR/libs/libtensorflow_framework.so.2"
else
echo "Downloading TensorFlow CPU..."
wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-2.6.0.tar.gz >/dev/null 2>&1
#wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-2.6.0.tar.gz >/dev/null 2>&1
wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-1.14.0.tar.gz >/dev/null 2>&1 # Use 1.14 as it's smaller in size
if [ $? -ne 0 ]; then
echo "Failed to download TensorFlow CPU."
exit 1
fi
echo "Extracting TensorFlow CPU..."
tar -xf libtensorflow-cpu-linux-x86_64-2.6.0.tar.gz -C "$BIN_DIR/tensorflow" >/dev/null 2>&1
fi
tar -xf libtensorflow-cpu-linux-x86_64-1.14.0.tar.gz -C "$BIN_DIR/tensorflow" >/dev/null 2>&1
mv "$BIN_DIR/tensorflow/lib/libtensorflow.so.1" "$BIN_DIR/libs/libtensorflow.so.1"
mv "$BIN_DIR/tensorflow/lib/libtensorflow_framework.so.2.6.0" "$BIN_DIR/libs/libtensorflow_framework.so.2"
mv "$BIN_DIR/tensorflow/lib/"* "$BIN_DIR/libs/"
fi
# Build the wheel
echo "Building the wheel..."
@ -157,6 +168,8 @@ mv "$BIN_DIR/plugins.xml" "$BUILD_DIR/libs"
# Move the assets to the root directory
mv "$SDK_DIR/assets" "$BUILD_DIR/assets"
# Removes unused models (only keeps TensorFlow and OpenVINO basic license plate recognition)
rm -Rf "$BUILD_DIR/assets/images" "$BUILD_DIR/assets/models.amlogic_npu" "$BUILD_DIR/assets/models.tensorrt" $BUILD_DIR/assets/models/ultimateALPR-SDK_klass* $BUILD_DIR/assets/models/ultimateALPR-SDK_*mobile* $BUILD_DIR/assets/models/*korean* $BUILD_DIR/assets/models/*chinese* $BUILD_DIR/assets/models/ultimateALPR-SDK_recogn1x100* $BUILD_DIR/assets/models/ultimateALPR-SDK_recogn1x200* $BUILD_DIR/assets/models/ultimateALPR-SDK_recogn1x300* $BUILD_DIR/assets/models/ultimateALPR-SDK_recogn1x400* $BUILD_DIR/assets/models.openvino/ultimateALPR-SDK_klass*
# Deactivate and clean up the build virtual environment
echo "Deactivating and cleaning up virtual environment..."

requirements.txt

@ -1,3 +1,3 @@
flask
pillow
Pillow
ultimateAlprSdk

Web UI HTML template (``templates/``)

@ -1,11 +1,11 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta content="width=device-width, initial-scale=1.0" name="viewport">
<title>Easy Local ALPR - API</title>
<script src="https://cdn.tailwindcss.com"></script>
<!-- Include Google Sans font -->
<link href="https://fonts.googleapis.com/css2?family=Google+Sans:wght@400;500;700&display=swap" rel="stylesheet">
<style>
body {
@ -14,97 +14,343 @@
background-size: 20px 20px;
font-family: 'Google Sans', sans-serif;
}
.grid-cell {
border: 2px solid #e5e7eb; /* Tailwind gray-200 */
display: flex;
align-items: center;
justify-content: center;
cursor: pointer;
border-radius: 0.5rem; /* Rounded corners */
transition: background-color 0.2s ease, color 0.2s ease, border-color 0.2s ease; /* Smooth transition */
padding-top: 25%; /* More compact rectangular shape */
position: relative;
overflow: hidden;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05); /* Subtle shadow for modern look */
}
.grid-cell span {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
font-weight: 500; /* Semi-bold text */
}
.grid-cell.selected {
background-color: #1f2937; /* Tailwind gray-800 */
color: white;
border-color: #1f2937; /* Match border color with background */
}
.grid-cell:hover {
background-color: #9ca3af; /* Tailwind gray-400 for hover effect */
color: white;
border-color: #9ca3af; /* Match border color with hover effect */
}
.grid-container {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(50px, 1fr));
gap: 8px; /* Adjust gap between cells */
margin-top: 1rem;
}
</style>
</head>
<body class="bg-neutral-100 dark:bg-neutral-900 dark:text-white flex items-center justify-center min-h-screen p-4">
<!-- Logo -->
<div class="absolute top-4 left-4 z-50">
<img id="logo" src="{{ url_for('static', filename='logo_black.webp') }}" alt="Logo" class="h-12 dark:hidden">
<img id="logoDark" src="{{ url_for('static', filename='logo_white.webp') }}" alt="Logo" class="h-12 hidden dark:block">
</div>
<div class="bg-white dark:bg-neutral-800 p-6 rounded-lg shadow-lg w-full max-w-md mt-16">
<h1 class="text-2xl font-bold mb-4 text-center dark:text-gray-200">Upload Image for ALPR</h1>
<form id="uploadForm" enctype="multipart/form-data" class="space-y-4">
<body class="bg-neutral-100 dark:bg-neutral-900 dark:text-white flex items-center justify-center min-h-screen p-4">
<div class="absolute top-4 left-4 z-50">
<img alt="Logo" class="h-12 dark:hidden" id="logo" src="{{ url_for('static', filename='logo_black.webp') }}">
<img alt="Logo" class="h-12 hidden dark:block" id="logoDark"
src="{{ url_for('static', filename='logo_white.webp') }}">
</div>
<div class="bg-white dark:bg-neutral-800 p-6 rounded-lg shadow-lg w-full max-w-xl mt-16">
<h1 class="text-2xl font-bold mb-4 text-center dark:text-gray-200">Select Service</h1>
<form class="space-y-4" id="serviceForm">
<div>
<label class="block text-sm font-medium text-gray-700 dark:text-gray-300" for="service">Choose a
service:</label>
<select class="mt-1 block w-full py-2 px-3 border border-gray-300 bg-white dark:bg-neutral-800 dark:border-neutral-700 rounded-md shadow-sm focus:outline-none focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm"
id="service" name="service" onchange="updateFormFields()">
<option value="alpr">Plate Recognition (ALPR)</option>
<option value="alpr_grid_debug">Grid Size Helper</option>
</select>
</div>
<div class="service-fields hidden" id="alprFields">
<div>
<label for="upload" class="block text-sm font-medium text-gray-700 dark:text-gray-300">Choose an image:</label>
<label for="upload_alpr" class="block text-sm font-medium text-gray-700 dark:text-gray-300">Choose an
image:</label>
<div class="mt-1 flex items-center">
<input type="file" id="upload" name="upload" accept="image/*" class="hidden" onchange="updateFileName()">
<label for="upload" class="cursor-pointer inline-flex items-center justify-center px-4 py-2 border border-gray-400 rounded-md shadow-sm text-sm font-medium text-gray-700 dark:text-gray-300 bg-white dark:bg-neutral-800 hover:bg-neutral-50 dark:hover:bg-neutral-600">
Select file
</label>
<span id="fileName" class="ml-2 text-sm text-gray-600 dark:text-gray-300"></span>
<input type="file" id="upload_alpr" name="upload" accept="image/*" class="hidden"
onchange="updateFileName();">
<label for="upload_alpr"
class="cursor-pointer inline-flex items-center justify-center px-4 py-2 border border-gray-400 rounded-md shadow-sm text-sm font-medium text-gray-700 dark:text-gray-300 bg-white dark:bg-neutral-800 hover:bg-neutral-50 dark:hover:bg-neutral-600">Select
file</label>
<span id="fileName_alpr" class="ml-2 text-sm text-gray-600 dark:text-gray-300"></span>
</div>
</div>
<div id="imagePreview" class="mt-4 hidden">
<img id="previewImage" src="#" alt="Preview" class="max-w-full h-auto rounded-lg">
<div class="mt-4">
<label for="grid_size_alpr" class="block text-sm font-medium text-gray-700 dark:text-gray-300">Grid
Size:</label>
<input type="number" id="grid_size_alpr" name="grid_size" value="3"
class="mt-1 block w-full px-3 py-2 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm dark:bg-neutral-800 dark:border-neutral-700">
</div>
<button type="submit" class="w-full py-2 px-4 bg-black text-white font-semibold rounded-md shadow-sm hover:bg-neutral-900 dark:bg-neutral-900 dark:hover:bg-neutral-950">Upload</button>
</form>
<div class="mt-6">
<h2 class="text-xl font-semibold mb-2 dark:text-gray-200">Response</h2>
<pre id="responseBox" class="bg-neutral-100 dark:bg-neutral-900 p-4 border rounded-lg text-sm text-gray-900 dark:text-gray-200 overflow-x-auto"></pre>
<div class="mt-4">
<label for="wanted_cells_alpr" class="block text-sm font-medium text-gray-700 dark:text-gray-300">Wanted
Cells:</label>
<div id="gridContainer_alpr" class="grid-container"></div>
<input type="hidden" id="wanted_cells_alpr" name="wanted_cells">
</div>
<div class="mt-4 flex flex-row space-between">
<div>
<label for="whole_image_fallback_alpr" class="block text-sm font-medium text-gray-700 dark:text-gray-300">Fallback to whole image if no plate is found in specified cells?</label>
<span class="text-sm text-gray-500 dark:text-gray-400">Only applies if grid size >=2</span>
</div>
<div id="gridContainer_alpr" class="grid-container"></div>
<input type="checkbox" id="whole_image_fallback_alpr" checked>
</div>
<input id="plate_image_alpr" name="plate_image_alpr" type="hidden" value="true">
</div>
<div class="service-fields hidden" id="alpr_grid_debugFields">
<div>
<label for="upload_alpr_grid_debug" class="block text-sm font-medium text-gray-700 dark:text-gray-300">Choose
an image:</label>
<div class="mt-1 flex items-center">
<input type="file" id="upload_alpr_grid_debug" name="upload" accept="image/*" class="hidden"
onchange="updateFileName();">
<label for="upload_alpr_grid_debug"
class="cursor-pointer inline-flex items-center justify-center px-4 py-2 border border-gray-400 rounded-md shadow-sm text-sm font-medium text-gray-700 dark:text-gray-300 bg-white dark:bg-neutral-800 hover:bg-neutral-50 dark:hover:bg-neutral-600">Select
file</label>
<span id="fileName_alpr_grid_debug" class="ml-2 text-sm text-gray-600 dark:text-gray-300"></span>
</div>
</div>
<div>
<label for="grid_size_alpr_grid_debug"
class="block text-sm font-medium text-gray-700 dark:text-gray-300">Grid Size:</label>
<input type="number" id="grid_size_alpr_grid_debug" name="grid_size" value="1"
class="mt-1 block w-full px-3 py-2 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm dark:bg-neutral-800 dark:border-neutral-700">
</div>
<div>
<label for="wanted_cells_alpr_grid_debug"
class="block text-sm font-medium text-gray-700 dark:text-gray-300">Wanted Cells:</label>
<div id="gridContainer_alpr_grid_debug" class="grid-container"></div>
<input type="hidden" id="wanted_cells_alpr_grid_debug" name="wanted_cells">
</div>
</div>
<div id="imagePreview" class="mt-4 hidden">
<label id="imagePreviewLabel"
class="block text-sm font-medium text-gray-700 dark:text-gray-300 mb-2">Identified plate images:</label>
<div id="previewImageContainer" class="grid grid-cols-1 sm:grid-cols-2 gap-4"></div>
<img id="previewImageDebug" src="#" alt="" class="max-w-full h-auto rounded-lg">
</div>
<button class="w-full py-2 px-4 bg-black text-white font-semibold rounded-md shadow-sm hover:bg-neutral-900 dark:bg-neutral-900 dark:hover:bg-neutral-950"
id="submitButton" type="submit">Submit
</button>
</form>
<div class="mt-6">
<h2 class="text-xl font-semibold mb-2 dark:text-gray-200">
Response
<span class="text-sm font-normal" id="timer"></span>
</h2>
<pre class="bg-neutral-100 dark:bg-neutral-900 p-4 border rounded-lg text-sm text-gray-900 dark:text-gray-200 overflow-x-auto"
id="responseBox"></pre>
</div>
</div>
<script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
<script>
function updateFileName() {
var input = document.getElementById('upload');
var fileName = document.getElementById('fileName');
var imagePreview = document.getElementById('imagePreview');
var previewImage = document.getElementById('previewImage');
<script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
<script>
function updateFormFields() {
const service = document.getElementById('service').value;
localStorage.setItem('selectedService', service);
fileName.textContent = input.files[0] ? input.files[0].name : '';
document.querySelectorAll('.service-fields').forEach(field => {
field.classList.add('hidden');
field.querySelectorAll('input, select').forEach(field => field.disabled = true);
});
if (input.files && input.files[0]) {
var reader = new FileReader();
const selectedServiceFields = document.getElementById(service + 'Fields');
selectedServiceFields.classList.remove('hidden');
selectedServiceFields.querySelectorAll('input, select').forEach(field => field.disabled = false);
reader.onload = function (e) {
previewImage.src = e.target.result;
imagePreview.classList.remove('hidden');
['responseBox', 'timer', 'fileName_' + service, 'previewImage', 'imagePreview', 'upload_' + service]
.forEach(id => {
const element = document.getElementById(id);
if (element) {
if (element.tagName === 'DIV') element.classList.add('hidden');
if (element.tagName === 'INPUT') element.value = '';
if (element.tagName === 'SPAN' || element.tagName === 'PRE') element.textContent = '';
if (element.tagName === 'IMG') element.src = '';
}
});
reader.readAsDataURL(input.files[0]);
}
updateGrid(service);
}
function initializeForm() {
const savedService = localStorage.getItem('selectedService');
if (savedService) {
document.getElementById('service').value = savedService;
updateFormFields();
}
}
const prefersDarkScheme = window.matchMedia("(prefers-color-scheme: dark)");
function toggleLogo() {
const logo = document.getElementById('logo');
const logoDark = document.getElementById('logoDark');
if (prefersDarkScheme.matches) {
logo.style.display = 'none';
logoDark.style.display = 'block';
} else {
logo.style.display = 'block';
logoDark.style.display = 'none';
}
function toggleLogo() {
const logo = document.getElementById('logo');
const logoDark = document.getElementById('logoDark');
if (window.matchMedia("(prefers-color-scheme: dark)").matches) {
logo.style.display = 'none';
logoDark.style.display = 'block';
} else {
logo.style.display = 'block';
logoDark.style.display = 'none';
}
}
function updateFileName() {
const service = document.getElementById('service').value;
const input = document.getElementById('upload_' + service);
const fileName = document.getElementById('fileName_' + service);
const imagePreview = document.getElementById('imagePreview');
const previewImage = document.getElementById('previewImage');
const imagePreviewLabel = document.getElementById('imagePreviewLabel');
fileName.textContent = input.files[0] ? input.files[0].name : '';
imagePreviewLabel.textContent = 'Preview:';
if (input.files && input.files[0]) {
const reader = new FileReader();
reader.onload = (e) => {
previewImage.src = e.target.result;
imagePreview.classList.remove('hidden');
}
reader.readAsDataURL(input.files[0]);
}
}
function updateGrid(service) {
const gridSize = parseInt(document.getElementById('grid_size_' + service).value);
const gridContainer = document.getElementById('gridContainer_' + service);
gridContainer.innerHTML = '';
gridContainer.style.gridTemplateColumns = `repeat(${gridSize}, minmax(0, 1fr))`;
const wantedCellsInput = document.getElementById('wanted_cells_' + service);
const selectedCells = wantedCellsInput.value ? wantedCellsInput.value.split(',').map(Number) : [];
for (let i = 0; i < gridSize * gridSize; i++) {
const cell = document.createElement('div');
cell.classList.add('grid-cell');
const cellSpan = document.createElement('span');
cellSpan.textContent = i + 1;
cell.appendChild(cellSpan);
if (selectedCells.includes(i + 1)) cell.classList.add('selected');
cell.addEventListener('click', () => {
cell.classList.toggle('selected');
updateWantedCells(service);
});
gridContainer.appendChild(cell);
}
}
function updateWantedCells(service) {
const gridContainer = document.getElementById('gridContainer_' + service);
const selectedCells = [];
gridContainer.querySelectorAll('.grid-cell.selected').forEach(cell => {
selectedCells.push(cell.textContent);
});
document.getElementById('wanted_cells_' + service).value = selectedCells.join(',');
}
$(document).ready(function () {
initializeForm();
toggleLogo();
prefersDarkScheme.addEventListener('change', toggleLogo);
window.matchMedia("(prefers-color-scheme: dark)").addEventListener('change', toggleLogo);
$(document).ready(function () {
$('#uploadForm').on('submit', function (e) {
e.preventDefault();
var formData = new FormData(this);
$.ajax({
url: '/v1/image/alpr',
type: 'POST',
data: formData,
processData: false,
contentType: false,
success: function (data) {
$('#responseBox').text(JSON.stringify(data, null, 2));
},
error: function (xhr, status, error) {
var err = JSON.parse(xhr.responseText);
$('#responseBox').text(JSON.stringify(err, null, 2));
$('#grid_size_alpr, #grid_size_alpr_grid_debug').on('input', function () {
updateGrid(document.getElementById('service').value);
});
$('#serviceForm').on('submit', function (e) {
e.preventDefault();
const service = $('#service').val();
const formData = new FormData(this);
formData.append('whole_image_fallback', $("#whole_image_fallback_alpr").is(":checked") ? "true" : "false");
var url;
if (service === 'alpr') {
url = '/v1/image/alpr';
type = 'POST';
} else if (service === 'alpr_grid_debug') {
url = '/v1/image/alpr_grid_debug';
type = 'POST';
}
$('#submitButton').prop('disabled', true).html('<svg class="animate-spin h-5 w-5 text-white mx-auto" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none"><circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle><path class="opacity-75" fill="currentColor" d="M12 2a10 10 0 00-8 4.9l1.5 1A8 8 0 0112 4V2z"></path></svg>');
const startTime = Date.now();
$.ajax({
url: url,
type: type,
data: formData,
processData: false,
contentType: false,
success: function (data) {
const endTime = Date.now();
const elapsedTime = endTime - startTime;
$('#responseBox').text(JSON.stringify(data, null, 2));
$('#timer').text(`(${elapsedTime} ms)`);
$('#submitButton').prop('disabled', false).text('Submit');
$('#previewImageDebug').attr('src', '');
$('#previewImageContainer').empty();
if (data.image) {
$('#previewImageDebug').attr('src', data.image);
$('#imagePreview').removeClass('hidden');
}
});
if (Array.isArray(data.predictions) && data.predictions.length > 0) {
data.predictions.forEach((prediction, index) => {
if (prediction.image) {
const img = $('<img>')
.attr('src', prediction.image)
.addClass('max-w-full h-auto rounded-lg border border-gray-300 dark:border-gray-700 shadow');
const wrapper = $('<div>').append(
$('<p>').addClass('text-sm mb-1 text-gray-600 dark:text-gray-300').text(`Plate ${index + 1}`),
img
);
$('#previewImageContainer').append(wrapper);
}
});
$('#imagePreview').removeClass('hidden');
$('#imagePreviewLabel').text('Identified plate images:');
} else {
updateFileName(); // fallback if no images found
}
},
error: function (xhr) {
const endTime = Date.now();
const elapsedTime = endTime - startTime;
const err = JSON.parse(xhr.responseText);
$('#responseBox').text(JSON.stringify(err, null, 2));
$('#timer').text(`(${elapsedTime} ms)`);
$('#submitButton').prop('disabled', false).text('Submit');
}
});
});
</script>
});
</script>
</body>
</html>