|
# rupindersingh1313/30_8_2025_dataset |
|
|
|
## Dataset Description |
|
|
|
This dataset contains Punjabi OCR data with page images and their corresponding text annotations, ready for machine learning applications. |
|
|
|
### Dataset Summary |
|
|
|
- **Language**: Punjabi (pa-IN) |
|
- **Script**: Gurmukhi |
|
- **Total Pages**: 769 |
|
- **Source**: Generated using a Punjabi OCR annotation pipeline
|
- **Format**: Image-annotation pairs with original JSON annotations |
|
|
|
### Dataset Splits |
|
|
|
- **Train**: 615 samples |
|
- **Validation**: 76 samples |
|
- **Test**: 78 samples |
|
|
|
The dataset is split into train/validation/test sets in an approximately 80/10/10 ratio:
|
- Training set for model training |
|
- Validation set for hyperparameter tuning and model selection |
|
- Test set for final evaluation |
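
As a sketch of how such a split could be reproduced (the seed and rounding below are assumptions, not the pipeline's actual parameters), a plain-Python 80/10/10 shuffle-and-cut of 769 samples yields exactly the sizes listed above:

```python
import random

def split_indices(n, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle indices 0..n-1 and cut them into train/val/test parts.

    Hypothetical helper for illustration; the dataset's actual split
    procedure and seed are not documented here.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * train_frac)   # floor(769 * 0.8) = 615
    n_val = int(n * val_frac)       # floor(769 * 0.1) = 76
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(769)
print(len(train), len(val), len(test))  # 615 76 78
```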
|
|
|
### Dataset Structure |
|
|
|
Each row contains: |
|
- `image`: The page image (PNG format, high resolution) |
|
- `annotation`: Complete OCR annotation in JSON format (as string) |
|
|
|
The annotation JSON contains the original structure with: |
|
- Document metadata (language, script, image dimensions) |
|
- Text hierarchy (regions, lines, words) |
|
- Bounding box coordinates for all text elements |
|
- Complete text transcription |
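
Since each text element carries a flat `[x1, y1, x2, y2, ...]` polygon, a common first step is reducing it to an axis-aligned bounding box. A minimal sketch (the helper name is mine, not part of the dataset tooling):

```python
def polygon_to_bbox(polygon):
    """Convert a flat [x1, y1, x2, y2, ...] polygon into (x_min, y_min, x_max, y_max)."""
    xs = polygon[0::2]  # even positions are x coordinates
    ys = polygon[1::2]  # odd positions are y coordinates
    return min(xs), min(ys), max(xs), max(ys)

# A rectangular four-point polygon (made-up coordinates)
print(polygon_to_bbox([10, 20, 110, 20, 110, 60, 10, 60]))  # (10, 20, 110, 60)
```

The resulting box can be fed directly to `PIL.Image.crop` to extract a word or line image.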
|
|
|
### Usage |
|
|
|
```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("rupindersingh1313/30_8_2025_dataset")

# Access the splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Iterate through the training data
for sample in train_data:
    image = sample["image"]  # PIL image
    annotation = json.loads(sample["annotation"])  # parse the JSON annotation string
    print(f"Image size: {image.size}")
    print(f"Annotation keys: {list(annotation.keys())}")
```
|
|
|
### Annotation Format |
|
|
|
The `annotation` field contains JSON with the following structure (the `[x1, y1, x2, y2, ...]` entries are placeholder coordinates, so the snippet is schematic rather than strictly valid JSON):

```json
{
  "document": {
    "id": "doc_001",
    "language": "pa-IN",
    "script": "Gurmukhi",
    "image": {"width": 2481, "height": 3507, "dpi": 300}
  },
  "hierarchy": {
    "regions": [
      {
        "region_id": 1,
        "type": "text_block",
        "polygon": [x1, y1, x2, y2, ...],
        "lines": [
          {
            "line_id": 1,
            "polygon": [x1, y1, x2, y2, ...],
            "words": [
              {
                "word_id": 1,
                "text": "ਪੰਜਾਬੀ",
                "polygon": [x1, y1, x2, y2, ...]
              }
            ]
          }
        ]
      }
    ]
  }
}
```
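
Given that structure, the full transcription can be recovered by walking regions → lines → words. The sketch below uses a small hand-written annotation in the same shape (illustrative values only, not a real sample from the dataset):

```python
import json

# Minimal annotation shaped like the schema above (illustrative values only)
annotation_json = """
{
  "document": {"id": "doc_001", "language": "pa-IN", "script": "Gurmukhi"},
  "hierarchy": {
    "regions": [
      {"region_id": 1, "type": "text_block",
       "lines": [
         {"line_id": 1,
          "words": [{"word_id": 1, "text": "ਪੰਜਾਬੀ"},
                    {"word_id": 2, "text": "ਭਾਸ਼ਾ"}]}
       ]}
    ]
  }
}
"""

def extract_text(annotation):
    """Join word texts into lines, and lines into a page transcription."""
    lines = []
    for region in annotation["hierarchy"]["regions"]:
        for line in region["lines"]:
            lines.append(" ".join(word["text"] for word in line["words"]))
    return "\n".join(lines)

annotation = json.loads(annotation_json)
print(extract_text(annotation))  # ਪੰਜਾਬੀ ਭਾਸ਼ਾ
```

The same walk works on `json.loads(sample["annotation"])` for real samples, since the keys match the documented schema.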
|
|
|
### Use Cases |
|
|
|
This dataset is suitable for: |
|
- **OCR Model Training**: Train custom OCR models for Punjabi text |
|
- **Text Detection**: Develop text region detection algorithms |
|
- **Document Layout Analysis**: Analyze document structure and layout |
|
- **Multilingual NLP**: Include Punjabi in multilingual language models |
|
- **Research**: Academic research in OCR and document processing |
|
|
|
### Data Quality |
|
|
|
- High-resolution images (300 DPI) |
|
- Accurate text transcriptions |
|
- Precise bounding box annotations |
|
- Consistent formatting and structure |
|
- Quality-controlled annotation process |
|
|
|
### License |
|
|
|
Please ensure proper attribution when using this dataset. Contact the dataset creators for commercial use permissions. |
|
|
|
### Citation |
|
|
|
If you use this dataset, please cite: |
|
|
|
```bibtex
@dataset{punjabi_ocr_dataset,
  title={Punjabi OCR Dataset - rupindersingh1313/30_8_2025_dataset},
  author={Generated using Punjabi OCR Pipeline},
  year={2025},
  url={https://huggingface.co/datasets/rupindersingh1313/30_8_2025_dataset},
  note={High-quality Punjabi OCR dataset with images and annotations}
}
```
|
|
|
### Contact |
|
|
|
For questions, issues, or contributions, please contact the dataset maintainers. |
|
|