# rupindersingh1313/30_8_2025_dataset
## Dataset Description
This dataset contains Punjabi OCR data with page images and their corresponding text annotations, ready for machine learning applications.
### Dataset Summary
- **Language**: Punjabi (pa-IN)
- **Script**: Gurmukhi
- **Total Pages**: 769
- **Source**: Generated using Punjabi OCR annotation pipeline
- **Format**: Image-annotation pairs with original JSON annotations
### Dataset Splits
- **Train**: 615 samples
- **Validation**: 76 samples
- **Test**: 78 samples
The dataset is split into train/validation/test sets at an 80/10/10 ratio:
- Training set for model training
- Validation set for hyperparameter tuning and model selection
- Test set for final evaluation
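The 80/10/10 split can be reproduced on any list of samples with a seeded shuffle. The sketch below is a generic illustration in plain Python; the seed and function name are hypothetical, not the pipeline's actual procedure:

```python
import random

def split_80_10_10(samples, seed=42):
    """Shuffle and split samples into train/validation/test at an
    80/10/10 ratio. The seed here is illustrative, not the one the
    original pipeline used."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return {
        "train": shuffled[:n_train],
        "validation": shuffled[n_train:n_train + n_val],
        "test": shuffled[n_train + n_val:],  # remainder goes to test
    }

# With 769 pages this yields the sizes listed above: 615 / 76 / 78.
splits = split_80_10_10(list(range(769)))
print({name: len(part) for name, part in splits.items()})
```

Note that integer truncation plus a "remainder to test" rule is what makes the test split (78) slightly larger than validation (76).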
### Dataset Structure
Each row contains:
- `image`: The page image (PNG format, high resolution)
- `annotation`: Complete OCR annotation in JSON format (as string)
The annotation JSON contains the original structure with:
- Document metadata (language, script, image dimensions)
- Text hierarchy (regions, lines, words)
- Bounding box coordinates for all text elements
- Complete text transcription
### Usage
```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("rupindersingh1313/30_8_2025_dataset")

# Access the splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Inspect the first training sample
sample = train_data[0]
image = sample["image"]
annotation = json.loads(sample["annotation"])  # parse the JSON string

print(f"Image size: {image.size}")
print(f"Annotation keys: {list(annotation.keys())}")
```
### Annotation Format
The annotation field contains JSON with this structure:
```json
{
  "document": {
    "id": "doc_001",
    "language": "pa-IN",
    "script": "Gurmukhi",
    "image": {"width": 2481, "height": 3507, "dpi": 300}
  },
  "hierarchy": {
    "regions": [
      {
        "region_id": 1,
        "type": "text_block",
        "polygon": [x1, y1, x2, y2, ...],
        "lines": [
          {
            "line_id": 1,
            "polygon": [x1, y1, x2, y2, ...],
            "words": [
              {
                "word_id": 1,
                "text": "ਪੰਜਾਬੀ",
                "polygon": [x1, y1, x2, y2, ...]
              }
            ]
          }
        ]
      }
    ]
  }
}
```
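Given the schema above, the full transcription and per-word boxes can be recovered by walking regions, then lines, then words. A minimal sketch (the helper names are illustrative, and the axis-aligned bounding box is derived here from the polygon's coordinate extremes):

```python
import json

def iter_words(annotation):
    """Yield every word dict by walking regions -> lines -> words."""
    for region in annotation["hierarchy"]["regions"]:
        for line in region["lines"]:
            yield from line["words"]

def polygon_to_bbox(polygon):
    """Convert a flat [x1, y1, x2, y2, ...] polygon to (left, top, right, bottom)."""
    xs, ys = polygon[0::2], polygon[1::2]
    return min(xs), min(ys), max(xs), max(ys)

# Toy annotation following the documented schema (values are made up)
sample_annotation = json.loads("""
{
  "document": {"id": "doc_001", "language": "pa-IN", "script": "Gurmukhi",
               "image": {"width": 2481, "height": 3507, "dpi": 300}},
  "hierarchy": {"regions": [{"region_id": 1, "type": "text_block",
    "polygon": [0, 0, 100, 0, 100, 50, 0, 50],
    "lines": [{"line_id": 1, "polygon": [0, 0, 100, 0, 100, 50, 0, 50],
      "words": [{"word_id": 1, "text": "\\u0a2a\\u0a70\\u0a1c\\u0a3e\\u0a2c\\u0a40",
                 "polygon": [10, 5, 90, 5, 90, 45, 10, 45]}]}]}]}
}
""")

words = list(iter_words(sample_annotation))
text = " ".join(w["text"] for w in words)
print(text)                                   # full transcription
print(polygon_to_bbox(words[0]["polygon"]))   # (10, 5, 90, 45)
```

A bounding box in this form can be passed directly to `image.crop(...)` on the PIL image returned by the dataset to extract word-level training crops.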
### Use Cases
This dataset is suitable for:
- **OCR Model Training**: Train custom OCR models for Punjabi text
- **Text Detection**: Develop text region detection algorithms
- **Document Layout Analysis**: Analyze document structure and layout
- **Multilingual NLP**: Include Punjabi in multilingual language models
- **Research**: Academic research in OCR and document processing
### Data Quality
- High-resolution images (300 DPI)
- Accurate text transcriptions
- Precise bounding box annotations
- Consistent formatting and structure
- Quality-controlled annotation process
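A basic consistency check over one annotation, e.g. polygons inside the image bounds and non-empty word text, might look like the sketch below. The checks are illustrative assumptions, not the pipeline's actual QA process:

```python
def check_annotation(annotation):
    """Return a list of problems found; an empty list means the checks pass."""
    problems = []
    img = annotation["document"]["image"]
    width, height = img["width"], img["height"]

    def check_polygon(polygon, label):
        if len(polygon) % 2 != 0:
            problems.append(f"{label}: odd number of coordinates")
            return
        xs, ys = polygon[0::2], polygon[1::2]
        if not all(0 <= x <= width for x in xs) or not all(0 <= y <= height for y in ys):
            problems.append(f"{label}: polygon outside image bounds")

    for region in annotation["hierarchy"]["regions"]:
        check_polygon(region["polygon"], f"region {region['region_id']}")
        for line in region["lines"]:
            check_polygon(line["polygon"], f"line {line['line_id']}")
            for word in line["words"]:
                check_polygon(word["polygon"], f"word {word['word_id']}")
                if not word["text"].strip():
                    problems.append(f"word {word['word_id']}: empty text")
    return problems

# Minimal valid example (values are made up)
sample = {
    "document": {"image": {"width": 100, "height": 100, "dpi": 300}},
    "hierarchy": {"regions": [{"region_id": 1,
        "polygon": [0, 0, 100, 0, 100, 100, 0, 100],
        "lines": [{"line_id": 1,
            "polygon": [0, 0, 100, 0, 100, 100, 0, 100],
            "words": [{"word_id": 1, "text": "ਪੰਜਾਬੀ",
                       "polygon": [10, 10, 90, 10, 90, 90, 10, 90]}]}]}]}
}
print(check_annotation(sample))  # []
```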
### License
Please ensure proper attribution when using this dataset. Contact the dataset creators for commercial use permissions.
### Citation
If you use this dataset, please cite:
```bibtex
@dataset{punjabi_ocr_dataset,
  title  = {Punjabi OCR Dataset - rupindersingh1313/30_8_2025_dataset},
  author = {Generated using Punjabi OCR Pipeline},
  year   = {2025},
  url    = {https://huggingface.co/datasets/rupindersingh1313/30_8_2025_dataset},
  note   = {High-quality Punjabi OCR dataset with images and annotations}
}
```
### Contact
For questions, issues, or contributions, please contact the dataset maintainers.