---
title: PDF Data Extractor
emoji: 🧾
colorFrom: indigo
colorTo: green
sdk: gradio
sdk_version: 5.43.1
app_file: app.py
pinned: false
---

# PDF Document Layout Analysis

A Docker-powered microservice for intelligent PDF document layout analysis, OCR, and content extraction


Built with ❤️ by HURIDOCS

โญ Star us on GitHub โ€ข ๐Ÿณ Pull from Docker Hub โ€ข ๐Ÿค— View on Hugging Face

---

## 🚀 Overview

This project provides a powerful and flexible PDF analysis microservice built with **Clean Architecture** principles. The service enables OCR, segmentation, and classification of different parts of PDF pages, identifying elements such as texts, titles, pictures, tables, formulas, and more. It also determines the correct reading order of these elements and can convert PDFs to other formats, including Markdown and HTML.

### ✨ Key Features

- 🔍 **Advanced PDF Layout Analysis** - Segment and classify PDF content with high accuracy
- 🖼️ **Visual & Fast Models** - Choose between VGT (Vision Grid Transformer) for accuracy or LightGBM for speed
- 📝 **Multi-format Output** - Export to JSON, Markdown, and HTML, or visualize PDF segmentations
- 🌐 **OCR Support** - 150+ languages supported via Tesseract OCR
- 📊 **Table & Formula Extraction** - Extract tables as HTML and formulas as LaTeX
- 🏗️ **Clean Architecture** - Modular, testable, and maintainable codebase
- 🐳 **Docker-Ready** - Easy deployment with GPU support
- ⚡ **RESTful API** - Comprehensive API with 10+ endpoints
### 🔗 Project Links

- **GitHub**: [pdf-document-layout-analysis](https://github.com/huridocs/pdf-document-layout-analysis)
- **HuggingFace**: [pdf-document-layout-analysis](https://huggingface.co/HURIDOCS/pdf-document-layout-analysis)
- **DockerHub**: [pdf-document-layout-analysis](https://hub.docker.com/r/huridocs/pdf-document-layout-analysis/)

---

## 🚀 Quick Start

### 1. Start the Service

**With GPU support (recommended for better performance):**

```bash
make start
```

**Without GPU support:**

```bash
make start_no_gpu
```

The service will be available at `http://localhost:5060`.

**Check service status:**

```bash
curl http://localhost:5060/info
```

### 2. Basic PDF Analysis

**Analyze a PDF document (VGT model - high accuracy):**

```bash
curl -X POST -F 'file=@/path/to/your/document.pdf' http://localhost:5060
```

**Fast analysis (LightGBM models - faster processing):**

```bash
curl -X POST -F 'file=@/path/to/your/document.pdf' -F "fast=true" http://localhost:5060
```

### 3. Stop the Service

```bash
make stop
```

> 💡 **Tip**: Replace `/path/to/your/document.pdf` with the actual path to your PDF file. The service returns a JSON response with segmented content and metadata.
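If you prefer calling the service from Python, the snippet below is a minimal sketch of the same request. It assumes the service is running locally on port 5060 and that the third-party `requests` package is installed; the file path is illustrative.

```python
# pip install requests
import requests

# Hypothetical local path; replace with your own PDF.
pdf_path = "document.pdf"

with open(pdf_path, "rb") as pdf_file:
    response = requests.post(
        "http://localhost:5060",    # analysis endpoint (VGT model)
        files={"file": pdf_file},   # multipart/form-data upload
        data={"fast": "false"},     # set to "true" for the LightGBM models
        timeout=300,                # layout analysis can take a while
    )

response.raise_for_status()
segments = response.json()          # list of segment dictionaries
print(f"Detected {len(segments)} segments")
```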
## 📋 Table of Contents

- [🚀 Quick Start](#-quick-start)
- [⚙️ Dependencies](#-dependencies)
- [📋 Requirements](#-requirements)
- [📚 API Reference](#-api-reference)
- [💡 Usage Examples](#-usage-examples)
- [🏗️ Architecture](#-architecture)
- [🤖 Models](#-models)
- [📊 Data](#-data)
- [🔧 Development](#-development)
- [📈 Benchmarks](#-benchmarks)
  - [Performance](#performance)
  - [Speed](#speed)
- [🌐 Installation of More Languages for OCR](#-installation-of-more-languages-for-ocr)
- [🔗 Related Services](#-related-services)
- [🤝 Contributing](#-contributing)

## ⚙️ Dependencies

### Required

- **Docker Desktop 4.25.0+** - [Installation Guide](https://www.docker.com/products/docker-desktop/)
- **Python 3.10+** (for local development)

### Optional

- **NVIDIA Container Toolkit** - [Installation Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) (for GPU support)

## 📋 Requirements

### System Requirements

- **RAM**: 2 GB minimum
- **GPU Memory**: 5 GB (optional; falls back to CPU if unavailable)
- **Disk Space**: 10 GB for models and dependencies
- **CPU**: Multi-core recommended for better performance

### Docker Requirements

- Docker Engine 20.10+
- Docker Compose 2.0+

## 📚 API Reference

The service provides a comprehensive RESTful API with the following endpoints:

### Core Analysis Endpoints

| Endpoint | Method | Description | Parameters |
|----------|--------|-------------|------------|
| `/` | POST | Analyze PDF layout and extract segments | `file`, `fast`, `parse_tables_and_math` |
| `/save_xml/{filename}` | POST | Analyze PDF and save XML output | `file`, `xml_file_name`, `fast` |
| `/get_xml/{filename}` | GET | Retrieve saved XML analysis | `xml_file_name` |

### Content Extraction Endpoints

| Endpoint | Method | Description | Parameters |
|----------|--------|-------------|------------|
| `/text` | POST | Extract text by content types | `file`, `fast`, `types` |
| `/toc` | POST | Extract table of contents | `file`, `fast` |
| `/toc_legacy_uwazi_compatible` | POST | Extract TOC (Uwazi compatible) | `file` |

### Format Conversion Endpoints

| Endpoint | Method | Description | Parameters |
|----------|--------|-------------|------------|
| `/markdown` | POST | Convert PDF to Markdown (includes segmentation data in zip) | `file`, `fast`, `extract_toc`, `dpi`, `output_file` |
| `/html` | POST | Convert PDF to HTML (includes segmentation data in zip) | `file`, `fast`, `extract_toc`, `dpi`, `output_file` |
| `/visualize` | POST | Visualize segmentation results on the PDF | `file`, `fast` |

### OCR & Utility Endpoints

| Endpoint | Method | Description | Parameters |
|----------|--------|-------------|------------|
| `/ocr` | POST | Apply OCR to PDF | `file`, `language` |
| `/info` | GET | Get service information | - |
| `/` | GET | Health check and system info | - |
| `/error` | GET | Test error handling | - |

### Common Parameters

- **`file`**: PDF file to process (multipart/form-data)
- **`fast`**: Use LightGBM models instead of VGT (boolean, default: `false`)
- **`parse_tables_and_math`**: Apply OCR to table regions and convert formulas to LaTeX (boolean, default: `false`)
- **`language`**: OCR language code (string, default: `"en"`)
- **`types`**: Comma-separated content types to extract (string, default: `"all"`)
- **`extract_toc`**: Include the table of contents at the beginning of the output (boolean, default: `false`)
- **`dpi`**: Image resolution for conversion (integer, default: `120`)

## 💡 Usage Examples

### Basic PDF Analysis

**Standard analysis with the VGT model:**

```bash
curl -X POST \
  -F 'file=@document.pdf' \
  http://localhost:5060
```

**Fast analysis with the LightGBM models:**

```bash
curl -X POST \
  -F 'file=@document.pdf' \
  -F 'fast=true' \
  http://localhost:5060
```

**Analysis with table and math parsing:**

```bash
curl -X POST \
  -F 'file=@document.pdf' \
  -F 'parse_tables_and_math=true' \
  http://localhost:5060
```

### Text Extraction

**Extract all text:**

```bash
curl -X POST \
  -F 'file=@document.pdf' \
  -F 'types=all' \
  http://localhost:5060/text
```

**Extract specific content types:**

```bash
curl -X POST \
  -F 'file=@document.pdf' \
  -F 'types=title,text,table' \
  http://localhost:5060/text
```
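The same extraction from Python, as a sketch under the same assumptions as the earlier snippet (a local service on port 5060 and the `requests` package); it simply prints the response body as returned by the endpoint:

```python
import requests

# Extract only titles, regular text, and tables from the document.
with open("document.pdf", "rb") as pdf_file:
    response = requests.post(
        "http://localhost:5060/text",
        files={"file": pdf_file},
        data={"types": "title,text,table"},  # comma-separated content types
        timeout=300,
    )

response.raise_for_status()
print(response.text)  # extracted text for the requested types
```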
### Format Conversion

**Convert to Markdown:**

```bash
curl -X POST http://localhost:5060/markdown \
  -F 'file=@document.pdf' \
  -F 'extract_toc=true' \
  -F 'output_file=document.md' \
  --output 'document.zip'
```

**Convert to HTML:**

```bash
curl -X POST http://localhost:5060/html \
  -F 'file=@document.pdf' \
  -F 'extract_toc=true' \
  -F 'output_file=document.html' \
  --output 'document.zip'
```

> **📋 Segmentation Data**: Format conversion endpoints automatically include detailed segmentation data in the zip output. The resulting zip file contains a `{filename}_segmentation.json` file with information about each detected document segment, including:
> - **Coordinates**: `left`, `top`, `width`, `height`
> - **Page information**: `page_number`, `page_width`, `page_height`
> - **Content**: `text` content and segment `type` (e.g., "Title", "Text", "Table", "Picture")
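The Python sketch below unpacks a conversion result and loads the bundled segmentation data. It assumes a `document.zip` produced by the `/markdown` call above and globs for the `*_segmentation.json` file rather than assuming its exact stem:

```python
import json
import zipfile
from pathlib import Path

# Unpack the zip returned by the /markdown (or /html) endpoint.
with zipfile.ZipFile("document.zip") as archive:
    archive.extractall("converted")

# Locate the bundled segmentation file without assuming its exact name.
segmentation_path = next(Path("converted").glob("*_segmentation.json"))

segments = json.loads(segmentation_path.read_text(encoding="utf-8"))
for segment in segments[:5]:  # preview the first few segments
    print(segment["page_number"], segment["type"], segment["text"][:60])
```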
### OCR Processing

**OCR in English:**

```bash
curl -X POST \
  -F 'file=@scanned_document.pdf' \
  -F 'language=en' \
  http://localhost:5060/ocr \
  --output ocr_processed.pdf
```

**OCR in other languages:**

```bash
# French
curl -X POST \
  -F 'file=@document_french.pdf' \
  -F 'language=fr' \
  http://localhost:5060/ocr \
  --output ocr_french.pdf

# Spanish
curl -X POST \
  -F 'file=@document_spanish.pdf' \
  -F 'language=es' \
  http://localhost:5060/ocr \
  --output ocr_spanish.pdf
```

### Visualization

**Generate visualization PDF:**

```bash
curl -X POST \
  -F 'file=@document.pdf' \
  http://localhost:5060/visualize \
  --output visualization.pdf
```

### Table of Contents Extraction

**Extract structured TOC:**

```bash
curl -X POST \
  -F 'file=@document.pdf' \
  http://localhost:5060/toc
```

### XML Storage and Retrieval

**Analyze and save XML:**

```bash
curl -X POST \
  -F 'file=@document.pdf' \
  http://localhost:5060/save_xml/my_analysis
```

**Retrieve saved XML:**

```bash
curl http://localhost:5060/get_xml/my_analysis.xml
```

### Service Information

**Get service info and supported languages:**

```bash
curl http://localhost:5060/info
```

**Health check:**

```bash
curl http://localhost:5060/
```

### Response Format

Most endpoints return JSON with segment information:

```json
[
  {
    "left": 72.0,
    "top": 84.0,
    "width": 451.2,
    "height": 23.04,
    "page_number": 1,
    "page_width": 595.32,
    "page_height": 841.92,
    "text": "Document Title",
    "type": "Title"
  },
  {
    "left": 72.0,
    "top": 120.0,
    "width": 451.2,
    "height": 200.0,
    "page_number": 1,
    "page_width": 595.32,
    "page_height": 841.92,
    "text": "This is the main text content...",
    "type": "Text"
  }
]
```

### Supported Content Types

- `Caption` - Image and table captions
- `Footnote` - Footnote text
- `Formula` - Mathematical formulas
- `List item` - List items and bullet points
- `Page footer` - Footer content
- `Page header` - Header content
- `Picture` - Images and figures
- `Section header` - Section headings
- `Table` - Table content
- `Text` - Regular text paragraphs
- `Title` - Document and section titles
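To give a sense of how this response is typically consumed, here is a small Python sketch that groups segments by page and pulls out the titles. The field names are exactly those shown in the response above; the `summarize` helper itself is illustrative:

```python
from collections import defaultdict

def summarize(segments: list[dict]) -> None:
    """Group segments by page and print any titles found."""
    pages = defaultdict(list)
    for segment in segments:
        pages[segment["page_number"]].append(segment)

    for page_number in sorted(pages):
        titles = [s["text"] for s in pages[page_number] if s["type"] == "Title"]
        print(f"Page {page_number}: {len(pages[page_number])} segments, titles: {titles}")

# Example: summarize(response.json()) after a POST to the analysis endpoint.
```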
## 🏗️ Architecture

This project follows **Clean Architecture** principles, ensuring separation of concerns, testability, and maintainability. The codebase is organized into distinct layers:

### Directory Structure

```
src/
├── domain/                  # Enterprise Business Rules
│   ├── PdfImages.py         # PDF image handling domain logic
│   ├── PdfSegment.py        # PDF segment entity
│   ├── Prediction.py        # ML prediction entity
│   └── SegmentBox.py        # Core segment box entity
├── use_cases/               # Application Business Rules
│   ├── pdf_analysis/        # PDF analysis use case
│   ├── text_extraction/     # Text extraction use case
│   ├── toc_extraction/      # Table of contents extraction
│   ├── visualization/       # PDF visualization use case
│   ├── ocr/                 # OCR processing use case
│   ├── markdown_conversion/ # Markdown conversion use case
│   └── html_conversion/     # HTML conversion use case
├── adapters/                # Interface Adapters
│   ├── infrastructure/      # External service adapters
│   ├── ml/                  # Machine learning model adapters
│   ├── storage/             # File storage adapters
│   └── web/                 # Web framework adapters
├── ports/                   # Interface definitions
│   ├── services/            # Service interfaces
│   └── repositories/        # Repository interfaces
└── drivers/                 # Frameworks & Drivers
    └── web/                 # FastAPI application setup
```

### Layer Responsibilities

- **Domain Layer**: Contains core business entities and rules independent of external concerns
- **Use Cases Layer**: Orchestrates domain entities to fulfill specific application requirements
- **Adapters Layer**: Implements interfaces defined by inner layers and adapts external frameworks
- **Drivers Layer**: Contains frameworks, databases, and external agency configurations

### Key Benefits

- 🔄 **Dependency Inversion**: High-level modules don't depend on low-level modules
- 🧪 **Testability**: Easy to unit test business logic in isolation
- 🔧 **Maintainability**: Changes to external frameworks don't affect business rules
- 📈 **Scalability**: Easy to add new features without modifying existing code
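To illustrate the dependency-inversion idea in this layout, here is a deliberately simplified, hypothetical sketch; the class names below are illustrative, not the project's actual interfaces. A use case depends only on a port, and an adapter supplies the concrete implementation:

```python
from abc import ABC, abstractmethod

# ports/: interface the inner layers depend on (hypothetical example).
class OcrService(ABC):
    @abstractmethod
    def apply_ocr(self, pdf_bytes: bytes, language: str) -> bytes:
        """Return a searchable PDF produced from the input bytes."""

# use_cases/: business logic knows only the port, never the engine.
class OcrPdfUseCase:
    def __init__(self, ocr_service: OcrService):
        self._ocr_service = ocr_service

    def execute(self, pdf_bytes: bytes, language: str = "en") -> bytes:
        return self._ocr_service.apply_ocr(pdf_bytes, language)

# adapters/: a concrete implementation would wrap the real engine here.
class FakeOcrService(OcrService):
    def apply_ocr(self, pdf_bytes: bytes, language: str) -> bytes:
        return pdf_bytes  # stand-in for a real OCR engine call

use_case = OcrPdfUseCase(FakeOcrService())
```

Because the use case holds only the `OcrService` abstraction, the OCR engine can be swapped or mocked in tests without touching business logic.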
## 🤖 Models

The service offers two complementary model approaches, each optimized for different use cases:

### 1. Vision Grid Transformer (VGT) - High Accuracy Model

**Overview**: A state-of-the-art visual model developed by Alibaba Research Group that "sees" the entire page layout.

**Key Features**:

- 🎯 **High Accuracy**: Best-in-class performance on document layout analysis
- 👁️ **Visual Understanding**: Analyzes the entire page context, including spatial relationships
- 📊 **Trained on DocLayNet**: Uses the comprehensive [DocLayNet dataset](https://github.com/DS4SD/DocLayNet)
- 🔬 **Research-Backed**: Based on [Advanced Literate Machinery](https://github.com/AlibabaResearch/AdvancedLiterateMachinery)

**Resource Requirements**:

- GPU: 5 GB+ VRAM (recommended)
- CPU: Falls back automatically if GPU is unavailable
- Processing Speed: ~1.75 seconds/page (GPU, GTX 1070) or ~13.5 seconds/page (CPU, i7-8700)

### 2. LightGBM Models - Fast & Efficient

**Overview**: A lightweight ensemble of two specialized models using XML-based features from Poppler.

**Key Features**:

- ⚡ **High Speed**: ~0.42 seconds per page on CPU (i7-8700)
- 💾 **Low Resource Usage**: CPU-only, minimal memory footprint
- 🔄 **Dual Model Approach**:
  - **Token Type Classifier**: Identifies content types (title, text, table, etc.)
  - **Segmentation Model**: Determines proper content boundaries
- 📄 **XML-Based**: Uses Poppler's PDF-to-XML conversion for feature extraction

**Trade-offs**:

- Slightly lower accuracy compared to VGT
- No visual context understanding
- Excellent for batch processing and resource-constrained environments

### OCR Integration

Both models integrate seamlessly with OCR capabilities:

- **Engine**: [Tesseract OCR](https://github.com/tesseract-ocr/tesseract)
- **Processing**: [ocrmypdf](https://ocrmypdf.readthedocs.io/en/latest/index.html)
- **Languages**: 150+ supported languages
- **Output**: Searchable PDFs with preserved layout

### Model Selection Guide

| Use Case | Recommended Model | Reason |
|----------|------------------|--------|
| High accuracy requirements | VGT | Superior visual understanding |
| Batch processing | LightGBM | Faster processing, lower resources |
| GPU available | VGT | Leverages GPU acceleration |
| CPU-only environment | LightGBM | Optimized for CPU processing |
| Real-time applications | LightGBM | Consistent fast response times |
| Research/analysis | VGT | Best accuracy for detailed analysis |

## 📊 Data

### Training Dataset

Both model types are trained on the comprehensive [DocLayNet dataset](https://github.com/DS4SD/DocLayNet), a large-scale document layout analysis dataset containing over 80,000 document pages.

### Document Categories

The models can identify and classify 11 distinct content types:

| ID | Category | Description |
|----|----------|-------------|
| 1 | **Caption** | Image and table captions |
| 2 | **Footnote** | Footnote references and text |
| 3 | **Formula** | Mathematical equations and formulas |
| 4 | **List item** | Bulleted and numbered list items |
| 5 | **Page footer** | Footer content and page numbers |
| 6 | **Page header** | Header content and titles |
| 7 | **Picture** | Images, figures, and graphics |
| 8 | **Section header** | Section and subsection headings |
| 9 | **Table** | Tabular data and structures |
| 10 | **Text** | Regular paragraph text |
| 11 | **Title** | Document and chapter titles |

### Dataset Characteristics

- **Domain Coverage**: Academic papers, technical documents, reports
- **Language**: Primarily English with multilingual support
- **Quality**: High-quality annotations with bounding boxes and labels
- **Diversity**: Various document layouts, fonts, and formatting styles

For detailed information about the dataset, visit the [DocLayNet repository](https://github.com/DS4SD/DocLayNet).

## 🔧 Development

### Local Development Setup

1. **Clone the repository:**

   ```bash
   git clone https://github.com/huridocs/pdf-document-layout-analysis.git
   cd pdf-document-layout-analysis
   ```

2. **Create a virtual environment:**

   ```bash
   make install_venv
   ```

3. **Activate the environment:**

   ```bash
   make activate
   # or manually:
   source .venv/bin/activate
   ```

4. **Install dependencies:**

   ```bash
   make install
   ```

### Code Quality

**Format code:**

```bash
make formatter
```

**Check formatting:**

```bash
make check_format
```

### Testing

**Run tests:**

```bash
make test
```

**Integration tests:**

```bash
# Tests are located in src/tests/integration/
python -m pytest src/tests/integration/test_end_to_end.py
```

### Docker Development

**Build and start (detached mode):**

```bash
# With GPU
make start_detached_gpu

# Without GPU
make start_detached
```

**Clean up Docker resources:**

```bash
# Remove containers
make remove_docker_containers

# Remove images
make remove_docker_images
```

### Project Structure

```
pdf-document-layout-analysis/
├── src/                   # Source code
│   ├── domain/            # Business entities
│   ├── use_cases/         # Application logic
│   ├── adapters/          # External integrations
│   ├── ports/             # Interface definitions
│   └── drivers/           # Framework configurations
├── test_pdfs/             # Test PDF files
├── models/                # ML model storage
├── docker-compose.yml     # Docker configuration
├── Dockerfile             # Container definition
├── Makefile               # Development commands
├── pyproject.toml         # Python project configuration
└── requirements.txt       # Python dependencies
```

### Environment Variables

Key configuration options:

```bash
# OCR configuration
OCR_SOURCE=/tmp/ocr_source

# Model paths (auto-configured)
MODELS_PATH=./models

# Service configuration
HOST=0.0.0.0
PORT=5060
```

### Adding New Features

1. **Domain Logic**: Add entities in `src/domain/`
2. **Use Cases**: Implement business logic in `src/use_cases/`
3. **Adapters**: Create integrations in `src/adapters/`
4. **Ports**: Define interfaces in `src/ports/`
5. **Controllers**: Add endpoints in `src/adapters/web/`

### Debugging

**View logs:**

```bash
docker compose logs -f
```

**Access the container:**

```bash
docker exec -it pdf-document-layout-analysis /bin/bash
```

**Free up disk space:**

```bash
make free_up_space
```

### Order of Output Elements

The service returns SegmentBox elements in a carefully determined reading order:

#### Reading Order Algorithm

1. **Poppler Integration**: Uses [Poppler](https://poppler.freedesktop.org) PDF-to-XML conversion to establish the initial token reading order
2. **Segment Averaging**: Calculates the average reading order for multi-token segments
3. **Type-Based Sorting**: Prioritizes content types (illustrated in the sketch below):
   - **Headers** placed first
   - **Main content** in reading order
   - **Footers and footnotes** placed last

#### Non-Text Elements

For segments without text (e.g., images):

- Processed after text-based sorting
- Positioned based on proximity to the nearest text segment
- Uses spatial distance as the primary criterion
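The following Python sketch illustrates the type-based pass only. It is not the actual implementation, and the bucket assignments and the `reading_order` field are assumptions based on the description above:

```python
def reading_order_key(segment: dict) -> tuple:
    """Sort key: headers first, footers/footnotes last, body in between."""
    first = {"Page header"}
    last = {"Page footer", "Footnote"}
    if segment["type"] in first:
        bucket = 0
    elif segment["type"] in last:
        bucket = 2
    else:
        bucket = 1
    # Within a bucket, keep the previously computed reading order.
    return (segment["page_number"], bucket, segment["reading_order"])

# Hypothetical input: segments annotated with a reading_order value
# derived from Poppler token order, as described above.
segments = [
    {"type": "Page footer", "page_number": 1, "reading_order": 5},
    {"type": "Page header", "page_number": 1, "reading_order": 0},
    {"type": "Text", "page_number": 1, "reading_order": 2},
]
print(sorted(segments, key=reading_order_key))
```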
### Advanced Table and Formula Extraction

#### Default Behavior

- **Formulas**: Automatically extracted in LaTeX format in the `text` property
- **Tables**: Basic text extraction included by default

#### Enhanced Table Extraction

Extract tables in HTML format by setting `parse_tables_and_math=true`:

```bash
curl -X POST -F 'file=@document.pdf' -F 'parse_tables_and_math=true' http://localhost:5060
```

#### Extraction Engines

- **Formulas**: [LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR)
- **Tables**: [RapidTable](https://github.com/RapidAI/RapidTable)

## 📈 Benchmarks

### Performance

VGT model performance on the PubLayNet dataset:

| Metric | Overall | Text | Title | List | Table | Figure |
|--------|---------|------|-------|------|-------|--------|
| **F1 Score** | **0.962** | 0.950 | 0.939 | 0.968 | 0.981 | 0.971 |

> 📊 **Comparison**: View comprehensive model comparisons at [Papers With Code](https://paperswithcode.com/sota/document-layout-analysis-on-publaynet-val)

### Speed

Performance benchmarks on 15-page academic documents:

| Model | Hardware | Speed (sec/page) | Use Case |
|-------|----------|------------------|----------|
| **LightGBM** | CPU (i7-8700, 3.2 GHz) | **0.42** | Fast processing |
| **VGT** | GPU (GTX 1070) | **1.75** | High accuracy |
| **VGT** | CPU (i7-8700, 3.2 GHz) | 13.5 | CPU fallback |

### Performance Recommendations

- **GPU Available**: Use VGT for the best accuracy-speed balance
- **CPU Only**: Use LightGBM for optimal performance
- **Batch Processing**: LightGBM for consistent throughput
- **High Accuracy**: VGT with GPU for best results

## 🌐 Installation of More Languages for OCR

The service uses Tesseract OCR with support for 150+ languages. The Docker image includes only common languages to minimize image size.

### Installing Additional Languages

#### 1. Access the Container

```bash
docker exec -it --user root pdf-document-layout-analysis /bin/bash
```

#### 2. Install Language Packs

```bash
# Install a specific language
apt-get update
apt-get install tesseract-ocr-[LANGCODE]
```

#### 3. Common Language Examples

```bash
# Korean
apt-get install tesseract-ocr-kor

# German
apt-get install tesseract-ocr-deu

# French
apt-get install tesseract-ocr-fra

# Spanish
apt-get install tesseract-ocr-spa

# Chinese Simplified
apt-get install tesseract-ocr-chi-sim

# Arabic
apt-get install tesseract-ocr-ara

# Japanese
apt-get install tesseract-ocr-jpn
```

#### 4. Verify Installation

```bash
curl http://localhost:5060/info
```

### Language Code Reference

Find Tesseract language codes in the [ISO to Tesseract mapping](https://github.com/huridocs/pdf-document-layout-analysis/blob/main/src/adapters/infrastructure/ocr/languages.py).

### Supported Languages

Common language codes:

- `eng` - English
- `fra` - French
- `deu` - German
- `spa` - Spanish
- `ita` - Italian
- `por` - Portuguese
- `rus` - Russian
- `chi-sim` - Chinese Simplified
- `chi-tra` - Chinese Traditional
- `jpn` - Japanese
- `kor` - Korean
- `ara` - Arabic
- `hin` - Hindi

### Usage with Multiple Languages

```bash
# OCR with a specific language
curl -X POST \
  -F 'file=@document.pdf' \
  -F 'language=fr' \
  http://localhost:5060/ocr \
  --output french_ocr.pdf
```
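For OCRing several documents in different languages, a small Python loop can replace repeated curl calls. This is a sketch with the same local-service and `requests` assumptions as the earlier snippets; the file names and language pairs are illustrative:

```python
import requests

# Illustrative (file, language) pairs; adjust to your documents.
jobs = [
    ("document_french.pdf", "fr"),
    ("document_spanish.pdf", "es"),
]

for pdf_path, language in jobs:
    with open(pdf_path, "rb") as pdf_file:
        response = requests.post(
            "http://localhost:5060/ocr",
            files={"file": pdf_file},
            data={"language": language},
            timeout=600,
        )
    response.raise_for_status()
    output_path = f"ocr_{language}_{pdf_path}"
    with open(output_path, "wb") as out_file:
        out_file.write(response.content)  # searchable PDF bytes
    print(f"Wrote {output_path}")
```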
## 🔗 Related Services

Explore our ecosystem of PDF processing services built on this foundation:

### [PDF Table of Contents Extractor](https://github.com/huridocs/pdf-table-of-contents-extractor)

🔍 **Purpose**: Intelligent extraction of structured tables of contents from PDF documents

**Key Features**:

- Leverages layout analysis for accurate TOC identification
- Hierarchical structure recognition
- Multiple output formats supported
- Integration-ready API

### [PDF Text Extraction](https://github.com/huridocs/pdf-text-extraction)

📝 **Purpose**: Advanced text extraction with layout awareness

**Key Features**:

- Content-type aware extraction
- Preserves document structure
- Reading order optimization
- Clean text output with metadata

### Integration Benefits

These services work seamlessly together:

- **Shared Analysis**: Reuse layout analysis results across services
- **Consistent Output**: Standardized JSON format for easy integration
- **Scalable Architecture**: Deploy services independently or together
- **Docker Ready**: All services containerized for easy deployment

## 🤝 Contributing

We welcome contributions to improve the PDF Document Layout Analysis service!

### How to Contribute

1. **Fork the Repository**

   ```bash
   git clone https://github.com/your-username/pdf-document-layout-analysis.git
   ```

2. **Create a Feature Branch**

   ```bash
   git checkout -b feature/your-feature-name
   ```

3. **Set Up the Development Environment**

   ```bash
   make install_venv
   make install
   ```

4. **Make Your Changes**
   - Follow the Clean Architecture principles
   - Add tests for new features
   - Update documentation as needed

5. **Run Tests and Quality Checks**

   ```bash
   make test
   make check_format
   ```

6. **Submit a Pull Request**
   - Provide a clear description of the changes
   - Include test results
   - Reference any related issues

### Contribution Guidelines

#### Code Standards

- **Python**: Follow PEP 8 with a 125-character line length
- **Architecture**: Maintain Clean Architecture boundaries
- **Testing**: Include unit tests for new functionality
- **Documentation**: Update the README and docstrings

#### Areas for Contribution

- 🐛 **Bug Fixes**: Report and fix issues
- ✨ **New Features**: Add new endpoints or functionality
- 📚 **Documentation**: Improve guides and examples
- 🧪 **Testing**: Expand test coverage
- 🚀 **Performance**: Optimize processing speed
- 🌐 **Internationalization**: Add language support

#### Development Workflow

1. **Issue First**: Create or comment on relevant issues
2. **Small PRs**: Keep pull requests focused and manageable
3. **Clean Commits**: Use descriptive commit messages
4. **Documentation**: Update relevant documentation
5. **Testing**: Ensure all tests pass

### Getting Help

- 📚 **Documentation**: Check this README and the inline docs
- 💬 **Issues**: Search existing issues or create new ones
- 🔍 **Code**: Explore the codebase structure
- 📧 **Contact**: Reach out to the maintainers for guidance

---

### License

This project is licensed under the terms specified in the [LICENSE](LICENSE) file.