# chandra-OCR-GGUF

Chandra is a highly accurate OCR model designed to convert images and PDFs into structured outputs such as Markdown, HTML, and JSON while preserving detailed layout information. It supports over 40 languages and excels at handling complex document elements, including handwriting, tables, math expressions, forms with checkboxes, and diagrams with captions. Chandra offers flexible inference modes, with local execution via Hugging Face or remote deployment on a vLLM server, making it suitable for both interactive use and large-scale batch processing. Its strong layout preservation and multilingual capabilities make it a versatile choice for document digitization and automated content extraction workflows.
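For the remote path, the upstream model is typically served with vLLM behind an OpenAI-compatible endpoint. The sketch below is a hedged illustration of that flow, not part of this repository: the base URL, served model name, and `page.png` input are assumptions standing in for whatever your own deployment uses.

```python
# Hedged sketch: send one page image to an OpenAI-compatible vLLM endpoint
# and ask for Markdown output. Base URL, model name, and input file are
# placeholders for your own deployment.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="datalab-to/chandra",  # whatever name the server was launched with
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "OCR this page and return Markdown."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```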

## Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| chandra-BF16.gguf | BF16 | 16.4 GB |
| chandra-F16.gguf | F16 | 16.4 GB |
| chandra-F32.gguf | F32 | 32.8 GB |
| chandra-Q3_K_M.gguf | Q3_K_M | 4.12 GB |
| chandra-Q3_K_S.gguf | Q3_K_S | 3.77 GB |
| chandra-Q4_K_M.gguf | Q4_K_M | 5.03 GB |
| chandra-Q4_K_S.gguf | Q4_K_S | 4.8 GB |
| chandra-Q8_0.gguf | Q8_0 | 8.71 GB |
| chandra-mmproj-bf16.gguf | mmproj-bf16 | 1.16 GB |
| chandra-mmproj-f16.gguf | mmproj-f16 | 1.16 GB |
| chandra-mmproj-f32.gguf | mmproj-f32 | 2.31 GB |
| chandra-mmproj-q8_0.gguf | mmproj-q8_0 | 752 MB |
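
The quantized weights must be paired with one of the `mmproj` vision-projector files at load time. Below is a minimal sketch of local use, assuming a llama.cpp build with `llama-mtmd-cli` on the PATH; the chosen quant, mmproj precision, prompt, and `page.png` input are illustrative, not prescribed by this repository.

```python
# Minimal sketch: fetch one quant plus a matching mmproj file, then run a
# single-image OCR pass through llama.cpp's multimodal CLI (assumed to be
# built locally and available as `llama-mtmd-cli`).
import subprocess

from huggingface_hub import hf_hub_download

REPO_ID = "prithivMLmods/chandra-OCR-GGUF"

model_path = hf_hub_download(REPO_ID, "chandra-Q4_K_M.gguf")
mmproj_path = hf_hub_download(REPO_ID, "chandra-mmproj-f16.gguf")

subprocess.run(
    [
        "llama-mtmd-cli",
        "-m", model_path,          # quantized language model
        "--mmproj", mmproj_path,   # vision projector
        "--image", "page.png",     # document image to OCR
        "-p", "Convert this page to Markdown.",
    ],
    check=True,
)
```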

## Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similarly sized non-IQ quants.)

A handy graph by ikawrakow comparing some lower-quality quant types (lower is better) is a useful reference when choosing between them.
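
To pick a quant that fits the available memory, the Hub API can list this repository's files with their sizes before downloading anything. This is a small sketch using `huggingface_hub`; the byte-to-GB conversion is approximate.

```python
# List the GGUF files in this repository with their approximate sizes so a
# quant can be chosen to fit the available (V)RAM.
from huggingface_hub import HfApi

info = HfApi().model_info("prithivMLmods/chandra-OCR-GGUF", files_metadata=True)
for sibling in sorted(info.siblings, key=lambda s: s.size or 0):
    if sibling.rfilename.endswith(".gguf"):
        print(f"{sibling.rfilename:32s} {(sibling.size or 0) / 1e9:6.2f} GB")
```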

## Model Details

- Base model: datalab-to/chandra
- Architecture: qwen3vl
- Parameters: 8B
- Format: GGUF