DeepSeek-OCR – Apple Metal Performance Shaders (MPS) & CPU Support

This repository uses the weights from the original DeepSeek-OCR and modifies the model to support MPS and CPU inference.

Usage

Inference using Hugging Face Transformers on Metal Performance Shaders (MPS) and CPU. Requirements were tested on Python 3.12.9:

git clone git@hf.co:Dogacel/DeepSeek-OCR-Metal-MPS
cd DeepSeek-OCR-Metal-MPS/demo

# Use mamba or conda
mamba create -n deepseek-ocr python=3.12.9 -y
mamba activate deepseek-ocr
pip install -r requirements.txt

python run_dpsk_ocr.py

The script run_dpsk_ocr.py looks like this:
from transformers import AutoModel, AutoTokenizer
import torch

model_name = 'Dogacel/DeepSeek-OCR-Metal-MPS'

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModel.from_pretrained(
    model_name,
    _attn_implementation='eager',
    trust_remote_code=True,
    use_safetensors=True,
)

device = torch.device("mps")
dtype = torch.float16

model = model.eval().to(device).to(dtype)

prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'image.png'
output_path = 'results4'

res = model.infer(
    tokenizer,
    device=device,
    dtype=dtype,
    prompt=prompt,
    image_file=image_file,
    output_path=output_path,
    # base_size, image_size, and crop_mode select the input resolution / tiling
    # mode, following the options documented in the original DeepSeek-OCR card.
    base_size=1024,
    image_size=640,
    crop_mode=False,
    save_results=True,   # write OCR results to output_path
    test_compress=True,
)
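
The script above pins the model to MPS with float16. For machines without Metal support, here is a minimal sketch of the same setup with a CPU fallback (an assumption based on the repository's stated CPU support; float32 is used on CPU because half precision is slow or unsupported for some CPU ops):

import torch

# Prefer Apple's Metal backend when available, otherwise fall back to CPU.
if torch.backends.mps.is_available():
    device = torch.device("mps")
    dtype = torch.float16
else:
    device = torch.device("cpu")
    dtype = torch.float32

model = model.eval().to(device).to(dtype)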

vLLM

vLLM integration hasn't been tested yet.

Refer to the 🌟GitHub repository for guidance on model inference acceleration, PDF processing, etc.

uv venv
source .venv/bin/activate
# Until the v0.11.1 release, install vLLM from the nightly build
uv pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly

Then, in Python:
from vllm import LLM, SamplingParams
from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor
from PIL import Image

# Create model instance
llm = LLM(
    model="Dogacel/DeepSeek-OCR-Metal-MPS",
    enable_prefix_caching=False,
    mm_processor_cache_gb=0,
    logits_processors=[NGramPerReqLogitsProcessor]
)

# Prepare batched input with your image file
image_1 = Image.open("path/to/your/image_1.png").convert("RGB")
image_2 = Image.open("path/to/your/image_2.png").convert("RGB")
prompt = "<image>\nFree OCR."

model_input = [
    {
        "prompt": prompt,
        "multi_modal_data": {"image": image_1}
    },
    {
        "prompt": prompt,
        "multi_modal_data": {"image": image_2}
    }
]

sampling_param = SamplingParams(
    temperature=0.0,
    max_tokens=8192,
    # ngram logit processor args
    extra_args=dict(
        ngram_size=30,
        window_size=90,
        whitelist_token_ids={128821, 128822},  # whitelist: <td>, </td>
    ),
    skip_special_tokens=False,
)
# Generate output
model_outputs = llm.generate(model_input, sampling_param)

# Print output
for output in model_outputs:
    print(output.outputs[0].text)
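
The vLLM example uses the plain "Free OCR." prompt. The grounded markdown-conversion prompt from the transformers example above should plug into the same batched input (untested, like the rest of the vLLM path here):

# Same batched input, but with the grounded markdown prompt from the
# transformers example above.
prompt = "<image>\n<|grounding|>Convert the document to markdown. "

model_input = [
    {"prompt": prompt, "multi_modal_data": {"image": image_1}},
    {"prompt": prompt, "multi_modal_data": {"image": image_2}},
]
model_outputs = llm.generate(model_input, sampling_param)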
