
# vLLM Scripts Development Notes

## Repository Purpose

This repository contains UV scripts for vLLM-based inference tasks. The focus is GPU-accelerated inference using vLLM's optimized engine.

## Key Patterns

### 1. GPU Requirements

All scripts MUST check for GPU availability:

```python
if not torch.cuda.is_available():
    logger.error("CUDA is not available. This script requires a GPU.")
    sys.exit(1)
```

### 2. vLLM Docker Image

Always use `vllm/vllm-openai:latest` for HF Jobs; it has all dependencies pre-installed.
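
For reference, a script can be submitted with this image from Python. A minimal sketch, assuming the Jobs API in recent `huggingface_hub` releases exposes `run_uv_job` (the exact keyword arguments and flavor names are assumptions; verify against the installed version and the Jobs docs):

```python
# Sketch only: submit a UV script to HF Jobs on the vLLM image.
# `run_uv_job` and its kwargs are assumed from recent huggingface_hub
# releases with Jobs support; check your installed version.
from huggingface_hub import run_uv_job

job = run_uv_job(
    "classify-dataset.py",
    image="vllm/vllm-openai:latest",  # all vLLM dependencies pre-installed
    flavor="l4x1",                    # assumed flavor name for a single L4 GPU
)
```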

### 3. Dependencies

Include custom PyPI indexes for vLLM and FlashInfer:

```python
# [[tool.uv.index]]
# url = "https://flashinfer.ai/whl/cu126/torch2.6"
#
# [[tool.uv.index]]
# url = "https://wheels.vllm.ai/nightly"
```
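
In a UV script, these indexes belong in the PEP 723 inline metadata block alongside the dependencies. A sketch of the full block (the `flashinfer-python` package name and the Python pin are assumptions; adjust to what the script actually needs):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm",
#     "flashinfer-python",  # assumed package name; check the FlashInfer docs
# ]
#
# [[tool.uv.index]]
# url = "https://flashinfer.ai/whl/cu126/torch2.6"
#
# [[tool.uv.index]]
# url = "https://wheels.vllm.ai/nightly"
# ///
```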

## Current Scripts

1. **`classify-dataset.py`**: BERT-style text classification
   - Uses vLLM's `classify` task
   - Supports batch processing with a configurable batch size
   - Automatically extracts label mappings from the model config (see the sketch below)
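
For orientation, the core of that flow looks roughly like this. A minimal sketch, with `user/my-classifier` as a hypothetical model ID, using vLLM's `classify` task:

```python
from transformers import AutoConfig
from vllm import LLM

MODEL = "user/my-classifier"  # hypothetical model ID

# The label mapping lives in the model config's id2label field.
id2label = AutoConfig.from_pretrained(MODEL).id2label

llm = LLM(model=MODEL, task="classify")
outputs = llm.classify(["an example text to classify"])

for output in outputs:
    probs = output.outputs.probs  # one probability per label
    print(id2label[probs.index(max(probs))])
```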

## Future Scripts

Potential additions:

- Text generation with vLLM
- Embedding generation using sentence transformers
- Multi-modal inference
- Structured output generation

## Testing

Local testing requires a GPU. For scripts without local GPU access:

1. Use HF Jobs with small test datasets
2. Verify the script has no syntax errors: `python -m py_compile script.py`
3. Check that dependencies resolve, e.g. `uv lock --script script.py`

## Performance Considerations

- Default batch size: 10,000 locally, up to 100,000 on HF Jobs
- L4 GPUs are cost-effective for classification
- Monitor GPU memory usage and adjust batch sizes accordingly (see the sketch below)
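
As a rough illustration of the batching pattern (reusing `llm` from the classification sketch above; the dataset name and `text` column are placeholders):

```python
from datasets import load_dataset

BATCH_SIZE = 10_000  # local default; can go toward 100,000 on HF Jobs

dataset = load_dataset("user/my-dataset", split="train")  # placeholder dataset
texts = dataset["text"]

results = []
for start in range(0, len(texts), BATCH_SIZE):
    # If GPU memory is tight, lower BATCH_SIZE; this mainly bounds how much
    # work is handed to vLLM at once, since vLLM schedules requests internally.
    results.extend(llm.classify(texts[start : start + BATCH_SIZE]))
```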