# vLLM Scripts Development Notes
## Repository Purpose
This repository contains UV scripts for vLLM-based inference tasks, focused on GPU-accelerated inference using vLLM's optimized engine.
## Key Patterns
### 1. GPU Requirements
All scripts MUST check for GPU availability:
```python
import logging
import sys
import torch

logger = logging.getLogger(__name__)

if not torch.cuda.is_available():
    logger.error("CUDA is not available. This script requires a GPU.")
    sys.exit(1)
```
### 2. vLLM Docker Image
Always use the `vllm/vllm-openai:latest` image for HF Jobs; it has vLLM and its dependencies pre-installed.
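A minimal launch sketch, assuming the Jobs API in recent `huggingface_hub` releases (`run_job`); the script path and flavor below are placeholders, so verify parameter names and flavor strings against your installed version:
```python
# Hedged sketch: assumes huggingface_hub's run_job helper (recent releases).
# The flavor string and script path are placeholders, not verified values.
from huggingface_hub import run_job

job = run_job(
    image="vllm/vllm-openai:latest",            # pre-built vLLM image
    command=["python", "classify-dataset.py"],  # script from this repo
    flavor="l4x1",                              # hypothetical GPU flavor; check the HF Jobs docs
)
print(job.id)  # job identifier for tracking
```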
### 3. Dependencies
Declare the custom PyPI indexes for FlashInfer and vLLM nightly wheels inside the script's PEP 723 inline metadata block (the `dependencies` line below is illustrative):
```python
# /// script
# dependencies = ["vllm", "flashinfer-python"]  # assumed package names; adjust to your script
#
# [[tool.uv.index]]
# url = "https://flashinfer.ai/whl/cu126/torch2.6"
#
# [[tool.uv.index]]
# url = "https://wheels.vllm.ai/nightly"
# ///
```
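With the block in place, `uv run script.py` resolves the declared dependencies against these extra indexes automatically before executing the script.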
## Current Scripts
1. **classify-dataset.py**: BERT-style text classification
- Uses vLLM's classify task
- Supports batch processing with configurable size
   - Automatically extracts label mappings from the model config (see the sketch below)
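A condensed sketch of that pattern, assuming a hypothetical model ID and an inline prompt (the real script adds argument parsing and batching):
```python
# Sketch of vLLM's classify task; the model ID and text are placeholders.
from transformers import AutoConfig
from vllm import LLM

model_id = "your-org/your-bert-classifier"  # hypothetical checkpoint
llm = LLM(model=model_id, task="classify")

# Label names come from the Hugging Face model config's id2label mapping.
id2label = AutoConfig.from_pretrained(model_id).id2label

for output in llm.classify(["an example text to classify"]):
    probs = output.outputs.probs  # one probability per label
    best = max(range(len(probs)), key=probs.__getitem__)
    print(id2label[best], probs[best])
```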
## Future Scripts
Potential additions:
- Text generation with vLLM
- Embedding generation using sentence transformers
- Multi-modal inference
- Structured output generation
## Testing
Local testing requires a GPU. When no local GPU is available:
1. Use HF Jobs with small test datasets
2. Verify the script compiles without syntax errors: `python -m py_compile script.py`
3. Check that dependencies resolve (e.g. `uv lock --script script.py`)
## Performance Considerations
- Default batch size: 10,000 locally; up to 100,000 on HF Jobs
- L4 GPUs are cost-effective for classification workloads
- Monitor GPU memory usage and adjust the batch size accordingly (see the sketch below)
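A minimal batching sketch for that last point, reusing the `llm.classify` call from the sketch above; `torch.cuda.max_memory_allocated` reports the peak allocation since the last reset:
```python
import torch

def classify_in_batches(llm, texts, batch_size=10_000):
    """Classify texts in fixed-size chunks, logging peak GPU memory per chunk."""
    results = []
    for start in range(0, len(texts), batch_size):
        results.extend(llm.classify(texts[start:start + batch_size]))
        peak_gib = torch.cuda.max_memory_allocated() / 1024**3
        print(f"batch {start // batch_size}: peak GPU memory {peak_gib:.2f} GiB")
        torch.cuda.reset_peak_memory_stats()  # fresh peak for the next chunk
    return results
```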