
---
viewer: false
tags: [uv-script, vllm, gpu, inference]
---
# vLLM Inference Scripts

Ready-to-run UV scripts for GPU-accelerated inference using [vLLM](https://github.com/vllm-project/vllm).

These scripts use [UV's inline script metadata](https://docs.astral.sh/uv/guides/scripts/) to manage their own dependencies - just run them with `uv run` and everything installs automatically!
## Available Scripts
### classify-dataset.py

Batch text classification using BERT-style encoder models (e.g., BERT, RoBERTa, DeBERTa, ModernBERT) with vLLM's optimized inference engine.

**Note**: This script is specifically for encoder-only classification models, not generative LLMs.

**Features:**

- High-throughput batch processing
- Automatic label mapping from the model config
- Confidence scores for predictions
- Direct integration with the Hugging Face Hub
**Usage:**

```bash
# Local execution (requires GPU)
uv run classify-dataset.py \
    davanstrien/ModernBERT-base-is-new-arxiv-dataset \
    username/input-dataset \
    username/output-dataset \
    --inference-column text \
    --batch-size 10000
```
**HF Jobs execution:**

```bash
hf jobs uv run \
    --flavor l4x1 \
    --image vllm/vllm-openai \
    https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
    davanstrien/ModernBERT-base-is-new-arxiv-dataset \
    username/input-dataset \
    username/output-dataset \
    --inference-column text \
    --batch-size 100000
```
### generate-responses.py

Generate responses for prompts using generative LLMs (e.g., Llama, Qwen, Mistral) with vLLM's high-performance inference engine.

**Features:**

- Automatic chat template application
- Support for both chat messages and plain text prompts
- Multi-GPU tensor parallelism support
- Smart filtering for prompts exceeding the context length
- Comprehensive dataset cards with generation metadata
- HF Transfer enabled for fast model downloads
- Full control over sampling parameters
- Sample limiting with `--max-samples` for testing
**Usage:**

```bash
# With chat-formatted messages (default)
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --messages-column messages \
    --max-tokens 1024

# With plain text prompts (NEW!)
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --prompt-column question \
    --max-tokens 1024 \
    --max-samples 100

# With a custom model and parameters
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --model-id meta-llama/Llama-3.1-8B-Instruct \
    --prompt-column text \
    --temperature 0.9 \
    --top-p 0.95 \
    --max-model-len 8192
```
**HF Jobs execution (multi-GPU):**

```bash
hf jobs uv run \
    --flavor l4x4 \
    --image vllm/vllm-openai \
    -e UV_PRERELEASE=if-necessary \
    -s HF_TOKEN=hf_*** \
    https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
    davanstrien/cards_with_prompts \
    davanstrien/test-generated-responses \
    --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
    --gpu-memory-utilization 0.9 \
    --max-tokens 600 \
    --max-model-len 8000
```
### Multi-GPU Tensor Parallelism

- Auto-detects available GPUs by default
- Use `--tensor-parallel-size` to specify manually (see the sketch below)
- Required for models larger than a single GPU's memory (e.g., 30B+ models)
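
For example, sharding a large model across four GPUs explicitly might look like the following sketch (dataset names and the model ID are placeholders; the flags are the ones documented above):

```bash
# Sketch: shard a large model across 4 GPUs with tensor parallelism.
# Dataset names and model ID are placeholders - substitute your own.
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
    --tensor-parallel-size 4 \
    --max-tokens 1024
```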
### Handling Long Contexts

The generate-responses.py script includes smart prompt filtering (see the example after this list):

- **Default behavior**: prompts exceeding `max_model_len` are skipped
- **Use `--max-model-len`**: limit the context window to reduce memory usage
- **Use `--no-skip-long-prompts`**: fail on long prompts instead of skipping them
- Skipped prompts receive empty responses and are logged
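
As a rough sketch (dataset names are placeholders), capping the context window and failing loudly on any prompt that still does not fit might look like:

```bash
# Sketch: cap the context window and error out on over-length prompts
# instead of skipping them. Dataset names are placeholders.
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --max-model-len 4096 \
    --no-skip-long-prompts \
    --max-tokens 512
```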
## About vLLM

vLLM is a high-throughput inference engine optimized for:

- Fast model serving with PagedAttention
- Efficient batch processing
- Support for various model architectures
- Seamless integration with Hugging Face models
## Technical Details

### UV Script Benefits

- **Zero setup**: Dependencies install automatically on first run
- **Reproducible**: Locked dependencies ensure consistent behavior
- **Self-contained**: Everything needed is in the script file
- **Direct execution**: Run from local files or URLs (see the example below)
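
For instance, `uv run` can execute a script straight from its Hub URL without cloning anything. A minimal sketch, mirroring the usage shown earlier (dataset names are placeholders):

```bash
# Sketch: run the script directly from its URL on the Hub.
# Dataset names are placeholders.
uv run https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --messages-column messages \
    --max-tokens 1024
```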
### Dependencies

Scripts use UV's inline metadata for automatic dependency management:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "datasets",
#     "flashinfer-python",
#     "huggingface-hub[hf_transfer]",
#     "torch",
#     "transformers",
#     "vllm",
# ]
# ///
```
For bleeding-edge features, use the `UV_PRERELEASE=if-necessary` environment variable to allow pre-release versions when needed.
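
Locally, that variable can be set inline for a single run, as in this sketch (dataset names are placeholders):

```bash
# Sketch: allow pre-release packages (e.g., a vLLM release candidate)
# for this invocation only. Dataset names are placeholders.
UV_PRERELEASE=if-necessary uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --max-tokens 1024
```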
### Docker Image

For HF Jobs, we recommend the official vLLM Docker image: `vllm/vllm-openai`

This image includes:

- Pre-installed CUDA libraries
- vLLM and all dependencies
- UV package manager
- Optimized for GPU inference
### Environment Variables

- `HF_TOKEN`: Your Hugging Face authentication token (auto-detected if logged in; see the example below)
- `UV_PRERELEASE=if-necessary`: Allow pre-release packages when required
- `HF_HUB_ENABLE_HF_TRANSFER=1`: Automatically enabled for faster downloads
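
Putting those together for a local run might look like this sketch (the token value and dataset names are placeholders; the scripts already enable HF Transfer, so exporting it explicitly is optional):

```bash
# Sketch: export authentication and fast-download settings, then run.
# Token value and dataset names are placeholders.
export HF_TOKEN=hf_***              # or rely on `huggingface-cli login`
export HF_HUB_ENABLE_HF_TRANSFER=1  # already enabled by the scripts
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --max-tokens 1024
```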
## Resources

- [vLLM Documentation](https://docs.vllm.ai/)
- [UV Documentation](https://docs.astral.sh/uv/)
- [UV Scripts Organization](https://huggingface.co/uv-scripts)