llama-3.1-8b-GRPO-V2.0 / config.yaml
inference:
  handler: handler.py
  disable_vllm: true
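
The config points the inference stack at a custom handler.py in the repo root and disables the vLLM backend, so requests are served by whatever logic that handler implements. The repo's actual handler.py is not shown here; the following is a minimal sketch assuming the Hugging Face Inference Endpoints convention (an EndpointHandler class with __init__ and __call__), with parameter names like max_new_tokens chosen for illustration.

# handler.py — hypothetical sketch of a custom handler for this config;
# assumes the HF Inference Endpoints EndpointHandler convention, not the
# repo's actual implementation.
from typing import Any, Dict, List

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` is the local checkout of the model repository.
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForCausalLM.from_pretrained(
            path, torch_dtype=torch.bfloat16, device_map="auto"
        )
        self.model.eval()

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # Expected payload shape: {"inputs": "<prompt>", "parameters": {...}}
        prompt = data["inputs"]
        params = data.get("parameters", {}) or {}
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        with torch.no_grad():
            output_ids = self.model.generate(
                **inputs,
                max_new_tokens=params.get("max_new_tokens", 256),
                temperature=params.get("temperature", 0.7),
                do_sample=params.get("do_sample", True),
            )
        text = self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
        return [{"generated_text": text}]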