Excited to see this model available in vLLM!
#4 opened by xupeng1023
The model's performance looks pretty awesome given its size. If it were supported in vLLM, it would be great for people to try it out and really use it.
This model is already supported in vLLM. For example, you can run:
```
python3 -m vllm.entrypoints.openai.api_server \
    --model ServiceNow-AI/Apriel-Nemotron-15b-Thinker \
    --dtype auto \
    --tensor-parallel-size 1 \
    --served-model-name apriel_15b \
    --max-logprobs 10 \
    --disable-log-requests \
    --gpu-memory-utilization 0.95
```
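Once the server is up, it exposes an OpenAI-compatible endpoint (on port 8000 by default, assuming you haven't changed it), and the model name in requests should match the `--served-model-name` above:

```
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "apriel_15b",
        "messages": [{"role": "user", "content": "What is 17 * 23?"}],
        "max_tokens": 256
      }'
```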
xupeng1023 changed discussion status to closed
Do we need to pass --enable-auto-tool-choice and --tool-call-parser for tool calling to work properly in vLLM? And if a tool-call parser is required, should we use 'mistral'? I.e., something like the sketch below:
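For reference, this is the general shape of a tool-enabled launch; whether `mistral` is the right parser for this model's chat template is exactly what's being asked here, so treat that value as an unverified assumption:

```
python3 -m vllm.entrypoints.openai.api_server \
    --model ServiceNow-AI/Apriel-Nemotron-15b-Thinker \
    --served-model-name apriel_15b \
    --enable-auto-tool-choice \
    --tool-call-parser mistral  # parser choice unverified for this model
```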