commit_hash (string, 40 chars) | pr_url (string, 11-47 chars) | has_lm_eval (bool) | has_performance (bool) | has_serving (bool) | has_general_test (bool) | test_details (string, 4-563 chars) | timeline_text (string, 0-155k chars) | extracted_at (string, 0-19 chars)
---|---|---|---|---|---|---|---|---|
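Each row below pairs a merged commit with its PR URL, the four boolean test-category flags, the extracted test details, the scraped PR timeline, and an extraction timestamp. A minimal sketch of consuming rows with this schema; the JSONL file name is hypothetical, so adapt it to however the table is actually stored:

```python
import json
from typing import Iterator, Tuple

def performance_rows(path: str = "pr_timelines.jsonl") -> Iterator[Tuple[str, str, str]]:
    # Yield (commit_hash, pr_url, test_details) for performance-related PRs that also carry tests.
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            if row["has_performance"] and row["has_general_test"]:
                yield row["commit_hash"], row["pr_url"], row["test_details"]

if __name__ == "__main__":
    for commit, url, details in performance_rows():
        print(commit[:12], url, details)
```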
baeded25699f9f4851843306f27f685c4d4ee7c5
|
https://github.com/vllm-project/vllm/pull/12601
| false | false | false | true |
TEST: test, ci, ci
|
Collaborator LucasWilkinson commented Jan 31, 2025: Based off of: #12528 that needs to land first. LucasWilkinson and others added 21 commits January 30, 2025 16:57 squashed commits … 27ad92c Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> fix VLLM_MLA_PERFORM_MATRIX_ABSORPTION=0 β¦ c34e5ca Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> more cleanups β¦ f2cac91 Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> Update utils.py β¦ 068e672 Co-authored-by: Michael Goin <mgoin64@gmail.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> Update vllm/attention/backends/mla/utils.py β¦ 31b802c Co-authored-by: Michael Goin <mgoin64@gmail.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> review comments β¦ 634eee6 Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> renaming for consistency β¦ 7487429 Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> Update vllm/config.py β¦ d27826d Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> review comments β¦ 8bdc14a Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> review comments β¦ 09d814c Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> Update vllm/attention/backends/mla/utils.py β¦ 4a46014 Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> disable MLA for v3 for now β¦ 0881475 Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> fix failing test β¦ 37e39f4 Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> fix mypy β¦ cfb2d26 Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> fix mypy β¦ 5afc1bf Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> add cuda graph support β¦ 54ba87d Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> ci fix β¦ 31c34bf Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> Revert "add cuda graph support" β¦ 433322b Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> Fix TP > 1 cuda graphs β¦ f2b2500 Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> cleanup β¦ 2d61054 Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> cleanup … 645622c Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> LucasWilkinson requested review from tlrmchlsmth, WoosukKwon, mgoin, robertgshaw2-redhat, zhuohan123, youkaichao, alexm-redhat, comaniac and njhill as code owners January 31, 2025 04:18. mgoin approved these changes Feb 1, 2025 (vllm/model_executor/model_loader/loader.py). simon-mo and others added 2 commits February 1, 2025 00:56 Update loader.py … 0d66687 Co-authored-by: Michael Goin <mgoin64@gmail.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> format … 5fe1d1d Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> LucasWilkinson force-pushed the mla-fp8 branch
from 282eec1 to 5fe1d1d Compare February 1, 2025 00:57. LucasWilkinson added 2 commits February 1, 2025 01:13 reduce split kv amount … 5d5071c Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> fix none type error … 7ac6f52 Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> mgoin mentioned this pull request Feb 1, 2025: Disable chunked prefill and/or prefix caching when MLA is enabled #12638 (Closed). ci fix … dc0e2af Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com> LucasWilkinson mentioned this pull request Feb 1, 2025: [Attention] MLA with chunked prefill #12639 (Merged, 4 tasks). simon-mo merged commit baeded2 into vllm-project:main Feb 1, 2025, 42 of 44 checks passed. Isotr0py pushed a commit
to Isotr0py/vllm
that referenced
this pull request Feb 2, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … c22f65d )
This PR implements Deepseek V3 support by performing matrix absorption of the fp8 weights
---------
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com>
Signed-off-by: Isotr0py <2037008807@qq.com> srikanthsrnvs pushed a commit
to srikanthsrnvs/vllm
that referenced
this pull request Feb 3, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … bb94260 )
This PR implements Deepseek V3 support by performing matrix absorption of the fp8 weights
---------
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com>
Signed-off-by: Srikanth Srinivas <srikanth@astrum.ai> Syst3m1cAn0maly mentioned this pull request Feb 3, 2025 [Bug]: MLA Warnings when using FP8 KV cache in v0.7.1 #12680 Closed 1 task sahelib25 pushed a commit
to krai/vllm
that referenced
this pull request Feb 3, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … 06f14ab )
This PR implements Deepseek V3 support by performing matrix absorption of the fp8 weights
---------
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com> xuechendi referenced
this pull request
in yangw1234/habana-vllm-fork Feb 3, 2025 [Attention] Deepseek v3 MLA support with FP8 compute (#12601) … baf04c8 This PR implements Deepseek V3 support by performing matrix absorption of the fp8 weights
---------
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com> houseroad mentioned this pull request Feb 4, 2025 DeepSeek: MLA attention pytorch/pytorch#146330 Open NickLucche pushed a commit
to NickLucche/vllm
that referenced
this pull request Feb 7, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … 6bb84bb )
This PR implements Deepseek V3 support by performing matrix absorption of the fp8 weights
---------
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com> GWS0428 pushed a commit
to GWS0428/VARserve
that referenced
this pull request Feb 12, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … bd83b50 )
This PR implements Deepseek V3 support by performing matrix absorption of the fp8 weights
---------
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com> gshtras reviewed Feb 14, 2025 (vllm/attention/backends/mla/utils.py): def get_scale_group_shapes_for_fp8(layer: LinearBase) -> \ Tuple[Tuple[int, int], Tuple[int, int]]: if isinstance(layer.quant_method, Fp8LinearMethod): if layer.quant_method.block_quant is not None: Collaborator gshtras Feb 14, 2025: Fp8LinearMethod.block_quant is a boolean; is there meant to be a check for False instead? Member mgoin Feb 14, 2025: Yes, this is a bug, I fixed it here #13181. LucasWilkinson mentioned this pull request Feb 25, 2025: Implement MLA for deepseek v3/r1 #12597 (Closed).
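gshtras's point above is that an `is not None` test on a boolean attribute can never take the intended False branch. A tiny self-contained illustration using a stub class (not the real Fp8LinearMethod):

```python
from dataclasses import dataclass

@dataclass
class Fp8LinearMethodStub:
    block_quant: bool  # True -> block-wise scales, False -> per-tensor/channel scales

def uses_block_quant_buggy(method: Fp8LinearMethodStub) -> bool:
    # Always True: a bool is never None, so the False case is unreachable.
    return method.block_quant is not None

def uses_block_quant_fixed(method: Fp8LinearMethodStub) -> bool:
    # Truthiness is the intended check (the fix referenced as #13181).
    return method.block_quant

assert uses_block_quant_buggy(Fp8LinearMethodStub(block_quant=False)) is True   # the bug
assert uses_block_quant_fixed(Fp8LinearMethodStub(block_quant=False)) is False  # the fix
```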
yangulei pushed a commit to yangulei/vllm-fork
that referenced
this pull request Mar 11, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … b339458 )
This PR implements Deepseek V3 support by performing matrix absorption of the fp8 weights
---------
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … 28320d1 )
This PR implements Deepseek V3 support by performing matrix absorption of the fp8 weights
---------
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: simon-mo <simon.mo@hey.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com>
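The record above covers vllm-project/vllm#12601, whose commit message describes implementing Deepseek V3 support by performing matrix absorption of the fp8 weights. A minimal sketch of the absorption idea itself, with made-up shapes and a simplified per-tensor-scale FP8 scheme (this is not the PR's actual kernel or weight layout, and it needs a PyTorch build with float8 dtypes):

```python
import torch

# Two projections that are always applied back to back; in MLA these would be
# the latent down-projection and the per-head up-projection.
hidden, latent_rank, head_dim = 1024, 128, 64

# Pretend these are the stored FP8 weights with per-tensor scales (illustrative only).
w_down_fp8 = torch.randn(hidden, latent_rank).to(torch.float8_e4m3fn)
w_up_fp8 = torch.randn(latent_rank, head_dim).to(torch.float8_e4m3fn)
scale_down, scale_up = 0.02, 0.03

# Absorption: dequantize once and pre-multiply the two weights offline.
w_absorbed = (w_down_fp8.float() * scale_down) @ (w_up_fp8.float() * scale_up)

x = torch.randn(8, hidden)
# Unabsorbed path: two matmuls per token.
y_two_step = (x @ (w_down_fp8.float() * scale_down)) @ (w_up_fp8.float() * scale_up)
# Absorbed path: one matmul per token against the precomputed product.
y_absorbed = x @ w_absorbed

torch.testing.assert_close(y_two_step, y_absorbed, rtol=1e-3, atol=1e-3)
```

With quantized weights the absorbed product has to fold in the quantization scales; the sketch does that by dequantizing before the multiply, which is the simplest (memory-hungry) way to express the idea.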
|
2025-09-07 17:46:44
|
fc542144c4477ffec1d3de6fa43e54f8fb5351e8
|
https://github.com/vllm-project/vllm/pull/12563
| false | true | false | true |
PERF: tok/s, tok/s, optimization | TEST: test, CI, CI
|
Contributor xpbowler commented Jan 29, 2025 (edited by github-actions bot): [Guided decoding performance optimization] Sending the guided decoding bitmask in xgrammar to the GPU ( self.token_bitmask.to(scores.device) ) is a blocking operation that prevents the CPU from pre-launching the sampler kernels. The CPU waits until decode is complete, then copies the bitmask over. This PR changes the operation to async via setting non_blocking=True . (Current) The CPU is blocked on a cudaStreamSynchronize and only pre-launches the sampling kernels after bitmask application. Below is the Nsys profile for one decode phase from Llama 3.1 8B. With the optimization, this is no longer the case. xpbowler requested a review
from mgoin as a code owner January 29, 2025 21:16. github-actions bot commented Jan 29, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default; only fastcheck CI runs, which starts with a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to the Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: add the ready label to the PR, or enable auto-merge. 🚀 xpbowler force-pushed the main branch
from e91e01a to 99611c5 Compare January 29, 2025 21:26. mgoin approved these changes Jan 29, 2025. Member mgoin left a comment (edited): This makes sense, thanks! LGTM pending green CI. Showing the profile is great; also showing an e2e speedup (even if small) would be nice. mgoin added the structured-output and ready (ONLY add when PR is ready to merge/full CI is needed) labels Jan 29, 2025. Contributor Author xpbowler commented Jan 29, 2025 (replying to the above): For single-request benchmarks with Llama 3.1 8B running on H100, the improvement in tok/s was ~5%: 87.5 tok/s guided (unoptimized) vs 92 tok/s guided (optimized). mgoin added
the performance Performance-related issues label Jan 29, 2025 aarnphm approved these changes Jan 29, 2025 View reviewed changes xpbowler force-pushed the main branch
from 9bae63f to b9681d4 Compare January 30, 2025 15:40 mgoin enabled auto-merge (squash) January 30, 2025 22:16 Ryan N added 3 commits January 31, 2025 20:26 remove blocking bitmask memcpy β¦ 4a3d85f Signed-off-by: Ryan N <ryan.nguyen@centml.ai> re-run ci pipeline β¦ a7914a8 Signed-off-by: Ryan N <ryan.nguyen@centml.ai> pipeline β¦ f8fa0c6 Signed-off-by: Ryan N <ryan.nguyen@centml.ai> auto-merge was automatically disabled January 31, 2025 20:27 Head branch was pushed to by a user without write access xpbowler force-pushed the main branch
from b11a83f to f8fa0c6 Compare January 31, 2025 20:27. simon-mo merged commit fc54214 into vllm-project:main Jan 31, 2025, 38 of 44 checks passed. Isotr0py pushed a commit
to Isotr0py/vllm
that referenced
this pull request Feb 2, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1β¦ β¦ df7ab19 β¦2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <ryan.nguyen@centml.ai>
Signed-off-by: Isotr0py <2037008807@qq.com> srikanthsrnvs pushed a commit
to srikanthsrnvs/vllm
that referenced
this pull request Feb 3, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1β¦ β¦ d27e55d β¦2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <ryan.nguyen@centml.ai>
Signed-off-by: Srikanth Srinivas <srikanth@astrum.ai> sahelib25 pushed a commit
to krai/vllm
that referenced
this pull request Feb 3, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1β¦ β¦ 51f5127 β¦2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <ryan.nguyen@centml.ai> NickLucche pushed a commit
to NickLucche/vllm
that referenced
this pull request Feb 7, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1β¦ β¦ 5c21ca9 β¦2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <ryan.nguyen@centml.ai> GWS0428 pushed a commit
to GWS0428/VARserve
that referenced
this pull request Feb 12, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1β¦ β¦ bea306f β¦2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <ryan.nguyen@centml.ai> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1β¦ β¦ 76bd88f β¦2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <ryan.nguyen@centml.ai>
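The record above covers vllm-project/vllm#12563, which makes the xgrammar bitmask host-to-device copy asynchronous. A minimal sketch of the difference between the blocking and non-blocking copy, with illustrative tensor names and shapes rather than vLLM's actual variables:

```python
import torch

batch, vocab_size = 8, 128_000
# Bitmask produced on the CPU by the grammar engine; pinning it lets the
# non-blocking copy overlap with later kernel launches.
token_bitmask = torch.zeros(batch, vocab_size // 32, dtype=torch.int32,
                            pin_memory=torch.cuda.is_available())

if torch.cuda.is_available():
    scores = torch.randn(batch, vocab_size, device="cuda")
    # Blocking: the host waits for the copy before it can enqueue the sampler kernels.
    mask_blocking = token_bitmask.to(scores.device)
    # Non-blocking: the copy is queued on the stream and the host continues
    # immediately; stream ordering still makes later kernels see the full mask.
    mask_async = token_bitmask.to(scores.device, non_blocking=True)
```

Note that `non_blocking=True` only overlaps the copy when the source tensor is in pinned memory; with a pageable source the transfer may still serialize on the host.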
|
2025-09-07 17:46:50
|
fa63e710c7fbaae3a445f669d3b5ba6b9a4ef412
|
https://github.com/vllm-project/vllm/pull/12094
| false | true | true | true |
PERF: Throughput, Throughput, Throughput | SERVING: serving, serving, serving | TEST: test, test, test
|
Contributor youngkent commented Jan 15, 2025 (edited by github-actions bot): We do some runner bookkeeping CPU operations after each decode iteration. We could parallelize some of that bookkeeping while waiting on the CUDA sync; after the sync, only simple and fast updates remain. The change should reduce scheduling overhead between decode iterations by ~20% (see the attached GPU traces: before the optimization and after the optimization). E2E latency benchmark, run with VLLM_USE_V1=1 python3 benchmarks/benchmark_latency.py --model "/data/users/ktong/llama/llm_8b_oss" --tensor-parallel-size 1 --input_len 1000 --batch_size 32 Output (1-2% e2e latency reduction): Avg latency: 2.338167402730323 seconds 10% percentile latency: 2.3207896508742123 seconds 25% percentile latency: 2.3264574960339814 seconds 50% percentile latency: 2.3333765944698825 seconds 75% percentile latency: 2.343035737867467 seconds 90% percentile latency: 2.3567665563430635 seconds 99% percentile latency: 2.3934816433605737 seconds. youngkent requested review from WoosukKwon, njhill, ywang96 and comaniac as code owners January 15, 2025 19:16. github-actions bot commented Jan 15, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default; only fastcheck CI runs, which starts with a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to the Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: add the ready label to the PR, or enable auto-merge. 🚀 mgoin requested a review
from robertgshaw2-redhat January 15, 2025 20:20 youngkent added 3 commits January 15, 2025 12:28 reduce scheduling overhead in model runner after cuda sync β¦ ff21f9e Signed-off-by: Keyun Tong <tongkeyun@gmail.com> Fix style β¦ 41dba06 Signed-off-by: Keyun Tong <tongkeyun@gmail.com> fix style typo β¦ 9ce3d6e Signed-off-by: Keyun Tong <tongkeyun@gmail.com> youngkent force-pushed the main branch
from 4dc567b to 9ce3d6e Compare January 15, 2025 20:29. youkaichao reviewed Jan 16, 2025 (vllm/v1/outputs.py): @@ -8,7 +8,7 @@ class SamplerOutput: # [num_reqs] sampled_token_ids: List[int] Member youkaichao Jan 16, 2025: is this necessary? iirc, @tlrmchlsmth used List[int] because it is cheaper to serialize, and would benefit the tensor-parallel case, where we need to pass them across processes. Collaborator tlrmchlsmth Jan 16, 2025: This is true; I didn't look at how it impacts the non-TP case though. Collaborator robertgshaw2-redhat Jan 25, 2025 (edited): The ModelRunnerOutput is what we serialize for TP, we don't serialize the SamplerOutput directly, so this is not a concern. Collaborator tlrmchlsmth Jan 25, 2025 (edited): Ah, yep that's right -- I did change this line in #9856, but that was just downstream of changing sampled_token_ids to a List in the ModelRunnerOutput. This looks good to me since that's left as-is! Collaborator robertgshaw2-redhat commented Jan 16, 2025: Wow, great idea. I'm going to run some performance analysis on this tomorrow. WoosukKwon reviewed Jan 16, 2025 (vllm/v1/worker/gpu_model_runner.py). njhill reviewed Jan 16, 2025 (vllm/v1/sample/sampler.py). njhill mentioned this pull request Jan 17, 2025: [V1] Logprobs and prompt logprobs support #9880 (Merged). remove outdated comment … 8ca382d Signed-off-by: Keyun Tong <tongkeyun@gmail.com> youngkent force-pushed the main branch
from dfd825e to 8ca382d Compare January 17, 2025 18:15. youkaichao reviewed Jan 18, 2025 (vllm/v1/outputs.py). youkaichao reviewed Jan 18, 2025 (vllm/v1/worker/gpu_model_runner.py). Merge branch 'main' into youngkent/main 1cc6492 WoosukKwon requested a review
from alexm-redhat as a code owner January 25, 2025 22:08. WoosukKwon approved these changes Jan 25, 2025. Collaborator WoosukKwon left a comment: LGTM! Thanks for discovering and fixing this! vllm/v1/worker/gpu_model_runner.py: self.input_batch.req_ids[:num_reqs]), "req_ids contains None" req_ids = cast(List[str], self.input_batch.req_ids[:num_reqs]) # NOTE: GPU -> CPU Sync happens here. Collaborator WoosukKwon Jan 25, 2025: Just for the record: if top-p or top-k sampling is used (with the FlashInfer kernel), CPU-GPU synchronization happens inside the sampler at vllm/vllm/v1/sample/ops/topk_topp_sampler.py Lines 193 to 194
in 324960a: # NOTE: CPU-GPU synchronization happens here. if not success.all(): Collaborator robertgshaw2-redhat Jan 25, 2025: Do you think we can avoid this in a follow-up PR? Collaborator WoosukKwon Jan 26, 2025: I don't think we can. This is a fundamental limitation of the kernel (or the algorithm itself). The rejection sampling method cannot 100% guarantee success. WoosukKwon added
the ready (ONLY add when PR is ready to merge/full CI is needed) label Jan 25, 2025. tlrmchlsmth approved these changes Jan 25, 2025. robertgshaw2-redhat reviewed Jan 25, 2025 (vllm/v1/worker/gpu_model_runner.py): # NOTE: GPU -> CPU Sync happens here. # Move as many CPU operations as possible before this sync point. sampled_token_ids = sampler_output.sampled_token_ids.tolist() Collaborator robertgshaw2-redhat Jan 25, 2025 (edited): It might be faster to do sampler_output.sampled_token_ids.cpu() and then sampler_output.sampled_token_ids[i].item() in the inner loop. Collaborator WoosukKwon Jan 26, 2025: In my experience, item() took considerable time, so it should be avoided. mgoin approved these changes Jan 25, 2025. Merge remote-tracking branch 'upstream/main' f35e80b. WoosukKwon merged commit fa63e71 into vllm-project:main Jan 26, 2025, 42 of 44 checks passed. Collaborator WoosukKwon commented Jan 26, 2025: @youngkent Thanks for the PR! This change helps vLLM's performance noticeably.
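To make the exchange above concrete, a small sketch contrasting a single `.tolist()` sync with per-element `.item()` calls; the function names are made up and this is not the model runner's real code:

```python
import torch
from typing import List

def postprocess_single_sync(sampled_token_ids: torch.Tensor) -> List[int]:
    # One GPU -> CPU sync for the whole batch; do as much CPU-side bookkeeping
    # as possible *before* reaching this line.
    return sampled_token_ids.tolist()

def postprocess_per_item(sampled_token_ids: torch.Tensor) -> List[int]:
    # Anti-pattern called out in the review: .item() inside the loop pays the
    # device-sync / host-transfer overhead once per element.
    return [int(sampled_token_ids[i].item())
            for i in range(sampled_token_ids.numel())]
```

On a CUDA tensor the first version crosses the device boundary once per decode step, which is what the "# NOTE: GPU -> CPU Sync happens here" comment in the diff marks.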
tjtanaa pushed a commit to EmbeddedLLM/vllm
that referenced
this pull request Jan 28, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( β¦ β¦ 4388fac β¦vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com> rasmith pushed a commit
to rasmith/vllm
that referenced
this pull request Jan 30, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( β¦ β¦ 4a21854 β¦vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com> Isotr0py pushed a commit
to Isotr0py/vllm
that referenced
this pull request Feb 2, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( β¦ β¦ 0442131 β¦vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com> hongxiayang added a commit
to ROCm/vllm
that referenced
this pull request Feb 3, 2025 [MFM-2025-02-03] Merge Main to llama fp8; With Faster ROCm Paged Atteβ¦ β¦ 479b843 β¦ntion ( #399 )
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Oleg Mosalov <oleg@krai.ai>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <757486878@qq.com>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <467638484@qq.com>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <f1renze.142857@gmail.com>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <yikunkero@gmail.com>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <alivanov@banff-cyxtera-s65-4.amd.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <467638484@qq.com>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <michael@neuralmagic.com>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstrteam moe kernel version
* make upstream fix for v3 also works for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <charlifu@amd.com>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <kuntai@uchicago.edu>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <jiang1.li@intel.com>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <wallashss@ibm.com>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <hongxyan@amd.com>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <huseyin.isik000@gmail.com>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <jasoncky96@gmail.com>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <michael@neuralmagic.com>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <nlucches@redhat.com>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <cmq0113@163.com>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis SchΓΆnleber <joennlae@gmail.com>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <rickyx@anyscale.com>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <andy@mistral.ai>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <maleksan@amd.com>
Co-authored-by: maleksan85 <maleksan@amd.com>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <hongxyan@amd.com>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <kevin@anyscale.com>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <nhill@redhat.com>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <cmq0113@163.com>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <1247714429@qq.com>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <863579016@qq.com>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <maleksan@amd.com>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <zhenweiliu@habana.ai>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: imkero <kerorek@outlook.com>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
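A minimal illustration of the hardening referenced above; the checkpoint path is a placeholder.
```python
# weights_only=True restricts unpickling to tensor data, so loading an
# untrusted checkpoint cannot execute arbitrary pickled code.
import torch

state_dict = torch.load("checkpoint.pt", map_location="cpu", weights_only=True)
```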
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <kyle@mistele.com>
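A hedged sketch of a request to the rerank endpoint this entry adds; the payload follows the Jina/Cohere-style convention the title references, and the exact field names, endpoint path, and response shape should be treated as assumptions.
```python
# Hedged sketch: rerank a list of documents against a query via the HTTP API.
import json
import urllib.request

payload = {
    "model": "BAAI/bge-reranker-base",
    "query": "What is the capital of France?",
    "documents": [
        "Paris is the capital of France.",
        "Berlin is the capital of Germany.",
    ],
}
req = urllib.request.Request(
    "http://localhost:8000/rerank",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # documents ordered by relevance score
```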
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: TJian <tunjian1996@gmail.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
* fix pre-commit lint
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
---------
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: Chenguang Li <757486878@qq.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Shanshan Shen <467638484@qq.com>
Signed-off-by: elijah <f1renze.142857@gmail.com>
Signed-off-by: Yikun <yikunkero@gmail.com>
Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: yisheng <yi.sheng@intel.com>
Signed-off-by: Abatom <abzhonghua@gmail.com>
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
Signed-off-by: Sourashis Roy <sroy@roblox.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Wallas Santos <wallashss@ibm.com>
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: yan ma <yan.ma@intel.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Signed-off-by: Ye Qi <yeq@meta.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
Signed-off-by: Kuntai Du <kuntai@uchicago.edu>
Signed-off-by: Ren MinMin <renmm6@chinaunicom.cn>
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
Signed-off-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: hongxyan <hongxyan@amd.com>
Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>
Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Signed-off-by: Martin Gleize <mgleize@meta.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: isikhi <huseyin.isik000@gmail.com>
Signed-off-by: Jason Cheng <jasoncky96@gmail.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
Signed-off-by: Jannis Schönleber <joennlae@gmail.com>
Signed-off-by: rickyx <rickyx@anyscale.com>
Signed-off-by: Andy Lo <andy@mistral.ai>
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
Signed-off-by: maleksan85 <maleksan@amd.com>
Signed-off-by: Hongxia Yang <hongxyan@amd.com>
Signed-off-by: kevin <kevin@anyscale.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: xffxff <1247714429@qq.com>
Signed-off-by: wangerxiao <863579016@qq.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: zhenwei <zhenweiliu@habana.ai>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Kyle Mistele <kyle@mistele.com>
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
Co-authored-by: Rafael Vasquez <rafvasq21@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Akshat Tripathi <Akshat.tripathi6568@gmail.com>
Co-authored-by: Oleg Mosalov <oleg@krai.ai>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Avshalom Manevich <12231371+avshalomman@users.noreply.github.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-neuralmagic@users.noreply.github.com>
Co-authored-by: Yangcheng Li <liyangcheng.lyc@alibaba-inc.com>
Co-authored-by: Siyuan Li <94890248+liaoyanqing666@users.noreply.github.com>
Co-authored-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Co-authored-by: Concurrensee <yida.wu@amd.com>
Co-authored-by: Chenguang Li <757486878@qq.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Alex Brooks <alex.brooks@ibm.com>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Shanshan Shen <467638484@qq.com>
Co-authored-by: elijah <30852919+e1ijah1@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Steve Luo <36296769+SunflowerAries@users.noreply.github.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: Alexei V. Ivanov <alivanov@banff-cyxtera-s65-4.amd.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: maang-h <55082429+maang-h@users.noreply.github.com>
Co-authored-by: YiSheng5 <yi.sheng@intel.com>
Co-authored-by: Zhonghua Deng <abatom@163.com>
Co-authored-by: Liangfu Chen <liangfc@amazon.com>
Co-authored-by: XiaobingZhang <xiaobingzhangupc@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Yuan <yuan.zhou@intel.com>
Co-authored-by: jiangjiadi <34134495+jiangjiadi@users.noreply.github.com>
Co-authored-by: jiadi.jjd <jiadi.jjd@antgroup.com>
Co-authored-by: sroy745 <142070531+sroy745@users.noreply.github.com>
Co-authored-by: Jie Fu (傅杰) <jiefu@tencent.com>
Co-authored-by: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
Co-authored-by: WangErXiao <863579016@qq.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Wallas Henrique <wallashss@users.noreply.github.com>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Yan Ma <yan.ma@intel.com>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Co-authored-by: Guspan Tanadi <36249910+guspan-tanadi@users.noreply.github.com>
Co-authored-by: Ye (Charlotte) Qi <ye.charlotte.qi@gmail.com>
Co-authored-by: yeq <yeq@devgpu004.lla3.facebook.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Charles Frye <cfrye59@gmail.com>
Co-authored-by: Joe Runde <Joseph.Runde@ibm.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
Co-authored-by: cennn <61925104+cennn@users.noreply.github.com>
Co-authored-by: Kuntai Du <kuntai@uchicago.edu>
Co-authored-by: minmin <rmm0811@gmail.com>
Co-authored-by: Ren MinMin <renmm6@chinaunicom.cn>
Co-authored-by: Travis Johnson <tsjohnso@us.ibm.com>
Co-authored-by: Fred Reiss <frreiss@us.ibm.com>
Co-authored-by: shaochangxu <85155497+shaochangxu@users.noreply.github.com>
Co-authored-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: sixgod <evethwillbeok@outlook.com>
Co-authored-by: Elfie Guo <164945471+elfiegg@users.noreply.github.com>
Co-authored-by: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>
Co-authored-by: Keyun Tong <tongkeyun@gmail.com>
Co-authored-by: RunningLeon <maningsheng@sensetime.com>
Co-authored-by: kewang-xlnx <73578509+kewang-xlnx@users.noreply.github.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: tvirolai-amd <teemu.virolainen@amd.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Zhaoyi Li <36555117+Lzy17@users.noreply.github.com>
Co-authored-by: charlifu <charlifu@amd.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>
Co-authored-by: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Co-authored-by: yancong <32220263+ice-tong@users.noreply.github.com>
Co-authored-by: Michal Adamczyk <madamczyk@habana.ai>
Co-authored-by: gujing <925973396@qq.com>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: Işık <41375111+isikhi@users.noreply.github.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cheng Kuan Yong Jason <jasoncky96@gmail.com>
Co-authored-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Thomas Parnell <tpa@zurich.ibm.com>
Co-authored-by: Jannis Schönleber <joennlae@gmail.com>
Co-authored-by: Ricky Xu <xuchen727@hotmail.com>
Co-authored-by: Andy Lo <andylolu24@gmail.com>
Co-authored-by: Adrian Cole <64215+codefromthecrypt@users.noreply.github.com>
Co-authored-by: Jani Monoses <jani.monoses@gmail.com>
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
Co-authored-by: Aleksandr Malyshev <164964928+maleksan85@users.noreply.github.com>
Co-authored-by: maleksan85 <maleksan@amd.com>
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
Co-authored-by: zhou fan <1247714429@qq.com>
Co-authored-by: ilia-cher <30845429+ilia-cher@users.noreply.github.com>
Co-authored-by: liuzhenwei <zhenweiliu@habana.ai>
Co-authored-by: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
Co-authored-by: omer-dayan <omer@run.ai>
Co-authored-by: Mohit Deopujari <mdeopujari@habana.ai>
Co-authored-by: Jeremy Arnold <103538711+JArnoldAMD@users.noreply.github.com>
Co-authored-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Co-authored-by: Kyle Mistele <kyle@mistele.com>
Co-authored-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Bowen Wang <abmfy@icloud.com>
Co-authored-by: Bowen Bao <bowenbao@amd.com>
Co-authored-by: arakowsk-amd <182798202+arakowsk-amd@users.noreply.github.com>
Co-authored-by: sanyalington <shomy.sanyal@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 5, 2025
[Bug Fix] Missing vllm.envs ( #405 ) … 87b3c56
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Oleg Mosalov <oleg@krai.ai>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <757486878@qq.com>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <467638484@qq.com>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <f1renze.142857@gmail.com>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <yikunkero@gmail.com>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <alivanov@banff-cyxtera-s65-4.amd.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <467638484@qq.com>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Misc] Add multistep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <michael@neuralmagic.com>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstrteam moe kernel version
* make upstream fix for v3 also works for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <charlifu@amd.com>
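A hedged sketch of the e4m3fn-to-e4m3fnuz requantization the bullets above describe; the function and constant names are illustrative, and the actual change also covers MoE weights and the block-quant padding assertion.
```python
# Illustrative sketch: OCP float8_e4m3fn checkpoints are re-expressed in the
# float8_e4m3fnuz format used on MI300. The fnuz exponent bias differs by one,
# so the same bit pattern encodes half the value and the scale must double;
# the fn NaN pattern (0x80) has no fnuz equivalent and is zeroed out.
import torch

def requantize_fn_to_fnuz(weight: torch.Tensor, weight_scale: torch.Tensor):
    assert weight.dtype == torch.float8_e4m3fn
    bits = weight.view(torch.int8).clone()
    bits[bits == -128] = 0                       # 0x80: fn NaN -> fnuz zero
    weight_fnuz = bits.view(torch.float8_e4m3fnuz)
    return weight_fnuz, weight_scale * 2.0
```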
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <kuntai@uchicago.edu>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <jiang1.li@intel.com>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <wallashss@ibm.com>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <hongxyan@amd.com>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <huseyin.isik000@gmail.com>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <jasoncky96@gmail.com>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <michael@neuralmagic.com>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <nlucches@redhat.com>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <cmq0113@163.com>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <joennlae@gmail.com>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <rickyx@anyscale.com>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <andy@mistral.ai>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <maleksan@amd.com>
Co-authored-by: maleksan85 <maleksan@amd.com>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <hongxyan@amd.com>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <kevin@anyscale.com>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <nhill@redhat.com>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <cmq0113@163.com>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <1247714429@qq.com>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <863579016@qq.com>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <maleksan@amd.com>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <zhenweiliu@habana.ai>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: imkero <kerorek@outlook.com>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <kyle@mistele.com>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: TJian <tunjian1996@gmail.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: inclucde the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
* fix pre-commit lint
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
* [Bugfix] included missing environment variable
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
---------
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: Chenguang Li <757486878@qq.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Shanshan Shen <467638484@qq.com>
Signed-off-by: elijah <f1renze.142857@gmail.com>
Signed-off-by: Yikun <yikunkero@gmail.com>
Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: yisheng <yi.sheng@intel.com>
Signed-off-by: Abatom <abzhonghua@gmail.com>
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
Signed-off-by: Sourashis Roy <sroy@roblox.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Wallas Santos <wallashss@ibm.com>
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: yan ma <yan.ma@intel.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Signed-off-by: Ye Qi <yeq@meta.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
Signed-off-by: Kuntai Du <kuntai@uchicago.edu>
Signed-off-by: Ren MinMin <renmm6@chinaunicom.cn>
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
Signed-off-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: hongxyan <hongxyan@amd.com>
Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>
Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Signed-off-by: Martin Gleize <mgleize@meta.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: isikhi <huseyin.isik000@gmail.com>
Signed-off-by: Jason Cheng <jasoncky96@gmail.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
Signed-off-by: Jannis Schönleber <joennlae@gmail.com>
Signed-off-by: rickyx <rickyx@anyscale.com>
Signed-off-by: Andy Lo <andy@mistral.ai>
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
Signed-off-by: maleksan85 <maleksan@amd.com>
Signed-off-by: Hongxia Yang <hongxyan@amd.com>
Signed-off-by: kevin <kevin@anyscale.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: xffxff <1247714429@qq.com>
Signed-off-by: wangerxiao <863579016@qq.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: zhenwei <zhenweiliu@habana.ai>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Kyle Mistele <kyle@mistele.com>
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Akshat Tripathi <Akshat.tripathi6568@gmail.com>
Co-authored-by: Oleg Mosalov <oleg@krai.ai>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Avshalom Manevich <12231371+avshalomman@users.noreply.github.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-neuralmagic@users.noreply.github.com>
Co-authored-by: Yangcheng Li <liyangcheng.lyc@alibaba-inc.com>
Co-authored-by: Siyuan Li <94890248+liaoyanqing666@users.noreply.github.com>
Co-authored-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Co-authored-by: Concurrensee <yida.wu@amd.com>
Co-authored-by: Chenguang Li <757486878@qq.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Alex Brooks <alex.brooks@ibm.com>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Shanshan Shen <467638484@qq.com>
Co-authored-by: elijah <30852919+e1ijah1@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Steve Luo <36296769+SunflowerAries@users.noreply.github.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: Alexei V. Ivanov <alivanov@banff-cyxtera-s65-4.amd.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: maang-h <55082429+maang-h@users.noreply.github.com>
Co-authored-by: YiSheng5 <yi.sheng@intel.com>
Co-authored-by: Zhonghua Deng <abatom@163.com>
Co-authored-by: Liangfu Chen <liangfc@amazon.com>
Co-authored-by: XiaobingZhang <xiaobingzhangupc@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Yuan <yuan.zhou@intel.com>
Co-authored-by: jiangjiadi <34134495+jiangjiadi@users.noreply.github.com>
Co-authored-by: jiadi.jjd <jiadi.jjd@antgroup.com>
Co-authored-by: sroy745 <142070531+sroy745@users.noreply.github.com>
Co-authored-by: Jie Fu (傅杰) <jiefu@tencent.com>
Co-authored-by: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
Co-authored-by: WangErXiao <863579016@qq.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Wallas Henrique <wallashss@users.noreply.github.com>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Yan Ma <yan.ma@intel.com>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Co-authored-by: Guspan Tanadi <36249910+guspan-tanadi@users.noreply.github.com>
Co-authored-by: Ye (Charlotte) Qi <ye.charlotte.qi@gmail.com>
Co-authored-by: yeq <yeq@devgpu004.lla3.facebook.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Charles Frye <cfrye59@gmail.com>
Co-authored-by: Joe Runde <Joseph.Runde@ibm.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
Co-authored-by: cennn <61925104+cennn@users.noreply.github.com>
Co-authored-by: Kuntai Du <kuntai@uchicago.edu>
Co-authored-by: minmin <rmm0811@gmail.com>
Co-authored-by: Ren MinMin <renmm6@chinaunicom.cn>
Co-authored-by: Travis Johnson <tsjohnso@us.ibm.com>
Co-authored-by: Fred Reiss <frreiss@us.ibm.com>
Co-authored-by: shaochangxu <85155497+shaochangxu@users.noreply.github.com>
Co-authored-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: sixgod <evethwillbeok@outlook.com>
Co-authored-by: Rafael Vasquez <rafvasq21@gmail.com>
Co-authored-by: Elfie Guo <164945471+elfiegg@users.noreply.github.com>
Co-authored-by: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>
Co-authored-by: Keyun Tong <tongkeyun@gmail.com>
Co-authored-by: RunningLeon <maningsheng@sensetime.com>
Co-authored-by: kewang-xlnx <73578509+kewang-xlnx@users.noreply.github.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: tvirolai-amd <teemu.virolainen@amd.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Zhaoyi Li <36555117+Lzy17@users.noreply.github.com>
Co-authored-by: charlifu <charlifu@amd.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>
Co-authored-by: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Co-authored-by: yancong <32220263+ice-tong@users.noreply.github.com>
Co-authored-by: Michal Adamczyk <madamczyk@habana.ai>
Co-authored-by: gujing <925973396@qq.com>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: Işık <41375111+isikhi@users.noreply.github.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cheng Kuan Yong Jason <jasoncky96@gmail.com>
Co-authored-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Thomas Parnell <tpa@zurich.ibm.com>
Co-authored-by: Jannis Schönleber <joennlae@gmail.com>
Co-authored-by: Ricky Xu <xuchen727@hotmail.com>
Co-authored-by: Andy Lo <andylolu24@gmail.com>
Co-authored-by: Adrian Cole <64215+codefromthecrypt@users.noreply.github.com>
Co-authored-by: Jani Monoses <jani.monoses@gmail.com>
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
Co-authored-by: Aleksandr Malyshev <164964928+maleksan85@users.noreply.github.com>
Co-authored-by: maleksan85 <maleksan@amd.com>
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
Co-authored-by: zhou fan <1247714429@qq.com>
Co-authored-by: ilia-cher <30845429+ilia-cher@users.noreply.github.com>
Co-authored-by: liuzhenwei <zhenweiliu@habana.ai>
Co-authored-by: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
Co-authored-by: omer-dayan <omer@run.ai>
Co-authored-by: Mohit Deopujari <mdeopujari@habana.ai>
Co-authored-by: Jeremy Arnold <103538711+JArnoldAMD@users.noreply.github.com>
Co-authored-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Co-authored-by: Kyle Mistele <kyle@mistele.com>
Co-authored-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Bowen Wang <abmfy@icloud.com>
Co-authored-by: Bowen Bao <bowenbao@amd.com>
Co-authored-by: arakowsk-amd <182798202+arakowsk-amd@users.noreply.github.com>
Co-authored-by: sanyalington <shomy.sanyal@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com> NickLucche pushed a commit
to NickLucche/vllm
that referenced
this pull request Feb 7, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( … … 42bfed0 …vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com> GWS0428 pushed a commit
to GWS0428/VARserve
that referenced
this pull request Feb 12, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( … … 75d4b32 …vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com> hongxiayang added a commit
to ROCm/vllm
that referenced
this pull request Feb 19, 2025 [FEAT] [AITER] Support AITER operators: Fused MoE, Linear, Norm ( #436 ) … 4c8c86d * [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <467638484@qq.com>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <michael@neuralmagic.com>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstrteam moe kernel version
* make upstream fix for v3 also works for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <charlifu@amd.com>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <kuntai@uchicago.edu>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <jiang1.li@intel.com>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <wallashss@ibm.com>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <hongxyan@amd.com>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <huseyin.isik000@gmail.com>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <jasoncky96@gmail.com>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <michael@neuralmagic.com>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <nlucches@redhat.com>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <cmq0113@163.com>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <joennlae@gmail.com>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <rickyx@anyscale.com>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <andy@mistral.ai>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <maleksan@amd.com>
Co-authored-by: maleksan85 <maleksan@amd.com>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <hongxyan@amd.com>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <kevin@anyscale.com>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <nhill@redhat.com>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <cmq0113@163.com>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <1247714429@qq.com>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <863579016@qq.com>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <maleksan@amd.com>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <zhenweiliu@habana.ai>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: imkero <kerorek@outlook.com>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* Integrated ater: kvcache pa gemm rmsnorm
* fix pa
* fix
* replace topk softmax
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* replace fp moe kernel with aiter kernel
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
* change ater to aiter
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* Applying scales rename to fp8 config
* Applying scales rename to fp8 config ( #387 )
* Update Dockerfile.rocm
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* Using aiter moe kernel
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry * Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <ywang@roblox.com>
* fix pa copy
* pa update
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
* add fp16 pa support for aiter
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <kyle@mistele.com>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <2037008807@qq.com>
* aiter build instructions
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
* Copy to the right path
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Add env var toggles to disable AITER MoE or PA (both by default on)
* Update accuracy benchmark for batch size > 1
* Add a few more AITER toggles for norm and linear layers
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
---------
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: TJian <tunjian1996@gmail.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Public aiter repo
* Fail if aiter build failed silently
* Aiter can only be built on MI300x
* Typo fix
* Aiter PA off by default
* Changes to support updated aiter FP8 PA
* Support FP8 and INT8 KV cache according to ROCm/aiter#90 * add moe weight shuffle for dynamic quant and unquantized path
Signed-off-by: charlifu <charlifu@amd.com>
* Use FP16-native PA after support in ROCm/aiter#97 * Fix: Use FP8 pertoken quantize if KV cache dtype is FP8
* revert rocm_flash_attn.py line 883
* Don't enable by default to use an RC for main vllm-dev docker
* use ck moe for bf16 and fp16 fused_moe
* Merge remote-tracking branch 'origin/aiter_intergration_final' into merge-aiter-llama-fp8
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
* [Bugfix] include moe shuffle env variable
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
---------
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: yisheng <yi.sheng@intel.com>
Signed-off-by: Abatom <abzhonghua@gmail.com>
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
Signed-off-by: Sourashis Roy <sroy@roblox.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Wallas Santos <wallashss@ibm.com>
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: yan ma <yan.ma@intel.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Signed-off-by: Ye Qi <yeq@meta.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
Signed-off-by: Kuntai Du <kuntai@uchicago.edu>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Ren MinMin <renmm6@chinaunicom.cn>
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Signed-off-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: Chenguang Li <757486878@qq.com>
Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>
Signed-off-by: Shanshan Shen <467638484@qq.com>
Signed-off-by: elijah <f1renze.142857@gmail.com>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: hongxyan <hongxyan@amd.com>
Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>
Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Signed-off-by: Martin Gleize <mgleize@meta.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: isikhi <huseyin.isik000@gmail.com>
Signed-off-by: Jason Cheng <jasoncky96@gmail.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
Signed-off-by: Jannis Schönleber <joennlae@gmail.com>
Signed-off-by: rickyx <rickyx@anyscale.com>
Signed-off-by: Andy Lo <andy@mistral.ai>
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
Signed-off-by: maleksan85 <maleksan@amd.com>
Signed-off-by: Hongxia Yang <hongxyan@amd.com>
Signed-off-by: kevin <kevin@anyscale.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: xffxff <1247714429@qq.com>
Signed-off-by: wangerxiao <863579016@qq.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: zhenwei <zhenweiliu@habana.ai>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Kyle Mistele <kyle@mistele.com>
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: charlifu <charlifu@amd.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: maang-h <55082429+maang-h@users.noreply.github.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
Co-authored-by: YiSheng5 <yi.sheng@intel.com>
Co-authored-by: Zhonghua Deng <abatom@163.com>
Co-authored-by: Liangfu Chen <liangfc@amazon.com>
Co-authored-by: XiaobingZhang <xiaobingzhangupc@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Yuan <yuan.zhou@intel.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: jiangjiadi <34134495+jiangjiadi@users.noreply.github.com>
Co-authored-by: jiadi.jjd <jiadi.jjd@antgroup.com>
Co-authored-by: sroy745 <142070531+sroy745@users.noreply.github.com>
Co-authored-by: Jie Fu (傅杰) <jiefu@tencent.com>
Co-authored-by: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
Co-authored-by: WangErXiao <863579016@qq.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Wallas Henrique <wallashss@users.noreply.github.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Yan Ma <yan.ma@intel.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-neuralmagic@users.noreply.github.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Co-authored-by: Guspan Tanadi <36249910+guspan-tanadi@users.noreply.github.com>
Co-authored-by: Ye (Charlotte) Qi <ye.charlotte.qi@gmail.com>
Co-authored-by: yeq <yeq@devgpu004.lla3.facebook.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Charles Frye <cfrye59@gmail.com>
Co-authored-by: Joe Runde <Joseph.Runde@ibm.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
Co-authored-by: cennn <61925104+cennn@users.noreply.github.com>
Co-authored-by: Kuntai Du <kuntai@uchicago.edu>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: minmin <rmm0811@gmail.com>
Co-authored-by: Ren MinMin <renmm6@chinaunicom.cn>
Co-authored-by: Travis Johnson <tsjohnso@us.ibm.com>
Co-authored-by: Fred Reiss <frreiss@us.ibm.com>
Co-authored-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Co-authored-by: shaochangxu <85155497+shaochangxu@users.noreply.github.com>
Co-authored-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: sixgod <evethwillbeok@outlook.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Rafael Vasquez <rafvasq21@gmail.com>
Co-authored-by: Akshat Tripathi <Akshat.tripathi6568@gmail.com>
Co-authored-by: Oleg Mosalov <oleg@krai.ai>
Co-authored-by: Avshalom Manevich <12231371+avshalomman@users.noreply.github.com>
Co-authored-by: Yangcheng Li <liyangcheng.lyc@alibaba-inc.com>
Co-authored-by: Siyuan Li <94890248+liaoyanqing666@users.noreply.github.com>
Co-authored-by: Concurrensee <yida.wu@amd.com>
Co-authored-by: Chenguang Li <757486878@qq.com>
Co-authored-by: Alex Brooks <alex.brooks@ibm.com>
Co-authored-by: Shanshan Shen <467638484@qq.com>
Co-authored-by: elijah <30852919+e1ijah1@users.noreply.github.com>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: Elfie Guo <164945471+elfiegg@users.noreply.github.com>
Co-authored-by: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>
Co-authored-by: Keyun Tong <tongkeyun@gmail.com>
Co-authored-by: RunningLeon <maningsheng@sensetime.com>
Co-authored-by: kewang-xlnx <73578509+kewang-xlnx@users.noreply.github.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: tvirolai-amd <teemu.virolainen@amd.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Zhaoyi Li <36555117+Lzy17@users.noreply.github.com>
Co-authored-by: charlifu <charlifu@amd.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>
Co-authored-by: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Co-authored-by: yancong <32220263+ice-tong@users.noreply.github.com>
Co-authored-by: Michal Adamczyk <madamczyk@habana.ai>
Co-authored-by: gujing <925973396@qq.com>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: Işık <41375111+isikhi@users.noreply.github.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cheng Kuan Yong Jason <jasoncky96@gmail.com>
Co-authored-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Thomas Parnell <tpa@zurich.ibm.com>
Co-authored-by: Jannis Schönleber <joennlae@gmail.com>
Co-authored-by: Ricky Xu <xuchen727@hotmail.com>
Co-authored-by: Andy Lo <andylolu24@gmail.com>
Co-authored-by: Adrian Cole <64215+codefromthecrypt@users.noreply.github.com>
Co-authored-by: Jani Monoses <jani.monoses@gmail.com>
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
Co-authored-by: Aleksandr Malyshev <164964928+maleksan85@users.noreply.github.com>
Co-authored-by: maleksan85 <maleksan@amd.com>
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
Co-authored-by: zhou fan <1247714429@qq.com>
Co-authored-by: ilia-cher <30845429+ilia-cher@users.noreply.github.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: liuzhenwei <zhenweiliu@habana.ai>
Co-authored-by: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
Co-authored-by: amd-ruitang3 <Rui.Tang2@amd.com>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
Co-authored-by: omer-dayan <omer@run.ai>
Co-authored-by: Mohit Deopujari <mdeopujari@habana.ai>
Co-authored-by: Jeremy Arnold <103538711+JArnoldAMD@users.noreply.github.com>
Co-authored-by: chenjun <junchen2@amd.com>
Co-authored-by: ValarLip <340077269@qq.com>
Co-authored-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Co-authored-by: Kyle Mistele <kyle@mistele.com>
Co-authored-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Bowen Wang <abmfy@icloud.com>
Co-authored-by: Bowen Bao <bowenbao@amd.com>
Co-authored-by: arakowsk-amd <182798202+arakowsk-amd@users.noreply.github.com>
Co-authored-by: Matthew Wong <Matthew.Wong2@amd.com>
Co-authored-by: sanyalington <shomy.sanyal@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: charlifu <chalifu@amd.com> mzusman pushed a commit
to mzusman/vllm
that referenced
this pull request Mar 12, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( β¦ β¦ bdf42bf β¦vllm-project#12094 )
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
extracted_at: 2025-09-07 17:46:54

| commit_hash | pr_url | has_lm_eval | has_performance | has_serving | has_general_test | test_details | timeline_text | extracted_at |
|---|---|---|---|---|---|---|---|---|
| 6dd94dbe94c1820a1e224cba65efcf0befa97995 | https://github.com/vllm-project/vllm/pull/12380 | false | true | true | true | PERF: throughput, Throughput, Throughput \| SERVING: serving, serving, serving \| TEST: test, test, t… | Copy link Member youkaichao commented Jan 24, 2025 • edited by github-actions bot Loading Uh oh!… | 2025-09-07 17:46:57 |
| aea94362c9bdd08ed2b346701bdc09d278e85f66 | https://github.com/vllm-project/vllm/pull/12287 | true | true | true | true | LM_EVAL: lm-eval, lm_eval, gsm8k \| PERF: TTFT, TTFT, TTFT \| SERVING: vllm serve, vllm serve, Servin… | Copy link Member njhill commented Jan 21, 2025 • edited by github-actions bot Loading Uh oh! Ther… | 2025-09-07 17:47:04 |
| 3127e975fb9417d10513e25b80820870f594c627 | https://github.com/vllm-project/vllm/pull/12212 | false | true | true | true | PERF: Throughput, Throughput, Throughput \| SERVING: serving, serving, serving \| TEST: test, test, t… | Copy link Member DarkLight1337 commented Jan 20, 2025 • edited by github-actions bot Loading Uh o… | 2025-09-07 17:47:09 |
| 310aca88c984983189a57f1b72e3b1dde89fb92f | https://github.com/vllm-project/vllm/pull/11870 | false | true | true | true | PERF: latency, latency, latency \| SERVING: Serving, serving, Serving \| TEST: test, test, test | Copy link Member youkaichao commented Jan 9, 2025 • edited by github-actions bot Loading Uh oh! T… | 2025-09-07 17:47:12 |
| 526de822d501c792b051c864ba873a836d78d5bf | https://github.com/vllm-project/vllm/pull/11698 | false | true | true | true | PERF: latency, latency, TPOT \| SERVING: Serving, serving, Serving \| TEST: test, test, test | Copy link Contributor rasmith commented Jan 3, 2025 • edited by github-actions bot Loading Uh oh!… | 2025-09-07 17:47:15 |
| b55ed6ef8ab0dce7fb0f79ff292dafdb4d22610c | https://github.com/vllm-project/vllm/pull/11692 | false | true | true | true | PERF: latency, optimization, speedup \| SERVING: Serving, serving, API Server \| TEST: test, test, te… | Copy link Collaborator WoosukKwon commented Jan 2, 2025 Currently, we don't consider the actual len… | 2025-09-07 17:47:18 |
| f26c4aeecba481ce1445be7a998b0b97460a13bb | https://github.com/vllm-project/vllm/pull/11275 | false | false | false | true | TEST: test, CI, CI | Copy link Collaborator ruisearch42 commented Dec 18, 2024 • edited by github-actions bot Loading… | 2025-09-07 17:47:21 |
vLLM PR Test Classification Dataset
Overview
This dataset contains 98 vLLM project commits with their corresponding Pull Request (PR) timeline data and comprehensive test type classifications. It provides insights into testing patterns in a major LLM serving infrastructure project.
Dataset Description
Purpose
This dataset was created by analyzing vLLM project PR timelines to:
- Identify different types of testing and benchmarking activities
- Understand testing patterns in LLM infrastructure development
- Provide labeled data for ML models to classify test types in software PRs
- Enable research on performance optimization trends in LLM serving
Test Categories
Each commit is classified across four test categories:
| Category | Description | Keywords | Prevalence |
|---|---|---|---|
| LM Evaluation | Language model evaluation tests | `lm_eval`, `gsm8k`, `mmlu`, `hellaswag`, `truthfulqa` | 25.5% |
| Performance | Performance benchmarking tests | `TTFT`, `throughput`, `latency`, `ITL`, `TPOT`, `tok/s` | 81.6% |
| Serving | Serving functionality tests | `vllm serve`, `API server`, `frontend`, `online serving` | 53.1% |
| General Test | General testing activities | `CI`, `pytest`, `unittest`, `buildkite`, `fastcheck` | 96.9% |
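
These flags were produced by keyword matching over each PR timeline (see the Limitations section below), so the labelling logic is easy to approximate. The sketch below is illustrative only: the keyword lists are abbreviated from the table above, and the actual extraction pipeline may have normalized the text differently.

```python
# Abbreviated keyword lists taken from the table above; the real extraction
# may have used longer lists and additional normalization.
CATEGORY_KEYWORDS = {
    "has_lm_eval": ["lm_eval", "lm-eval", "gsm8k", "mmlu", "hellaswag", "truthfulqa"],
    "has_performance": ["ttft", "throughput", "latency", "itl", "tpot", "tok/s"],
    "has_serving": ["vllm serve", "api server", "frontend", "online serving"],
    "has_general_test": ["ci", "pytest", "unittest", "buildkite", "fastcheck"],
}

def classify_timeline(timeline_text: str) -> dict:
    """Flag each category whose keywords appear (naive substring match) in the timeline text."""
    text = timeline_text.lower()
    return {
        category: any(keyword in text for keyword in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }

print(classify_timeline("Ran lm_eval on GSM8K; TTFT improved and CI is green."))
# {'has_lm_eval': True, 'has_performance': True, 'has_serving': False, 'has_general_test': True}
```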
Dataset Statistics
Overall Distribution
- Total commits: 98
- Multi-category commits: 76 (77.6%)
- Average test types per commit: 2.57
Detailed Keyword Frequency
Top Performance Keywords (80 commits)
- `throughput`: 241 mentions
- `latency`: 191 mentions
- `profiling`: 114 mentions
- `TTFT` (Time To First Token): 114 mentions
- `ITL` (Inter-token Latency): 114 mentions
- `TPOT` (Time Per Output Token): 108 mentions
- `optimization`: 87 mentions
- `tok/s` (tokens per second): 66 mentions
Top LM Evaluation Keywords (25 commits)
- `gsm8k`: 62 mentions
- `lm_eval`: 33 mentions
- `lm-eval`: 9 mentions
- `mmlu`: 3 mentions
- `humaneval`: 1 mention
Top Serving Keywords (52 commits)
- `frontend`: 181 mentions
- `serving`: 74 mentions
- `api server`: 42 mentions
- `vllm serve`: 23 mentions
- `online serving`: 19 mentions
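
These prevalence figures can be re-derived from the boolean columns as a sanity check. Assuming the dataset has been loaded as in the Basic Loading example further down, and treating "multi-category" as two or more flags set (an assumption, since the card does not spell out the threshold), a quick recomputation looks like this:

```python
df = dataset['train'].to_pandas()
flag_columns = ['has_lm_eval', 'has_performance', 'has_serving', 'has_general_test']

# Per-category prevalence; should roughly match 25.5% / 81.6% / 53.1% / 96.9%
for column in flag_columns:
    print(f"{column}: {df[column].mean():.1%}")

# Multi-category commits and average number of test types per commit
test_type_counts = df[flag_columns].sum(axis=1)
print(f"Multi-category commits: {(test_type_counts >= 2).sum()} ({(test_type_counts >= 2).mean():.1%})")
print(f"Average test types per commit: {test_type_counts.mean():.2f}")
```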
Data Schema
```python
{
    'commit_hash': str,       # Git commit SHA-1 hash (40 chars)
    'pr_url': str,            # GitHub PR URL (e.g., https://github.com/vllm-project/vllm/pull/12601)
    'has_lm_eval': bool,      # True if commit contains LM evaluation tests
    'has_performance': bool,  # True if commit contains performance benchmarks
    'has_serving': bool,      # True if commit contains serving tests
    'has_general_test': bool, # True if commit contains general tests
    'test_details': str,      # Pipe-separated test keywords (e.g., "PERF: ttft, throughput | TEST: ci, pytest")
    'timeline_text': str,     # Full PR timeline text with comments, reviews, and commit messages
    'extracted_at': str       # ISO timestamp when data was extracted
}
```
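
Because `test_details` packs several categories into a single pipe-separated string, it is often convenient to split it back into a mapping. The helper below is a hypothetical convenience function (not shipped with the dataset) that follows the format shown in the schema comment:

```python
def parse_test_details(test_details: str) -> dict:
    """Turn 'PERF: ttft, throughput | TEST: ci, pytest' into {'PERF': [...], 'TEST': [...]}."""
    parsed = {}
    for part in test_details.split('|'):
        part = part.strip()
        if ':' not in part:
            continue
        category, keywords = part.split(':', 1)
        parsed[category.strip()] = [kw.strip() for kw in keywords.split(',') if kw.strip()]
    return parsed

print(parse_test_details("PERF: ttft, throughput | TEST: ci, pytest"))
# {'PERF': ['ttft', 'throughput'], 'TEST': ['ci', 'pytest']}
```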
Usage Examples
Basic Loading
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-username/vllm-pr-test-classification")

# Explore the data
print(f"Total examples: {len(dataset['train'])}")
print(f"Features: {dataset['train'].features}")
print(f"First example: {dataset['train'][0]}")
```
Filtering Examples
```python
# Get commits with performance benchmarks
perf_commits = dataset['train'].filter(lambda x: x['has_performance'])
print(f"Performance commits: {len(perf_commits)}")

# Get commits with LM evaluation
lm_eval_commits = dataset['train'].filter(lambda x: x['has_lm_eval'])
print(f"LM evaluation commits: {len(lm_eval_commits)}")

# Get commits with multiple test types
multi_test = dataset['train'].filter(
    lambda x: sum([x['has_lm_eval'], x['has_performance'],
                   x['has_serving'], x['has_general_test']]) >= 3
)
print(f"Commits with 3+ test types: {len(multi_test)}")
```
Analysis Example
```python
import pandas as pd

# Convert to pandas for analysis
df = dataset['train'].to_pandas()

# Analyze test type combinations
test_combinations = df[['has_lm_eval', 'has_performance', 'has_serving', 'has_general_test']]
combination_counts = test_combinations.value_counts()
print("Most common test combinations:")
print(combination_counts.head())

# Extract performance metrics mentioned
perf_df = df[df['has_performance']]
print("\nCommits mentioning specific metrics:")
print(f"TTFT mentions: {perf_df['test_details'].str.contains('TTFT').sum()}")
print(f"Throughput mentions: {perf_df['test_details'].str.contains('throughput', case=False).sum()}")
```
Text Classification Training
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TrainingArguments, Trainer

# Prepare for multi-label classification
def preprocess_function(examples):
    # Create multi-label targets (floats, as required by the multi-label BCE loss)
    labels = []
    for i in range(len(examples['commit_hash'])):
        label = [
            float(examples['has_lm_eval'][i]),
            float(examples['has_performance'][i]),
            float(examples['has_serving'][i]),
            float(examples['has_general_test'][i])
        ]
        labels.append(label)

    # Tokenize timeline text
    tokenized = tokenizer(
        examples['timeline_text'],
        truncation=True,
        padding='max_length',
        max_length=512
    )
    tokenized['labels'] = labels
    return tokenized

# Train a classifier to identify test types from PR text
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=4,
    problem_type="multi_label_classification"
)
```
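
The block above only defines the preprocessing function and the model. A minimal, hypothetical continuation that tokenizes the split and runs the `Trainer` could look like the following; the output directory, batch size, and epoch count are arbitrary illustrative choices, not settings used to build this dataset:

```python
# Tokenize the training split; drop the raw columns so only model inputs remain
encoded = dataset['train'].map(
    preprocess_function,
    batched=True,
    remove_columns=dataset['train'].column_names,
)

training_args = TrainingArguments(
    output_dir="vllm-test-type-classifier",  # hypothetical output path
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

trainer = Trainer(model=model, args=training_args, train_dataset=encoded)
trainer.train()
```

With only 98 examples this is a toy setup; in practice you would hold out a validation split and track per-label metrics.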
Sample Data
Example 1: Performance-focused commit
```json
{
  "commit_hash": "fc542144c4477ffec1d3de6fa43e54f8fb5351e8",
  "pr_url": "https://github.com/vllm-project/vllm/pull/12563",
  "has_lm_eval": false,
  "has_performance": true,
  "has_serving": false,
  "has_general_test": true,
  "test_details": "PERF: tok/s, optimization | TEST: CI",
  "timeline_text": "[Guided decoding performance optimization]..."
}
```
Example 2: Comprehensive testing commit
```json
{
  "commit_hash": "aea94362c9bdd08ed2b346701bdc09d278e85f66",
  "pr_url": "https://github.com/vllm-project/vllm/pull/12287",
  "has_lm_eval": true,
  "has_performance": true,
  "has_serving": true,
  "has_general_test": true,
  "test_details": "LM_EVAL: lm_eval, gsm8k | PERF: TTFT, ITL | SERVING: vllm serve | TEST: test, CI",
  "timeline_text": "[Frontend][V1] Online serving performance improvements..."
}
```
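
Either example can be pulled straight out of the dataset by filtering on `commit_hash` (again assuming `dataset` has been loaded as in the Basic Loading example):

```python
example = dataset['train'].filter(
    lambda x: x['commit_hash'] == "aea94362c9bdd08ed2b346701bdc09d278e85f66"
)[0]

print(example['pr_url'])       # https://github.com/vllm-project/vllm/pull/12287
print(example['has_lm_eval'])  # True
```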
Potential Use Cases
- Test Type Classification: Train models to automatically classify test types in software PRs
- Testing Pattern Analysis: Study how different test types correlate in infrastructure projects
- Performance Optimization Research: Analyze performance testing trends in LLM serving systems
- CI/CD Insights: Understand continuous integration patterns in ML infrastructure projects
- Documentation Generation: Generate test documentation from PR timelines
- Code Review Automation: Build tools to automatically suggest relevant tests based on PR content
Source
This dataset was extracted from the vLLM project GitHub repository PR timelines. vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.
Updates
- v1.0.0 (2025-01): Initial release with 98 commits
License
This dataset is released under the MIT License. The vLLM project itself is licensed separately under the Apache License 2.0.
Citation
If you use this dataset in your research or applications, please cite:
```bibtex
@dataset{vllm_pr_test_classification_2025,
  title={vLLM PR Test Classification Dataset},
  author={vLLM Community Contributors},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/your-username/vllm-pr-test-classification},
  note={A dataset of 98 vLLM commits with test type classifications extracted from GitHub PR timelines}
}
```
Contributing
If you'd like to contribute to this dataset or report issues:
- Open an issue on the Hugging Face dataset repository
- Submit improvements via pull requests
- Share your use cases and findings
Limitations
- Dataset size is limited to 98 commits
- Timeline text may be truncated for very long PR discussions
- Classification is based on keyword matching, which may miss context-dependent references
- The dataset represents a snapshot from a specific period of vLLM development
Acknowledgments
Thanks to the vLLM project maintainers and contributors for their open-source work that made this dataset possible.