Update README.md

As title, a lot of third-party libraries which are important for running NN lack prebuilt wheels …

***Use At Your Own Risk. Check Official Releases Regularly To See Whether There Is Any Official Support For Your HW***

### Included Libraries

* Flash Attention
* Xformers (with cutlass/flash attention built-in)
* NATTEN
* SageAttention
* vLLM
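
A minimal smoke test, assuming the usual import names for these packages (`flash_attn`, `xformers`, `natten`, `sageattention`, `vllm`); adjust the names if your builds differ:

```bash
# Import every library listed above against the installed torch build.
# Module names are assumed from the usual package layouts of these projects.
python -c "import torch, flash_attn, xformers, natten, sageattention, vllm; print('imports ok, torch', torch.__version__)"
```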

### IMPORTANT NOTE

1. I only ensure that these wheels work on RTX 50 series (sm120) GPUs. If your platform mixes GPUs with different sm/cu archs, you may still need to compile them yourself.
2. Env
   * PyTorch: 2.7.0
   * …
   * Tested platform: Ubuntu 22.04 and 24.04
   * CPU arch: amd64 (x86-64)
3. Not all the wheels are fully functional (due to dependencies or the source implementation). For example, the cutlass w8a8 scaled mm does not work in vLLM, so you need `VLLM_TEST_FORCE_FP8_MARLIN=1` to make vLLM work normally with fp8 weight quantization. If you are using flash attention, you need `VLLM_FLASH_ATTN_VERSION=2` to make it work on the 5090 (see the example after this list).
4. If you meet any problem or need wheels for a specific setup, you can open a discussion, but I can't promise whether I will do it.
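
For example, a sketch of the workarounds from note 3 when serving an fp8-quantized model with vLLM; the model name and the `--quantization` flag are illustrative assumptions, not something this repo pins:

```bash
# Force the Marlin fp8 path instead of the cutlass w8a8 scaled mm (note 3)
export VLLM_TEST_FORCE_FP8_MARLIN=1
# Pin flash attention to version 2 so it works on the 5090 (note 3)
export VLLM_FLASH_ATTN_VERSION=2
# Placeholder model; replace with your own checkpoint
vllm serve Qwen/Qwen2.5-7B-Instruct --quantization fp8
```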

### Tips

* Install `triton==3.3.1` for better RTX 50 series support.
* Install `nvidia-nccl-cu12==2.26.5` for the correct multi-GPU deps for RTX 50 series.
* Torch 2.7.0 pins 2.26.2 in its requirements, which is not compatible with RTX 50 series, so you should install this from PyPI directly with `pip install "nvidia-nccl-cu12>2.26.2"` (a sketch of these install commands follows this list).
* I build all of these wheels with `python -m build -n -w .`, which is more suitable for modern Python packaging; I recommend that anyone who wants to compile these wheels themselves follow the same scheme (no matter whether the project uses pyproject.toml or setup.py, the `build` package works). See the build sketch below.
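
A sketch of the install commands from the first two tips, using exactly the version pins named above:

```bash
# Better RTX 50 series support in triton
pip install triton==3.3.1
# Replace the NCCL that torch 2.7.0 pins (2.26.2) with one that supports RTX 50 series
pip install "nvidia-nccl-cu12>2.26.2"   # e.g. 2.26.5
```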
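
And a minimal sketch of the build scheme from the last tip; the project directory is a placeholder, and because `-n` disables build isolation, the build dependencies (torch, setuptools, etc.) must already be installed in your environment:

```bash
pip install build                 # PEP 517 build front-end
cd some-project-to-build          # placeholder: e.g. one of the libraries listed above
python -m build -n -w .           # -n: no build isolation, -w: build a wheel only
```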