Is there any chance of getting a 3-bit AWQ or DWQ model? The Mac Studio (512 GB) has a 3-bit sweet spot.

opened by zletpm
MLX Community org

Is there any chance of getting a 3-bit AWQ or DWQ model? The Mac Studio (512 GB) has a 3-bit sweet spot.
The 4-bit model's prefill delay is too long when it is used in a RAG system, and it leaves insufficient RAM for other applications.
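
For anyone who wants to experiment while waiting, here is a minimal sketch of a plain 3-bit conversion (not AWQ or DWQ) using the mlx_lm Python API. The repo id and output path are placeholders, and the exact convert() keyword names may vary slightly between mlx-lm releases:

```python
# Minimal sketch: plain 3-bit group quantization with mlx-lm (not AWQ/DWQ).
# Assumptions: a recent mlx-lm release; the source repo and output path
# below are placeholders, not an official conversion recipe.
from mlx_lm import convert

convert(
    hf_path="deepseek-ai/DeepSeek-R1-0528",  # placeholder source model
    mlx_path="DeepSeek-R1-0528-3bit",        # local output directory
    quantize=True,
    q_bits=3,          # 3-bit weights for the 512 GB sweet spot
    q_group_size=64,   # default group size; see the group-size discussion below
)
```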

Did you find any?

MLX Community org

Are you interested in a mixed conversion, 5-bit/3-bit or 4-bit/2-bit? I am working on it at the moment, but I don't know if I will succeed. What --group-size would you advise? I failed with 64, unfortunately :/
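
One way to express such a mixed conversion is through the quant_predicate hook that recent mlx-lm versions accept. This is only a sketch: the callback signature and the layer-name patterns below are assumptions and should be checked against the mlx-lm version in use.

```python
# Sketch of a mixed 5-bit/3-bit conversion via a quantization predicate.
# Assumptions: a recent mlx-lm where the predicate receives the layer path,
# the module, and the model config, and may return per-layer settings as a
# dict (or True/False). The layer-name patterns are guesses, not a recipe.
from mlx_lm import convert


def mixed_5_3(path, module, config):
    # Keep sensitive layers (embeddings, output head, attention) at 5 bits;
    # quantize the bulk of the MLP/expert weights to 3 bits.
    if any(key in path for key in ("embed", "lm_head", "attn", "attention")):
        return {"bits": 5, "group_size": 64}
    return {"bits": 3, "group_size": 64}


convert(
    hf_path="deepseek-ai/DeepSeek-R1-0528",   # placeholder source model
    mlx_path="DeepSeek-R1-0528-mixed-5-3",    # placeholder output directory
    quantize=True,
    quant_predicate=mixed_5_3,
)
```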

MLX Community org

Are you interested in a mixed conversion, 5-bit/3-bit or 4-bit/2-bit? I am working on it at the moment, but I don't know if I will succeed. What --group-size would you advise? I failed with 64, unfortunately :/

A system's performance is often limited by its weakest component. I have compared mixed quantization against uniform (single bit-width) quantization, and the uniform approach consistently performs better on most tasks. I have not tested dynamic quantization, as it is quite time-consuming.

3-bit DWQ would be nice to run on my 2 x 256 GB M3 Ultra

MLX Community org

Is there any chance of getting a 3-bit AWQ or DWQ model? The Mac Studio (512 GB) has a 3-bit sweet spot.
The 4-bit model's prefill delay is too long when it is used in a RAG system, and it leaves insufficient RAM for other applications.

Is it? I'm able to use the 5-bit with a context of up to 16K, and the 4-bit up to 64K.

mlx_lm.generate --model mlx-community/DeepSeek-R1-0528-4bit --max-tokens 100

Add this to your tests.
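
To check whether prefill is really the bottleneck for a RAG-sized prompt, something like the following could be used. It is only a sketch: long_context.txt is a placeholder for the retrieved context, and in recent mlx-lm releases verbose=True reports prompt (prefill) and generation tokens per second separately.

```python
# Sketch: measure prefill behaviour on a RAG-sized prompt with mlx-lm.
# Assumptions: recent mlx-lm Python API; long_context.txt is a placeholder
# file holding the retrieved documents you would normally put in the prompt.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-R1-0528-4bit")

with open("long_context.txt") as f:
    context = f.read()

prompt = f"{context}\n\nSummarize the key points above."

# verbose=True prints prompt (prefill) and generation tokens/sec separately,
# which is the number that matters for the RAG prefill-delay complaint above.
generate(model, tokenizer, prompt=prompt, max_tokens=100, verbose=True)
```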
