ShuaiBai623 committed
Commit a612b31 · verified · 1 Parent(s): b8ee569

Update README.md

Files changed (1)
  1. README.md +48 -0
README.md CHANGED
@@ -76,6 +76,54 @@ Available in Dense and MoE architectures that scale from edge to cloud, with Ins
  **Pure text performance**
  ![](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl_2b_32b_text_thinking.jpg)

+
+ ## How to Use
+
+ To use these models with `llama.cpp`, make sure you are running the **latest version**: either [build it from source](https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md) or download the most recent [release](https://github.com/ggml-org/llama.cpp/releases/tag/b6907) for your device.
+
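+ As a reference, a minimal CPU-only build from source could look like the sketch below (the exact CMake invocation is an assumption based on the upstream build docs; GPU backends are enabled with extra flags such as `-DGGML_CUDA=ON` for NVIDIA hardware):
+
+ ```bash
+ # Clone llama.cpp and build it with CMake; the binaries end up in build/bin/
+ git clone https://github.com/ggml-org/llama.cpp
+ cd llama.cpp
+ cmake -B build
+ cmake --build build --config Release -j
+ ```
+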
+ You can run inference via the command line or through a web-based chat interface.
+
+ ### CLI Inference (`llama-mtmd-cli`)
+
+ For example, to run Qwen3-VL-32B-Thinking with an FP16 vision encoder and a Q8_0-quantized LLM:
+
+ ```bash
+ llama-mtmd-cli \
+     -m path/to/Qwen3VL-32B-Thinking-Q8_0.gguf \
+     --mmproj path/to/mmproj-Qwen3VL-32B-Thinking-F16.gguf \
+     --image test.jpeg \
+     -p "What is the publisher name of the newspaper?" \
+     --temp 1.0 --top-k 20 --top-p 0.95 -n 1024
+ ```
+
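+ If your build has GPU support, layers can typically be offloaded with the standard `-ngl` / `--n-gpu-layers` option. A sketch of the same call with full offload (assuming a GPU-enabled build) would be:
+
+ ```bash
+ # Same command as above, with all layers offloaded to the GPU
+ llama-mtmd-cli \
+     -m path/to/Qwen3VL-32B-Thinking-Q8_0.gguf \
+     --mmproj path/to/mmproj-Qwen3VL-32B-Thinking-F16.gguf \
+     --image test.jpeg \
+     -p "What is the publisher name of the newspaper?" \
+     --temp 1.0 --top-k 20 --top-p 0.95 -n 1024 \
+     -ngl 99
+ ```
+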
+ ### Web Chat (using `llama-server`)
+
+ To serve Qwen3-VL-235B-A22B-Instruct via an OpenAI-compatible API with a web UI:
+
+ ```bash
+ llama-server \
+     -m path/to/Qwen3VL-235B-A22B-Instruct-Q4_K_M-split-00001-of-00003.gguf \
+     --mmproj path/to/mmproj-Qwen3VL-235B-A22B-Instruct-Q8_0.gguf
+ ```
+
+ > **Tip**: For models split into multiple GGUF files, simply specify the first shard (e.g., `...-00001-of-00003.gguf`); llama.cpp will automatically load all parts.
+
+ Once the server is running, open `http://localhost:8080` in your browser to access the built-in chat interface, or send requests to the `/v1/chat/completions` endpoint. For more details, refer to the [official documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md).
+
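+ As a quick check that the endpoint responds, a minimal text-only request could look like the following (the prompt is only an illustration; the server README linked above describes the full request format, including how to attach images):
+
+ ```bash
+ # Minimal OpenAI-compatible chat completion request against the local server
+ curl http://localhost:8080/v1/chat/completions \
+     -H "Content-Type: application/json" \
+     -d '{
+           "messages": [
+             {"role": "user", "content": "Give a one-sentence summary of what Qwen3-VL can do."}
+           ]
+         }'
+ ```
+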
+ ### Quantize Your Custom Model
+
+ You can further quantize the FP16 weights to other precision levels. For example, to quantize the model to 2-bit:
+
+ ```bash
+ # Quantize to 2-bit (IQ2_XXS)
+ llama-quantize \
+     path/to/Qwen3VL-235B-A22B-Instruct-F16.gguf \
+     path/to/Qwen3VL-235B-A22B-Instruct-IQ2_XXS.gguf \
+     iq2_xxs 8
+ ```
+
+ For a full list of supported quantization types and detailed instructions, refer to the [quantization documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/quantize/README.md).
+
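+ The same pattern applies to the other quantization types. For instance, a 4-bit K-quant (matching the Q4_K_M shards used in the server example above) could be produced as sketched below; the output filename is illustrative:
+
+ ```bash
+ # Quantize to 4-bit Q4_K_M, using 8 threads
+ llama-quantize \
+     path/to/Qwen3VL-235B-A22B-Instruct-F16.gguf \
+     path/to/Qwen3VL-235B-A22B-Instruct-Q4_K_M.gguf \
+     Q4_K_M 8
+ ```
+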
  ### Generation Hyperparameters
  #### VL
  ```bash