Add files using upload-large-folder tool

- README.md +4 -4
- tokenizer_config.json +1 -1
README.md CHANGED
@@ -8,11 +8,11 @@ tags:
 base_model: Qwen/Qwen3-30B-A3B-Instruct-2507
 ---
 
-#
+# Gallardo994/Qwen3-30B-A3B-Instruct-2507
 
-This model [
+This model [Gallardo994/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Gallardo994/Qwen3-30B-A3B-Instruct-2507) was
 converted to MLX format from [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507)
-using mlx-lm version **0.26.
+using mlx-lm version **0.26.3**.
 
 ## Use with mlx
 
@@ -23,7 +23,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("
+model, tokenizer = load("Gallardo994/Qwen3-30B-A3B-Instruct-2507")
 
 prompt = "hello"
 
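The README hunk above is truncated at line 29, so only the start of the "Use with mlx" snippet is visible in the diff. For context, a minimal end-to-end sketch of the usage that snippet points at is given below; the chat-template handling and the `generate` call follow the standard scaffold that mlx-lm writes into converted model cards and are assumptions here, not lines confirmed by this diff.

```python
# Minimal sketch of the usage implied by the README snippet above.
# Everything past README line 29 (where the diff hunk cuts off) is assumed.
from mlx_lm import load, generate

model, tokenizer = load("Gallardo994/Qwen3-30B-A3B-Instruct-2507")

prompt = "hello"

# Qwen3 instruct models ship a chat template, so wrap the prompt in a
# chat message and add the generation prompt before generating.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```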
tokenizer_config.json CHANGED
@@ -231,7 +231,7 @@
 "eos_token": "<|im_end|>",
 "errors": "replace",
 "extra_special_tokens": {},
-"model_max_length":
+"model_max_length": 1010000,
 "pad_token": "<|endoftext|>",
 "split_special_tokens": false,
 "tokenizer_class": "Qwen2Tokenizer",
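The only tokenizer_config.json change sets "model_max_length" to 1010000. If you want to confirm that downstream loaders pick up the uploaded value, a minimal check is sketched below; it assumes the transformers library (not part of this commit) and that the repo is publicly downloadable.

```python
# Hypothetical sanity check (not part of the commit): load the tokenizer
# from the Hub and confirm the updated model_max_length value.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Gallardo994/Qwen3-30B-A3B-Instruct-2507")
print(tokenizer.model_max_length)  # expected to print 1010000 after this change
```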