littlebird13 committed on
Commit d232ad0 · verified · 1 Parent(s): 9146df8

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
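
This new attribute routes the ~11 MB `tokenizer.json` through Git LFS, like the `*.safetensors` shards, so a plain `git clone` without LFS yields pointer stubs. Downloading through `huggingface_hub` resolves LFS files automatically; a minimal sketch (the repo id follows the `from_pretrained` calls in the usage snippet below and may differ on your hub; on ModelScope, the `modelscope` SDK's `snapshot_download` is the analogue):

```python
from huggingface_hub import snapshot_download

# Fetches every file in the repo, materializing LFS-tracked entries
# (tokenizer.json, the *.safetensors shards) as real files, not pointer stubs.
local_dir = snapshot_download(repo_id="tongyi/Qwen3-Reranker-8B")
print(local_dir)
```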
README.md CHANGED
@@ -1,3 +1,148 @@
- ---
- license: apache-2.0
- ---

# Qwen3-Reranker-8B

<p align="center">
    <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>

## Highlights

The Qwen3 Embedding series is the latest generation of Qwen-family models, designed specifically for text embedding and ranking tasks. Building upon the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). The series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundation models, and delivers significant advances across text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
10
+
11
+ **Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks No.1 in the MTEB multilingual leaderboard (as of May 26, 2025, score 70.58), while the reranking model excels in various text retrieval scenarios.
12
+
13
+ **Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
14
+
15
+ **Multilingual Capability**: The Qwen3 Embedding series support over 100 languages, including various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
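
A minimal sketch of how the two modules combine into a retrieve-then-rerank pipeline. `embed_fn` and `rerank_fn` are placeholders for the model-specific recipes (the reranker recipe appears in the Usage section below); only the glue logic is shown:

```python
import numpy as np

def retrieve_then_rerank(query, corpus, embed_fn, rerank_fn, top_k=5):
    """Two-stage search: dense retrieval with an embedding model, then
    rescoring of the shortlist with a reranker.

    embed_fn(texts) -> np.ndarray of L2-normalized vectors (e.g. Qwen3-Embedding-*)
    rerank_fn(query, docs) -> list of relevance scores (e.g. Qwen3-Reranker-*)
    """
    doc_embs = embed_fn(corpus)                    # (N, d)
    query_emb = embed_fn([query])[0]               # (d,)
    sims = doc_embs @ query_emb                    # cosine similarity (vectors normalized)
    shortlist = np.argsort(-sims)[:top_k]          # cheap first-stage candidates
    scores = rerank_fn(query, [corpus[i] for i in shortlist])  # precise second stage
    order = sorted(range(len(shortlist)), key=lambda i: -scores[i])
    return [(corpus[shortlist[i]], scores[i]) for i in order]
```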

## Model Overview

**Qwen3-Reranker-8B** has the following features:

- Model Type: Text Reranking
- Supported Languages: 100+ languages
- Number of Parameters: 8B
- Context Length: 32k

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-Embedding/) and [GitHub](https://github.com/QwenLM/Qwen3-Embedding).

## Qwen3 Embedding Series Model List

| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruct Aware |
|------------|--------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://modelscope.cn/models/tongyi/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://modelscope.cn/models/tongyi/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://modelscope.cn/models/tongyi/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://modelscope.cn/models/tongyi/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://modelscope.cn/models/tongyi/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://modelscope.cn/models/tongyi/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |

> **Note**: `MRL Support` indicates whether the embedding model supports custom output dimensions for the final embedding. `Instruct Aware` indicates whether the embedding or reranking model supports customizing the input instruction for different tasks.

## Usage

With Transformers versions earlier than 4.51.0, you may encounter the following error:
```
KeyError: 'qwen3'
```
Upgrading `transformers` to 4.51.0 or later resolves this.
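
A quick preflight check you can run before loading the model (a sketch; it assumes the `packaging` library is available, which it is in most environments where `transformers` is installed):

```python
import transformers
from packaging.version import Version

# Fail fast if the installed transformers is too old to know the "qwen3"
# architecture (registered in 4.51.0).
assert Version(transformers.__version__) >= Version("4.51.0"), transformers.__version__
```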

### Transformers Usage

```python
# Requires transformers>=4.51.0
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def format_instruction(instruction, query, doc):
    # Pack the instruction, query, and candidate document into the template
    # the reranker was trained on.
    if instruction is None:
        instruction = 'Given a web search query, retrieve relevant passages that answer the query'
    output = "<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}".format(
        instruction=instruction, query=query, doc=doc)
    return output

def process_inputs(pairs):
    # Tokenize without padding first so the chat prefix/suffix can be spliced
    # around each sequence, then left-pad the batch to a common length.
    inputs = tokenizer(
        pairs, padding=False, truncation='longest_first',
        return_attention_mask=False,
        max_length=max_length - len(prefix_tokens) - len(suffix_tokens)
    )
    for i, ele in enumerate(inputs['input_ids']):
        inputs['input_ids'][i] = prefix_tokens + ele + suffix_tokens
    inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt", max_length=max_length)
    for key in inputs:
        inputs[key] = inputs[key].to(model.device)
    return inputs

@torch.no_grad()
def compute_logits(inputs, **kwargs):
    # Read the logits at the final position and renormalize over the
    # "yes"/"no" tokens; the "yes" probability is the relevance score.
    batch_scores = model(**inputs).logits[:, -1, :]
    true_vector = batch_scores[:, token_true_id]
    false_vector = batch_scores[:, token_false_id]
    batch_scores = torch.stack([false_vector, true_vector], dim=1)
    batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
    scores = batch_scores[:, 1].exp().tolist()
    return scores

tokenizer = AutoTokenizer.from_pretrained("tongyi/Qwen3-Reranker-8B", padding_side='left')
model = AutoModelForCausalLM.from_pretrained("tongyi/Qwen3-Reranker-8B").eval()
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForCausalLM.from_pretrained("tongyi/Qwen3-Reranker-8B", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()

token_false_id = tokenizer.convert_tokens_to_ids("no")
token_true_id = tokenizer.convert_tokens_to_ids("yes")
max_length = 8192

prefix = "<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\".<|im_end|>\n<|im_start|>user\n"
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
prefix_tokens = tokenizer.encode(prefix, add_special_tokens=False)
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)

task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
    "What is the capital of China?",
    "Explain gravity",
]

documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]

# Tokenize the input texts and score each (query, document) pair
inputs = process_inputs(pairs)
scores = compute_logits(inputs)

print("scores: ", scores)
```

📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, omitting the `instruct` on the query side leads to a drop in retrieval performance of approximately 1% to 5%. An example of a task-specific instruction follows.
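
A minimal sketch of a customized instruction, reusing `format_instruction` and the scoring helpers from the snippet above. The instruction wording here is an illustrative assumption, not a prescribed prompt:

```python
# Hypothetical code-search instruction; any concise task description works.
code_task = 'Given a code search query, retrieve function implementations that satisfy it'

code_pairs = [format_instruction(
    code_task,
    "binary search over a sorted list",
    "def bsearch(a, x):\n    lo, hi = 0, len(a)\n    ...",
)]
print(compute_logits(process_inputs(code_pairs)))
```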

## Evaluation

| Model | Param | MTEB-R | CMTEB-R | MMTEB-R | MLDR | MTEB-Code | FollowIR |
|------------------------------------|-------|--------|---------|---------|------|-----------|----------|
| **Qwen3-Embedding-0.6B** | 0.6B | 61.82 | 71.02 | 64.64 | 50.26 | 75.41 | 5.09 |
| Jina-multilingual-reranker-v2-base | 0.3B | 58.22 | 63.37 | 63.73 | 39.66 | 58.98 | -0.68 |
| gte-multilingual-reranker-base | 0.3B | 59.51 | 74.08 | 59.44 | 66.33 | 54.18 | -1.64 |
| BGE-reranker-v2-m3 | 0.6B | 57.03 | 72.16 | 58.36 | 59.51 | 41.38 | -0.01 |
| **Qwen3-Reranker-0.6B** | 0.6B | 65.80 | 71.31 | 66.36 | 67.28 | 73.42 | 5.41 |
| **Qwen3-Reranker-4B** | 4B | **69.76** | 75.94 | 72.74 | 69.97 | 81.20 | **14.84** |
| **Qwen3-Reranker-8B** | 8B | 69.02 | **77.45** | **72.94** | **70.19** | **81.22** | 8.05 |

> **Note**:
> - Evaluation results for reranking models. We use the retrieval subsets of MTEB(eng, v2), MTEB(cmn, v1), MMTEB, and MTEB(Code), denoted MTEB-R, CMTEB-R, MMTEB-R, and MTEB-Code respectively.
> - All scores come from our runs, reranking the top-100 candidates retrieved by the dense embedding model [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B); the top row shows that retriever without reranking, as a baseline.

## Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{qwen3-embedding,
    title = {Qwen3-Embedding},
    url = {https://qwenlm.github.io/blog/qwen3/},
    author = {Qwen Team},
    month = {May},
    year = {2025}
}
```

added_tokens.json ADDED
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
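
These IDs are exactly what `convert_tokens_to_ids` returns in the README snippet above. A quick sanity check (a sketch, assuming the tokenizer loads from this repo):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("tongyi/Qwen3-Reranker-8B")
# Added tokens resolve to the fixed IDs listed above...
assert tok.convert_tokens_to_ids("<think>") == 151667
assert tok.convert_tokens_to_ids("<|im_end|>") == 151645
# ...while "yes"/"no", used for relevance scoring, are ordinary vocabulary tokens.
print(tok.convert_tokens_to_ids("yes"), tok.convert_tokens_to_ids("no"))
```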
config.json ADDED
@@ -0,0 +1,30 @@
{
  "architectures": [
    "Qwen3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 12288,
  "max_position_embeddings": 40960,
  "max_window_layers": 36,
  "model_type": "qwen3",
  "num_attention_heads": 32,
  "num_hidden_layers": 36,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.51.3",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151669
}
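
The headline "8B" can be sanity-checked from these fields alone. A back-of-the-envelope sketch (it ignores the tiny RMSNorm weights; `tie_word_embeddings` is false, so the LM head is counted separately):

```python
V, H, L = 151669, 4096, 36   # vocab_size, hidden_size, num_hidden_layers
I, Hd = 12288, 128           # intermediate_size, head_dim
n_q, n_kv = 32, 8            # num_attention_heads, num_key_value_heads

attn = H * (n_q * Hd) * 2 + H * (n_kv * Hd) * 2  # q/o plus k/v projections
mlp = 3 * H * I                                  # gate, up, down projections
embed = 2 * V * H                                # embed_tokens + untied lm_head
total = L * (attn + mlp) + embed
print(f"{total / 1e9:.2f}B parameters")          # ~8.19B
```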
generation_config.json ADDED
@@ -0,0 +1,13 @@
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "temperature": 0.6,
  "top_k": 20,
  "top_p": 0.95,
  "transformers_version": "4.51.3"
}
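
These sampling defaults only apply when the model is used as a generator via `model.generate`; the reranking recipe in the README reads logits from a single forward pass and never samples, so they do not affect relevance scores. A quick way to inspect them (a sketch):

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("tongyi/Qwen3-Reranker-8B")
# Sampling settings used by model.generate(); irrelevant to logits-based scoring.
print(gen_cfg.temperature, gen_cfg.top_k, gen_cfg.top_p)  # 0.6 20 0.95
```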
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:22cdfea4a13b7b3e866573800eeeb638fc38962940adf631d06dc03befed047a
size 4027618768
model-00002-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d2163b74137e35b4614bd2aa5bf27bcb07de4ca61c6962495feb968385eb0df8
size 4060268160
model-00003-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a5038caa78c817e8acce6806104869675938a33fd4e60ed038e9931d390d6989
size 4043508680
model-00004-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:247f85538c5996d4c296291b0e4004f618c9b17ca8cdc25d1fc726567eb15803
size 3003274088
model-00005-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ba41b93c2e4ec8339ad16b000bc977fde196aeac054956cbfc8c0186ee6d4cf
size 1242472576
model.safetensors.index.json ADDED
@@ -0,0 +1,406 @@
{
  "metadata": {
    "total_size": 16377096192
  },
  "weight_map": {
    "lm_head.weight": "model-00005-of-00005.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.k_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.q_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.k_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.q_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.17.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.k_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.q_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.self_attn.k_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.28.self_attn.q_norm.weight": "model-00003-of-00005.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.k_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.q_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.k_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.q_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.k_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.q_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.k_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.q_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.k_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.q_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.k_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.q_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.self_attn.k_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.self_attn.q_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.self_attn.k_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.self_attn.q_norm.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.35.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.self_attn.k_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.self_attn.q_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.self_attn.k_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.self_attn.q_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.self_attn.k_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.self_attn.q_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.7.self_attn.k_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.7.self_attn.q_norm.weight": "model-00001-of-00005.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.self_attn.k_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.self_attn.q_norm.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.norm.weight": "model-00004-of-00005.safetensors"
  }
}
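
The index's `total_size` counts tensor bytes only; each `.safetensors` shard additionally carries a small JSON header, so the shard files sum to slightly more. A sketch of the consistency check, with sizes copied from the LFS pointers above:

```python
shard_sizes = [4027618768, 4060268160, 4043508680, 3003274088, 1242472576]
total_size = 16377096192  # from model.safetensors.index.json

overhead = sum(shard_sizes) - total_size
print(sum(shard_sizes), overhead)  # 16377142272, 46080 bytes of safetensors headers
```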
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
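
The `pad_token` matters for the usage snippet above: the tokenizer is loaded with `padding_side='left'`, so padding goes before each sequence and position `-1` is always the real final token whose logits are scored. A minimal illustration of why (a sketch):

```python
from transformers import AutoTokenizer

# With left padding a batch row looks like [PAD, PAD, t1, ..., tN], so
# logits[:, -1, :] is the prediction after tN. With right padding it would
# be [t1, ..., tN, PAD, PAD] and logits[:, -1, :] would score a pad position.
tokenizer = AutoTokenizer.from_pretrained("tongyi/Qwen3-Reranker-8B", padding_side='left')
batch = tokenizer.pad({"input_ids": [[1, 2, 3], [4, 5]]}, padding=True, return_tensors="pt")
print(batch["input_ids"])  # shorter row is padded on the left with <|endoftext|> (151643)
```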
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,240 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151665": {
      "content": "<tool_response>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151666": {
      "content": "</tool_response>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151667": {
      "content": "<think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151668": {
      "content": "</think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "chat_template": "{%- if tools %}\n    {{- '<|im_start|>system\\n' }}\n    {%- if messages[0].role == 'system' %}\n        {{- messages[0].content + '\\n\\n' }}\n    {%- endif %}\n    {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n    {%- for tool in tools %}\n        {{- \"\\n\" }}\n        {{- tool | tojson }}\n    {%- endfor %}\n    {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n    {%- if messages[0].role == 'system' %}\n        {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n    {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n    {%- set index = (messages|length - 1) - loop.index0 %}\n    {%- if ns.multi_step_tool and message.role == \"user\" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n        {%- set ns.multi_step_tool = false %}\n        {%- set ns.last_query_index = index %}\n    {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n    {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n        {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n    {%- elif message.role == \"assistant\" %}\n        {%- set content = message.content %}\n        {%- set reasoning_content = '' %}\n        {%- if message.reasoning_content is defined and message.reasoning_content is not none %}\n            {%- set reasoning_content = message.reasoning_content %}\n        {%- else %}\n            {%- if '</think>' in message.content %}\n                {%- set content = message.content.split('</think>')[-1].lstrip('\\n') %}\n                {%- set reasoning_content = message.content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n            {%- endif %}\n        {%- endif %}\n        {%- if loop.index0 > ns.last_query_index %}\n            {%- if loop.last or (not loop.last and reasoning_content) %}\n                {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n            {%- else %}\n                {{- '<|im_start|>' + message.role + '\\n' + content }}\n            {%- endif %}\n        {%- else %}\n            {{- '<|im_start|>' + message.role + '\\n' + content }}\n        {%- endif %}\n        {%- if message.tool_calls %}\n            {%- for tool_call in message.tool_calls %}\n                {%- if (loop.first and content) or (not loop.first) %}\n                    {{- '\\n' }}\n                {%- endif %}\n                {%- if tool_call.function %}\n                    {%- set tool_call = tool_call.function %}\n                {%- endif %}\n                {{- '<tool_call>\\n{\"name\": \"' }}\n                {{- tool_call.name }}\n                {{- '\", \"arguments\": ' }}\n                {%- if tool_call.arguments is string %}\n                    {{- tool_call.arguments }}\n                {%- else %}\n                    {{- tool_call.arguments | tojson }}\n                {%- endif %}\n                {{- '}\\n</tool_call>' }}\n            {%- endfor %}\n        {%- endif %}\n        {{- '<|im_end|>\\n' }}\n    {%- elif message.role == \"tool\" %}\n        {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n            {{- '<|im_start|>user' }}\n        {%- endif %}\n        {{- '\\n<tool_response>\\n' }}\n        {{- message.content }}\n        {{- '\\n</tool_response>' }}\n        {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n            {{- '<|im_end|>\\n' }}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|im_start|>assistant\\n' }}\n    {%- if enable_thinking is defined and enable_thinking is false %}\n        {{- '<think>\\n\\n</think>\\n\\n' }}\n    {%- endif %}\n{%- endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
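
The hard-coded `prefix`/`suffix` strings in the README snippet mirror what this chat template produces for a system+user exchange with thinking disabled; you can generate them with `apply_chat_template` instead of string literals. A sketch (the output should match the literals up to the system/user text; `enable_thinking` is forwarded to the template as an extra kwarg):

```python
messages = [
    {"role": "system", "content": "Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\"."},
    {"role": "user", "content": "<Instruct>: ...\n<Query>: ...\n<Document>: ..."},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
# Ends with "<|im_start|>assistant\n<think>\n\n</think>\n\n" -- the suffix used above.
print(text)
```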
vocab.json ADDED
The diff for this file is too large to render. See raw diff