youssefkhalil320 committed on
Commit ada854d · verified · 1 Parent(s): 2d746a1

Upload folder using huggingface_hub

checkpoint-2500/1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
checkpoint-2500/README.md ADDED
@@ -0,0 +1,358 @@
+ ---
+ base_model: sentence-transformers/all-mpnet-base-v2
+ language:
+ - en
+ library_name: sentence-transformers
+ license: apache-2.0
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:1363306
+ - loss:AnglELoss
+ widget:
+ - source_sentence: labneh
+   sentences:
+   - iftar
+   - bathing suit
+   - coffee cup
+ - source_sentence: Velvet flock Veil
+   sentences:
+   - mermaid purse
+   - veil
+   - mobile bag
+ - source_sentence: Red lipstick
+   sentences:
+   - chemise dress
+   - tote
+   - rouge
+ - source_sentence: Unisex Travel bag
+   sentences:
+   - spf
+   - basic vega ring
+   - travel backpack
+ - source_sentence: jeremy hush book
+   sentences:
+   - chinese jumper
+   - perfume
+   - home automation device
+ ---
+
+ # all-mpnet-base-v3-pair_score
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 9a3225965996d404b775526de6dbfe85d3368642 -->
+ - **Maximum Sequence Length:** 384 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ - **Language:** en
+ - **License:** apache-2.0
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
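+
+ The `Pooling` module above performs mean pooling (`pooling_mode_mean_tokens: True`, matching `1_Pooling/config.json`), followed by L2 normalization. As a hedged illustration of what that step computes, here is a minimal mean-pooling sketch written directly against `transformers`; the example sentence is an arbitrary assumption:
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ # Load the underlying MPNet encoder (the base model, not this fine-tune).
+ tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2")
+ encoder = AutoModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")
+
+ batch = tokenizer(["red lipstick"], padding=True, truncation=True, return_tensors="pt")
+ with torch.no_grad():
+     token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)
+
+ # Mean pooling: average the token embeddings, masking out padding positions.
+ mask = batch["attention_mask"].unsqueeze(-1).float()  # (batch, seq_len, 1)
+ embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
+
+ # The Normalize() module then L2-normalizes, so cosine similarity equals dot product.
+ embedding = torch.nn.functional.normalize(embedding, p=2, dim=1)
+ print(embedding.shape)  # torch.Size([1, 768])
+ ```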
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+     'jeremy hush book',
+     'chinese jumper',
+     'perfume',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
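+
+ Since the card lists semantic search among the supported tasks, the following sketch ranks a small corpus against a query with `sentence_transformers.util.semantic_search`. The corpus, query, and placeholder model id are illustrative assumptions:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id, as above
+
+ corpus = ["travel backpack", "coffee cup", "veil", "perfume"]  # illustrative corpus
+ corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
+ query_embedding = model.encode("Unisex Travel bag", convert_to_tensor=True)
+
+ # Retrieve the top-2 corpus entries by cosine similarity.
+ hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
+ for hit in hits:
+     print(corpus[hit["corpus_id"]], round(hit["score"], 4))
+ ```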
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 128
+ - `per_device_eval_batch_size`: 128
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 2
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 128
+ - `per_device_eval_batch_size`: 128
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 2
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
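+
+ For reference, the non-default hyperparameters above map directly onto `SentenceTransformerTrainingArguments`. The sketch below shows how a comparable run could be set up with the `AnglELoss` named in the tags; the tiny inline dataset and output directory are illustrative assumptions (the actual 1,363,306-pair dataset is not published here), and a real run would use a held-out eval split:
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import (
+     SentenceTransformer,
+     SentenceTransformerTrainer,
+     SentenceTransformerTrainingArguments,
+ )
+ from sentence_transformers.losses import AnglELoss
+
+ model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
+
+ # Illustrative stand-in for the (sentence1, sentence2, score) pair-score data.
+ train_dataset = Dataset.from_dict({
+     "sentence1": ["Red lipstick", "Unisex Travel bag"],
+     "sentence2": ["rouge", "travel backpack"],
+     "score": [0.9, 0.8],
+ })
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="all-mpnet-base-v3-pair_score",  # illustrative path
+     num_train_epochs=2,
+     per_device_train_batch_size=128,
+     per_device_eval_batch_size=128,
+     learning_rate=2e-5,
+     warmup_ratio=0.1,
+     fp16=True,
+     eval_strategy="steps",
+ )
+
+ trainer = SentenceTransformerTrainer(
+     model=model,
+     args=args,
+     train_dataset=train_dataset,
+     eval_dataset=train_dataset,  # illustrative; substitute a proper eval split
+     loss=AnglELoss(model),
+ )
+ trainer.train()
+ ```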
+
+ ### Training Logs
+ | Epoch  | Step | Training Loss |
+ |:------:|:----:|:-------------:|
+ | 0.0094 | 100  | 16.2337       |
+ | 0.0188 | 200  | 13.5901       |
+ | 0.0282 | 300  | 9.8565        |
+ | 0.0376 | 400  | 8.3332        |
+ | 0.0469 | 500  | 8.1261        |
+ | 0.0563 | 600  | 8.0697        |
+ | 0.0657 | 700  | 8.0298        |
+ | 0.0751 | 800  | 8.033         |
+ | 0.0845 | 900  | 7.9858        |
+ | 0.0939 | 1000 | 8.012         |
+ | 0.1033 | 1100 | 7.9745        |
+ | 0.1127 | 1200 | 8.0091        |
+ | 0.1221 | 1300 | 8.0221        |
+ | 0.1314 | 1400 | 7.9583        |
+ | 0.1408 | 1500 | 8.0031        |
+ | 0.1502 | 1600 | 7.9985        |
+ | 0.1596 | 1700 | 7.9647        |
+ | 0.1690 | 1800 | 7.9857        |
+ | 0.1784 | 1900 | 7.9806        |
+ | 0.1878 | 2000 | 7.9761        |
+ | 0.1972 | 2100 | 7.9696        |
+ | 0.2066 | 2200 | 8.0014        |
+ | 0.2159 | 2300 | 7.9546        |
+ | 0.2253 | 2400 | 7.9874        |
+ | 0.2347 | 2500 | 7.9846        |
+
+ ### Framework Versions
+ - Python: 3.8.10
+ - Sentence Transformers: 3.1.1
+ - Transformers: 4.45.2
+ - PyTorch: 2.4.1+cu118
+ - Accelerate: 1.0.1
+ - Datasets: 3.0.1
+ - Tokenizers: 0.20.3
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### AnglELoss
+ ```bibtex
+ @misc{li2023angleoptimized,
+     title={AnglE-optimized Text Embeddings},
+     author={Xianming Li and Jing Li},
+     year={2023},
+     eprint={2309.12871},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
checkpoint-2500/config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "_name_or_path": "sentence-transformers/all-mpnet-base-v2",
+   "architectures": [
+     "MPNetModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "mpnet",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.2",
+   "vocab_size": 30527
+ }
checkpoint-2500/config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.1",
+     "transformers": "4.45.2",
+     "pytorch": "2.4.1+cu118"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
checkpoint-2500/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9735ec331b62163b6392b5c105ec51ebd8362dd78001317c3576c7430efd3a6e
+ size 437967672
checkpoint-2500/modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
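
`modules.json` is what lets `SentenceTransformer` rebuild the Transformer → Pooling → Normalize pipeline described in the README. A minimal sketch, assuming this `checkpoint-2500/` folder has been downloaded locally (the path is an assumption):

```python
from sentence_transformers import SentenceTransformer

# Loads the three modules listed in modules.json, in index order.
model = SentenceTransformer("./checkpoint-2500")
print(model)  # Transformer -> Pooling -> Normalize
```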
checkpoint-2500/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b686972473ae333eccf6f85e723bc24a5ae48b41832a5ab632a9a9a66d796500
+ size 871331770
checkpoint-2500/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90b464171ac5b0f2c24cc16f8ca89e3b78f7d6d9d5f80b5518a0ea3ca17cc564
+ size 14244
checkpoint-2500/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:369dbe7ad7b364b67a78a58cfd770b81c990376900ec29e3d002dd22e99e8c29
+ size 1064
checkpoint-2500/sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 384,
+   "do_lower_case": false
+ }
checkpoint-2500/special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
checkpoint-2500/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-2500/tokenizer_config.json ADDED
@@ -0,0 +1,72 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "104": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "30526": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "do_lower_case": true,
+   "eos_token": "</s>",
+   "mask_token": "<mask>",
+   "max_length": 128,
+   "model_max_length": 384,
+   "pad_to_multiple_of": null,
+   "pad_token": "<pad>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "</s>",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "MPNetTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
checkpoint-2500/trainer_state.json ADDED
@@ -0,0 +1,208 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.23471974462491785,
+   "eval_steps": 5000,
+   "global_step": 2500,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.009388789784996713,
+       "grad_norm": 87.77811431884766,
+       "learning_rate": 9.009854528390429e-07,
+       "loss": 16.2337,
+       "step": 100
+     },
+     {
+       "epoch": 0.018777579569993427,
+       "grad_norm": 85.86430358886719,
+       "learning_rate": 1.8395119662130456e-06,
+       "loss": 13.5901,
+       "step": 200
+     },
+     {
+       "epoch": 0.02816636935499014,
+       "grad_norm": 14.885255813598633,
+       "learning_rate": 2.7592679493195683e-06,
+       "loss": 9.8565,
+       "step": 300
+     },
+     {
+       "epoch": 0.03755515913998685,
+       "grad_norm": 6.9691972732543945,
+       "learning_rate": 3.6977944626935713e-06,
+       "loss": 8.3332,
+       "step": 400
+     },
+     {
+       "epoch": 0.04694394892498357,
+       "grad_norm": 5.612818241119385,
+       "learning_rate": 4.6363209760675744e-06,
+       "loss": 8.1261,
+       "step": 500
+     },
+     {
+       "epoch": 0.05633273870998028,
+       "grad_norm": 4.705409526824951,
+       "learning_rate": 5.574847489441577e-06,
+       "loss": 8.0697,
+       "step": 600
+     },
+     {
+       "epoch": 0.06572152849497699,
+       "grad_norm": 4.337332725524902,
+       "learning_rate": 6.51337400281558e-06,
+       "loss": 8.0298,
+       "step": 700
+     },
+     {
+       "epoch": 0.0751103182799737,
+       "grad_norm": 3.6314213275909424,
+       "learning_rate": 7.451900516189583e-06,
+       "loss": 8.033,
+       "step": 800
+     },
+     {
+       "epoch": 0.08449910806497042,
+       "grad_norm": 3.4845075607299805,
+       "learning_rate": 8.390427029563585e-06,
+       "loss": 7.9858,
+       "step": 900
+     },
+     {
+       "epoch": 0.09388789784996714,
+       "grad_norm": 5.188210487365723,
+       "learning_rate": 9.328953542937589e-06,
+       "loss": 8.012,
+       "step": 1000
+     },
+     {
+       "epoch": 0.10327668763496385,
+       "grad_norm": 3.0830442905426025,
+       "learning_rate": 1.0267480056311592e-05,
+       "loss": 7.9745,
+       "step": 1100
+     },
+     {
+       "epoch": 0.11266547741996057,
+       "grad_norm": 3.4729278087615967,
+       "learning_rate": 1.1206006569685594e-05,
+       "loss": 8.0091,
+       "step": 1200
+     },
+     {
+       "epoch": 0.12205426720495728,
+       "grad_norm": 2.329235076904297,
+       "learning_rate": 1.2144533083059597e-05,
+       "loss": 8.0221,
+       "step": 1300
+     },
+     {
+       "epoch": 0.13144305698995398,
+       "grad_norm": 2.7225279808044434,
+       "learning_rate": 1.3083059596433601e-05,
+       "loss": 7.9583,
+       "step": 1400
+     },
+     {
+       "epoch": 0.1408318467749507,
+       "grad_norm": 2.012805938720703,
+       "learning_rate": 1.4021586109807603e-05,
+       "loss": 8.0031,
+       "step": 1500
+     },
+     {
+       "epoch": 0.1502206365599474,
+       "grad_norm": 2.9397523403167725,
+       "learning_rate": 1.4960112623181606e-05,
+       "loss": 7.9985,
+       "step": 1600
+     },
+     {
+       "epoch": 0.15960942634494413,
+       "grad_norm": 2.356337308883667,
+       "learning_rate": 1.589863913655561e-05,
+       "loss": 7.9647,
+       "step": 1700
+     },
+     {
+       "epoch": 0.16899821612994084,
+       "grad_norm": 2.6846818923950195,
+       "learning_rate": 1.6837165649929613e-05,
+       "loss": 7.9857,
+       "step": 1800
+     },
+     {
+       "epoch": 0.17838700591493756,
+       "grad_norm": 2.0188565254211426,
+       "learning_rate": 1.7775692163303613e-05,
+       "loss": 7.9806,
+       "step": 1900
+     },
+     {
+       "epoch": 0.18777579569993427,
+       "grad_norm": 4.030488014221191,
+       "learning_rate": 1.8714218676677617e-05,
+       "loss": 7.9761,
+       "step": 2000
+     },
+     {
+       "epoch": 0.197164585484931,
+       "grad_norm": 4.183101654052734,
+       "learning_rate": 1.965274519005162e-05,
+       "loss": 7.9696,
+       "step": 2100
+     },
+     {
+       "epoch": 0.2065533752699277,
+       "grad_norm": 1.4769889116287231,
+       "learning_rate": 1.9934275728965626e-05,
+       "loss": 8.0014,
+       "step": 2200
+     },
+     {
+       "epoch": 0.21594216505492442,
+       "grad_norm": 2.1914358139038086,
+       "learning_rate": 1.9829951489228525e-05,
+       "loss": 7.9546,
+       "step": 2300
+     },
+     {
+       "epoch": 0.22533095483992113,
+       "grad_norm": 22.55516815185547,
+       "learning_rate": 1.972562724949142e-05,
+       "loss": 7.9874,
+       "step": 2400
+     },
+     {
+       "epoch": 0.23471974462491785,
+       "grad_norm": 1.635116457939148,
+       "learning_rate": 1.962130300975432e-05,
+       "loss": 7.9846,
+       "step": 2500
+     }
+   ],
+   "logging_steps": 100,
+   "max_steps": 21302,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 2,
+   "save_steps": 500,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 0.0,
+   "train_batch_size": 128,
+   "trial_name": null,
+   "trial_params": null
+ }
checkpoint-2500/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:006196a6760605807ae7aab04b3ff5762b98610dc83070d5ff9eb10989362210
+ size 5496
checkpoint-2500/vocab.txt ADDED
The diff for this file is too large to render. See raw diff