modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 06:31:37) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 06:31:07) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
MayBashendy/ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4 | MayBashendy | 2024-11-06T16:17:51Z | 161 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-06T15:44:54Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4950
- Qwk: 0.6411
- Mse: 0.4950
- Rmse: 0.7035
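For reference, the quadratic weighted kappa (Qwk), MSE, and RMSE reported above can be computed from a set of predictions with standard scikit-learn utilities. The sketch below is illustrative only; the `y_true`/`y_pred` arrays are placeholders, not this model's actual evaluation data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def regression_report(y_true, y_pred):
    # Quadratic weighted kappa expects integer labels, so round continuous predictions first.
    qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
    mse = mean_squared_error(y_true, y_pred)
    return {"qwk": qwk, "mse": mse, "rmse": float(np.sqrt(mse))}

# Placeholder values, for illustration only.
print(regression_report(np.array([1, 2, 3, 4]), np.array([1.2, 1.9, 3.4, 3.8])))
```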
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
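These settings roughly correspond to the following 🤗 `TrainingArguments`; this is an approximation for orientation, not the exact training script (which is not included in the card).

```python
from transformers import TrainingArguments

# Approximate configuration matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the transformers defaults.
)
```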
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 0.0063 | 2 | 10.1861 | 0.0 | 10.1861 | 3.1916 |
| No log | 0.0126 | 4 | 8.5953 | -0.0005 | 8.5953 | 2.9318 |
| No log | 0.0189 | 6 | 6.9159 | 0.0051 | 6.9159 | 2.6298 |
| No log | 0.0252 | 8 | 5.5130 | 0.0037 | 5.5130 | 2.3480 |
| No log | 0.0315 | 10 | 4.3816 | 0.0018 | 4.3816 | 2.0932 |
| No log | 0.0379 | 12 | 3.5082 | 0.0492 | 3.5082 | 1.8730 |
| No log | 0.0442 | 14 | 2.7686 | 0.0128 | 2.7686 | 1.6639 |
| No log | 0.0505 | 16 | 2.1322 | 0.0118 | 2.1322 | 1.4602 |
| No log | 0.0568 | 18 | 1.6261 | 0.0079 | 1.6261 | 1.2752 |
| No log | 0.0631 | 20 | 1.2562 | 0.1722 | 1.2562 | 1.1208 |
| No log | 0.0694 | 22 | 1.0333 | 0.0420 | 1.0333 | 1.0165 |
| No log | 0.0757 | 24 | 0.8915 | 0.0316 | 0.8915 | 0.9442 |
| No log | 0.0820 | 26 | 0.8074 | 0.0316 | 0.8074 | 0.8986 |
| No log | 0.0883 | 28 | 0.7660 | 0.0316 | 0.7660 | 0.8752 |
| No log | 0.0946 | 30 | 0.7689 | 0.0542 | 0.7689 | 0.8769 |
| No log | 0.1009 | 32 | 0.9386 | 0.0937 | 0.9386 | 0.9688 |
| No log | 0.1073 | 34 | 0.8347 | 0.0771 | 0.8347 | 0.9136 |
| No log | 0.1136 | 36 | 0.8293 | 0.4385 | 0.8293 | 0.9106 |
| No log | 0.1199 | 38 | 0.8916 | 0.3628 | 0.8916 | 0.9442 |
| No log | 0.1262 | 40 | 0.8068 | 0.0212 | 0.8068 | 0.8982 |
| No log | 0.1325 | 42 | 0.8411 | 0.0344 | 0.8411 | 0.9171 |
| No log | 0.1388 | 44 | 0.8499 | 0.0344 | 0.8499 | 0.9219 |
| No log | 0.1451 | 46 | 0.8047 | 0.0107 | 0.8047 | 0.8970 |
| No log | 0.1514 | 48 | 0.7906 | 0.0107 | 0.7906 | 0.8892 |
| No log | 0.1577 | 50 | 0.7428 | 0.0317 | 0.7428 | 0.8619 |
| No log | 0.1640 | 52 | 0.7615 | 0.0511 | 0.7615 | 0.8726 |
| No log | 0.1703 | 54 | 0.7432 | 0.0792 | 0.7432 | 0.8621 |
| No log | 0.1767 | 56 | 0.6753 | 0.0610 | 0.6753 | 0.8218 |
| No log | 0.1830 | 58 | 0.6924 | 0.0317 | 0.6924 | 0.8321 |
| No log | 0.1893 | 60 | 0.7336 | 0.0730 | 0.7336 | 0.8565 |
| No log | 0.1956 | 62 | 0.7216 | 0.0213 | 0.7216 | 0.8495 |
| No log | 0.2019 | 64 | 0.6734 | 0.0826 | 0.6734 | 0.8206 |
| No log | 0.2082 | 66 | 0.8115 | 0.1971 | 0.8115 | 0.9008 |
| No log | 0.2145 | 68 | 1.0608 | 0.2342 | 1.0608 | 1.0300 |
| No log | 0.2208 | 70 | 0.8848 | 0.2293 | 0.8848 | 0.9406 |
| No log | 0.2271 | 72 | 0.6445 | 0.1331 | 0.6445 | 0.8028 |
| No log | 0.2334 | 74 | 0.6672 | 0.0803 | 0.6672 | 0.8168 |
| No log | 0.2397 | 76 | 0.6616 | 0.0754 | 0.6616 | 0.8134 |
| No log | 0.2461 | 78 | 0.6149 | 0.1067 | 0.6149 | 0.7842 |
| No log | 0.2524 | 80 | 0.6896 | 0.1973 | 0.6896 | 0.8304 |
| No log | 0.2587 | 82 | 0.7505 | 0.2167 | 0.7505 | 0.8663 |
| No log | 0.2650 | 84 | 0.6389 | 0.1883 | 0.6389 | 0.7993 |
| No log | 0.2713 | 86 | 0.6107 | 0.2957 | 0.6107 | 0.7815 |
| No log | 0.2776 | 88 | 0.6234 | 0.3088 | 0.6234 | 0.7895 |
| No log | 0.2839 | 90 | 0.5901 | 0.2657 | 0.5901 | 0.7681 |
| No log | 0.2902 | 92 | 0.6248 | 0.1786 | 0.6248 | 0.7905 |
| No log | 0.2965 | 94 | 0.6419 | 0.2214 | 0.6419 | 0.8012 |
| No log | 0.3028 | 96 | 0.5860 | 0.2699 | 0.5860 | 0.7655 |
| No log | 0.3091 | 98 | 0.5766 | 0.2956 | 0.5766 | 0.7593 |
| No log | 0.3155 | 100 | 0.5547 | 0.3623 | 0.5547 | 0.7448 |
| No log | 0.3218 | 102 | 0.5514 | 0.4222 | 0.5514 | 0.7426 |
| No log | 0.3281 | 104 | 0.5460 | 0.4061 | 0.5460 | 0.7389 |
| No log | 0.3344 | 106 | 0.5756 | 0.3134 | 0.5756 | 0.7587 |
| No log | 0.3407 | 108 | 0.6144 | 0.3095 | 0.6144 | 0.7838 |
| No log | 0.3470 | 110 | 0.5301 | 0.4421 | 0.5301 | 0.7280 |
| No log | 0.3533 | 112 | 0.5429 | 0.4684 | 0.5429 | 0.7368 |
| No log | 0.3596 | 114 | 0.5177 | 0.4759 | 0.5177 | 0.7195 |
| No log | 0.3659 | 116 | 0.5241 | 0.4151 | 0.5241 | 0.7240 |
| No log | 0.3722 | 118 | 0.5069 | 0.4161 | 0.5069 | 0.7120 |
| No log | 0.3785 | 120 | 0.5293 | 0.4872 | 0.5293 | 0.7275 |
| No log | 0.3849 | 122 | 0.5688 | 0.4517 | 0.5688 | 0.7542 |
| No log | 0.3912 | 124 | 0.5780 | 0.2445 | 0.5780 | 0.7603 |
| No log | 0.3975 | 126 | 0.5334 | 0.4100 | 0.5334 | 0.7304 |
| No log | 0.4038 | 128 | 0.5552 | 0.5686 | 0.5552 | 0.7451 |
| No log | 0.4101 | 130 | 0.5369 | 0.5723 | 0.5369 | 0.7327 |
| No log | 0.4164 | 132 | 0.5145 | 0.3755 | 0.5145 | 0.7173 |
| No log | 0.4227 | 134 | 0.5181 | 0.4368 | 0.5181 | 0.7198 |
| No log | 0.4290 | 136 | 0.5175 | 0.4105 | 0.5175 | 0.7194 |
| No log | 0.4353 | 138 | 0.5481 | 0.5205 | 0.5481 | 0.7403 |
| No log | 0.4416 | 140 | 0.5561 | 0.4941 | 0.5561 | 0.7457 |
| No log | 0.4479 | 142 | 0.5308 | 0.5019 | 0.5308 | 0.7286 |
| No log | 0.4543 | 144 | 0.5421 | 0.4929 | 0.5421 | 0.7363 |
| No log | 0.4606 | 146 | 0.5182 | 0.4383 | 0.5182 | 0.7198 |
| No log | 0.4669 | 148 | 0.5113 | 0.4444 | 0.5113 | 0.7151 |
| No log | 0.4732 | 150 | 0.5292 | 0.3937 | 0.5292 | 0.7275 |
| No log | 0.4795 | 152 | 0.5153 | 0.4278 | 0.5153 | 0.7179 |
| No log | 0.4858 | 154 | 0.4959 | 0.4610 | 0.4959 | 0.7042 |
| No log | 0.4921 | 156 | 0.4822 | 0.4742 | 0.4822 | 0.6944 |
| No log | 0.4984 | 158 | 0.5207 | 0.5700 | 0.5207 | 0.7216 |
| No log | 0.5047 | 160 | 0.6361 | 0.5602 | 0.6361 | 0.7976 |
| No log | 0.5110 | 162 | 0.5405 | 0.5354 | 0.5405 | 0.7352 |
| No log | 0.5174 | 164 | 0.5536 | 0.5347 | 0.5536 | 0.7440 |
| No log | 0.5237 | 166 | 0.5308 | 0.5142 | 0.5308 | 0.7285 |
| No log | 0.5300 | 168 | 0.5827 | 0.5080 | 0.5827 | 0.7634 |
| No log | 0.5363 | 170 | 0.6033 | 0.5139 | 0.6033 | 0.7767 |
| No log | 0.5426 | 172 | 0.7514 | 0.5038 | 0.7514 | 0.8669 |
| No log | 0.5489 | 174 | 0.7327 | 0.5197 | 0.7327 | 0.8560 |
| No log | 0.5552 | 176 | 0.5563 | 0.5225 | 0.5563 | 0.7459 |
| No log | 0.5615 | 178 | 0.5157 | 0.4842 | 0.5157 | 0.7181 |
| No log | 0.5678 | 180 | 0.5430 | 0.5432 | 0.5430 | 0.7369 |
| No log | 0.5741 | 182 | 0.5386 | 0.5786 | 0.5386 | 0.7339 |
| No log | 0.5804 | 184 | 0.4900 | 0.5768 | 0.4900 | 0.7000 |
| No log | 0.5868 | 186 | 0.5030 | 0.5908 | 0.5030 | 0.7092 |
| No log | 0.5931 | 188 | 0.4526 | 0.5804 | 0.4526 | 0.6728 |
| No log | 0.5994 | 190 | 0.5105 | 0.4823 | 0.5105 | 0.7145 |
| No log | 0.6057 | 192 | 0.5870 | 0.4220 | 0.5870 | 0.7662 |
| No log | 0.6120 | 194 | 0.5511 | 0.4319 | 0.5511 | 0.7423 |
| No log | 0.6183 | 196 | 0.4500 | 0.5472 | 0.4500 | 0.6708 |
| No log | 0.6246 | 198 | 0.4526 | 0.5562 | 0.4526 | 0.6728 |
| No log | 0.6309 | 200 | 0.5135 | 0.5754 | 0.5135 | 0.7166 |
| No log | 0.6372 | 202 | 0.6373 | 0.5419 | 0.6373 | 0.7983 |
| No log | 0.6435 | 204 | 0.5640 | 0.5393 | 0.5640 | 0.7510 |
| No log | 0.6498 | 206 | 0.5375 | 0.5351 | 0.5375 | 0.7332 |
| No log | 0.6562 | 208 | 0.5511 | 0.5560 | 0.5511 | 0.7423 |
| No log | 0.6625 | 210 | 0.5414 | 0.5693 | 0.5414 | 0.7358 |
| No log | 0.6688 | 212 | 0.5304 | 0.5811 | 0.5304 | 0.7283 |
| No log | 0.6751 | 214 | 0.4758 | 0.5939 | 0.4758 | 0.6898 |
| No log | 0.6814 | 216 | 0.4437 | 0.5481 | 0.4437 | 0.6661 |
| No log | 0.6877 | 218 | 0.4368 | 0.5673 | 0.4368 | 0.6609 |
| No log | 0.6940 | 220 | 0.4946 | 0.6281 | 0.4946 | 0.7033 |
| No log | 0.7003 | 222 | 0.4564 | 0.5958 | 0.4564 | 0.6756 |
| No log | 0.7066 | 224 | 0.4662 | 0.5795 | 0.4662 | 0.6828 |
| No log | 0.7129 | 226 | 0.5187 | 0.6018 | 0.5187 | 0.7202 |
| No log | 0.7192 | 228 | 0.5179 | 0.6018 | 0.5179 | 0.7196 |
| No log | 0.7256 | 230 | 0.4883 | 0.6011 | 0.4883 | 0.6988 |
| No log | 0.7319 | 232 | 0.4581 | 0.5898 | 0.4581 | 0.6768 |
| No log | 0.7382 | 234 | 0.5164 | 0.6064 | 0.5164 | 0.7186 |
| No log | 0.7445 | 236 | 0.4880 | 0.6120 | 0.4880 | 0.6986 |
| No log | 0.7508 | 238 | 0.4608 | 0.6049 | 0.4608 | 0.6788 |
| No log | 0.7571 | 240 | 0.5627 | 0.6490 | 0.5627 | 0.7502 |
| No log | 0.7634 | 242 | 0.8123 | 0.6725 | 0.8123 | 0.9013 |
| No log | 0.7697 | 244 | 0.6433 | 0.6624 | 0.6433 | 0.8021 |
| No log | 0.7760 | 246 | 0.4387 | 0.5914 | 0.4387 | 0.6624 |
| No log | 0.7823 | 248 | 0.4507 | 0.5951 | 0.4507 | 0.6713 |
| No log | 0.7886 | 250 | 0.6574 | 0.6299 | 0.6574 | 0.8108 |
| No log | 0.7950 | 252 | 0.9073 | 0.5748 | 0.9073 | 0.9525 |
| No log | 0.8013 | 254 | 0.7567 | 0.5976 | 0.7567 | 0.8699 |
| No log | 0.8076 | 256 | 0.4780 | 0.5993 | 0.4780 | 0.6914 |
| No log | 0.8139 | 258 | 0.4653 | 0.4804 | 0.4653 | 0.6821 |
| No log | 0.8202 | 260 | 0.4593 | 0.5099 | 0.4593 | 0.6777 |
| No log | 0.8265 | 262 | 0.5150 | 0.5981 | 0.5150 | 0.7176 |
| No log | 0.8328 | 264 | 0.7188 | 0.5631 | 0.7188 | 0.8478 |
| No log | 0.8391 | 266 | 0.6870 | 0.5665 | 0.6870 | 0.8289 |
| No log | 0.8454 | 268 | 0.5103 | 0.6082 | 0.5103 | 0.7144 |
| No log | 0.8517 | 270 | 0.4610 | 0.4952 | 0.4610 | 0.6790 |
| No log | 0.8580 | 272 | 0.5092 | 0.4066 | 0.5092 | 0.7136 |
| No log | 0.8644 | 274 | 0.4640 | 0.4861 | 0.4640 | 0.6812 |
| No log | 0.8707 | 276 | 0.4945 | 0.5916 | 0.4945 | 0.7032 |
| No log | 0.8770 | 278 | 0.6582 | 0.5572 | 0.6582 | 0.8113 |
| No log | 0.8833 | 280 | 0.6694 | 0.5610 | 0.6694 | 0.8181 |
| No log | 0.8896 | 282 | 0.5728 | 0.5254 | 0.5728 | 0.7568 |
| No log | 0.8959 | 284 | 0.5221 | 0.4152 | 0.5221 | 0.7226 |
| No log | 0.9022 | 286 | 0.4807 | 0.4751 | 0.4807 | 0.6933 |
| No log | 0.9085 | 288 | 0.4549 | 0.5473 | 0.4549 | 0.6745 |
| No log | 0.9148 | 290 | 0.4556 | 0.5597 | 0.4556 | 0.6750 |
| No log | 0.9211 | 292 | 0.4582 | 0.5556 | 0.4582 | 0.6769 |
| No log | 0.9274 | 294 | 0.4645 | 0.5505 | 0.4645 | 0.6816 |
| No log | 0.9338 | 296 | 0.4678 | 0.5381 | 0.4678 | 0.6840 |
| No log | 0.9401 | 298 | 0.4749 | 0.5534 | 0.4749 | 0.6892 |
| No log | 0.9464 | 300 | 0.5625 | 0.5975 | 0.5625 | 0.7500 |
| No log | 0.9527 | 302 | 0.5900 | 0.5826 | 0.5900 | 0.7681 |
| No log | 0.9590 | 304 | 0.4926 | 0.5950 | 0.4926 | 0.7019 |
| No log | 0.9653 | 306 | 0.4816 | 0.4778 | 0.4816 | 0.6940 |
| No log | 0.9716 | 308 | 0.4785 | 0.5246 | 0.4785 | 0.6917 |
| No log | 0.9779 | 310 | 0.4967 | 0.5915 | 0.4967 | 0.7048 |
| No log | 0.9842 | 312 | 0.4777 | 0.5359 | 0.4777 | 0.6912 |
| No log | 0.9905 | 314 | 0.5052 | 0.4469 | 0.5052 | 0.7108 |
| No log | 0.9968 | 316 | 0.4870 | 0.4692 | 0.4870 | 0.6978 |
| No log | 1.0032 | 318 | 0.4959 | 0.6014 | 0.4959 | 0.7042 |
| No log | 1.0095 | 320 | 0.5971 | 0.6622 | 0.5971 | 0.7727 |
| No log | 1.0158 | 322 | 0.6224 | 0.6527 | 0.6224 | 0.7889 |
| No log | 1.0221 | 324 | 0.5090 | 0.6125 | 0.5090 | 0.7134 |
| No log | 1.0284 | 326 | 0.4859 | 0.6161 | 0.4859 | 0.6970 |
| No log | 1.0347 | 328 | 0.5575 | 0.6373 | 0.5575 | 0.7466 |
| No log | 1.0410 | 330 | 0.6631 | 0.6354 | 0.6631 | 0.8143 |
| No log | 1.0473 | 332 | 0.7880 | 0.6128 | 0.7880 | 0.8877 |
| No log | 1.0536 | 334 | 0.6328 | 0.6471 | 0.6328 | 0.7955 |
| No log | 1.0599 | 336 | 0.4833 | 0.5926 | 0.4833 | 0.6952 |
| No log | 1.0662 | 338 | 0.4764 | 0.5915 | 0.4764 | 0.6902 |
| No log | 1.0726 | 340 | 0.4879 | 0.6097 | 0.4879 | 0.6985 |
| No log | 1.0789 | 342 | 0.5004 | 0.6328 | 0.5004 | 0.7074 |
| No log | 1.0852 | 344 | 0.4558 | 0.5696 | 0.4558 | 0.6752 |
| No log | 1.0915 | 346 | 0.4638 | 0.5143 | 0.4638 | 0.6811 |
| No log | 1.0978 | 348 | 0.4590 | 0.5340 | 0.4590 | 0.6775 |
| No log | 1.1041 | 350 | 0.4556 | 0.5999 | 0.4556 | 0.6750 |
| No log | 1.1104 | 352 | 0.4521 | 0.5984 | 0.4521 | 0.6724 |
| No log | 1.1167 | 354 | 0.4603 | 0.5902 | 0.4603 | 0.6784 |
| No log | 1.1230 | 356 | 0.5085 | 0.6098 | 0.5085 | 0.7131 |
| No log | 1.1293 | 358 | 0.5851 | 0.6319 | 0.5851 | 0.7649 |
| No log | 1.1356 | 360 | 0.5377 | 0.6091 | 0.5377 | 0.7333 |
| No log | 1.1420 | 362 | 0.4673 | 0.5626 | 0.4673 | 0.6836 |
| No log | 1.1483 | 364 | 0.4611 | 0.5643 | 0.4611 | 0.6790 |
| No log | 1.1546 | 366 | 0.4560 | 0.5333 | 0.4560 | 0.6753 |
| No log | 1.1609 | 368 | 0.4761 | 0.4842 | 0.4761 | 0.6900 |
| No log | 1.1672 | 370 | 0.4581 | 0.5306 | 0.4581 | 0.6768 |
| No log | 1.1735 | 372 | 0.4492 | 0.5837 | 0.4492 | 0.6702 |
| No log | 1.1798 | 374 | 0.4585 | 0.6097 | 0.4585 | 0.6771 |
| No log | 1.1861 | 376 | 0.4451 | 0.5503 | 0.4451 | 0.6672 |
| No log | 1.1924 | 378 | 0.4524 | 0.5227 | 0.4524 | 0.6726 |
| No log | 1.1987 | 380 | 0.4546 | 0.5008 | 0.4546 | 0.6742 |
| No log | 1.2050 | 382 | 0.4735 | 0.5442 | 0.4735 | 0.6881 |
| No log | 1.2114 | 384 | 0.5067 | 0.5698 | 0.5067 | 0.7118 |
| No log | 1.2177 | 386 | 0.4892 | 0.4913 | 0.4892 | 0.6994 |
| No log | 1.2240 | 388 | 0.4975 | 0.5099 | 0.4975 | 0.7053 |
| No log | 1.2303 | 390 | 0.6492 | 0.6296 | 0.6492 | 0.8057 |
| No log | 1.2366 | 392 | 0.7328 | 0.6114 | 0.7328 | 0.8561 |
| No log | 1.2429 | 394 | 0.5539 | 0.6157 | 0.5539 | 0.7443 |
| No log | 1.2492 | 396 | 0.5265 | 0.4173 | 0.5265 | 0.7256 |
| No log | 1.2555 | 398 | 0.6128 | 0.3532 | 0.6128 | 0.7828 |
| No log | 1.2618 | 400 | 0.5354 | 0.4003 | 0.5354 | 0.7317 |
| No log | 1.2681 | 402 | 0.4935 | 0.5464 | 0.4935 | 0.7025 |
| No log | 1.2744 | 404 | 0.5745 | 0.6324 | 0.5745 | 0.7579 |
| No log | 1.2808 | 406 | 0.5167 | 0.6236 | 0.5167 | 0.7188 |
| No log | 1.2871 | 408 | 0.4620 | 0.5427 | 0.4620 | 0.6797 |
| No log | 1.2934 | 410 | 0.4585 | 0.5055 | 0.4585 | 0.6772 |
| No log | 1.2997 | 412 | 0.4691 | 0.5926 | 0.4691 | 0.6849 |
| No log | 1.3060 | 414 | 0.5962 | 0.6760 | 0.5962 | 0.7722 |
| No log | 1.3123 | 416 | 0.5452 | 0.6593 | 0.5452 | 0.7384 |
| No log | 1.3186 | 418 | 0.4661 | 0.6018 | 0.4661 | 0.6827 |
| No log | 1.3249 | 420 | 0.4503 | 0.5347 | 0.4503 | 0.6710 |
| No log | 1.3312 | 422 | 0.4594 | 0.5752 | 0.4594 | 0.6778 |
| No log | 1.3375 | 424 | 0.5623 | 0.6484 | 0.5623 | 0.7499 |
| No log | 1.3438 | 426 | 0.5562 | 0.6429 | 0.5562 | 0.7458 |
| No log | 1.3502 | 428 | 0.4545 | 0.5922 | 0.4545 | 0.6742 |
| No log | 1.3565 | 430 | 0.4446 | 0.5818 | 0.4446 | 0.6668 |
| No log | 1.3628 | 432 | 0.5001 | 0.6472 | 0.5001 | 0.7072 |
| No log | 1.3691 | 434 | 0.5172 | 0.6548 | 0.5172 | 0.7192 |
| No log | 1.3754 | 436 | 0.4511 | 0.5994 | 0.4511 | 0.6716 |
| No log | 1.3817 | 438 | 0.4721 | 0.5433 | 0.4721 | 0.6871 |
| No log | 1.3880 | 440 | 0.4686 | 0.6124 | 0.4686 | 0.6846 |
| No log | 1.3943 | 442 | 0.5272 | 0.6602 | 0.5272 | 0.7261 |
| No log | 1.4006 | 444 | 0.4777 | 0.6232 | 0.4777 | 0.6912 |
| No log | 1.4069 | 446 | 0.4745 | 0.4864 | 0.4745 | 0.6888 |
| No log | 1.4132 | 448 | 0.4813 | 0.4603 | 0.4813 | 0.6938 |
| No log | 1.4196 | 450 | 0.4566 | 0.5352 | 0.4566 | 0.6757 |
| No log | 1.4259 | 452 | 0.5087 | 0.6295 | 0.5087 | 0.7132 |
| No log | 1.4322 | 454 | 0.5272 | 0.6279 | 0.5272 | 0.7261 |
| No log | 1.4385 | 456 | 0.4695 | 0.5742 | 0.4695 | 0.6852 |
| No log | 1.4448 | 458 | 0.4613 | 0.5300 | 0.4613 | 0.6792 |
| No log | 1.4511 | 460 | 0.4807 | 0.4327 | 0.4807 | 0.6933 |
| No log | 1.4574 | 462 | 0.4712 | 0.4831 | 0.4712 | 0.6865 |
| No log | 1.4637 | 464 | 0.5262 | 0.6207 | 0.5262 | 0.7254 |
| No log | 1.4700 | 466 | 0.5679 | 0.6533 | 0.5679 | 0.7536 |
| No log | 1.4763 | 468 | 0.4943 | 0.6319 | 0.4943 | 0.7030 |
| No log | 1.4826 | 470 | 0.4548 | 0.5373 | 0.4548 | 0.6744 |
| No log | 1.4890 | 472 | 0.4529 | 0.5669 | 0.4529 | 0.6730 |
| No log | 1.4953 | 474 | 0.4979 | 0.6578 | 0.4979 | 0.7056 |
| No log | 1.5016 | 476 | 0.5480 | 0.6783 | 0.5480 | 0.7402 |
| No log | 1.5079 | 478 | 0.4760 | 0.5831 | 0.4760 | 0.6900 |
| No log | 1.5142 | 480 | 0.4790 | 0.4885 | 0.4790 | 0.6921 |
| No log | 1.5205 | 482 | 0.4733 | 0.4948 | 0.4733 | 0.6879 |
| No log | 1.5268 | 484 | 0.4930 | 0.6107 | 0.4930 | 0.7021 |
| No log | 1.5331 | 486 | 0.6387 | 0.6998 | 0.6387 | 0.7992 |
| No log | 1.5394 | 488 | 0.5770 | 0.6947 | 0.5770 | 0.7596 |
| No log | 1.5457 | 490 | 0.4507 | 0.5730 | 0.4507 | 0.6713 |
| No log | 1.5521 | 492 | 0.4761 | 0.4890 | 0.4761 | 0.6900 |
| No log | 1.5584 | 494 | 0.4524 | 0.5010 | 0.4524 | 0.6726 |
| No log | 1.5647 | 496 | 0.4512 | 0.5824 | 0.4512 | 0.6717 |
| No log | 1.5710 | 498 | 0.5386 | 0.6594 | 0.5386 | 0.7339 |
| 0.5 | 1.5773 | 500 | 0.5441 | 0.6588 | 0.5441 | 0.7376 |
| 0.5 | 1.5836 | 502 | 0.5217 | 0.6468 | 0.5217 | 0.7223 |
| 0.5 | 1.5899 | 504 | 0.4504 | 0.5555 | 0.4504 | 0.6711 |
| 0.5 | 1.5962 | 506 | 0.4459 | 0.5713 | 0.4459 | 0.6677 |
| 0.5 | 1.6025 | 508 | 0.4642 | 0.6069 | 0.4642 | 0.6813 |
| 0.5 | 1.6088 | 510 | 0.4950 | 0.6411 | 0.4950 | 0.7035 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF | mattritchey | 2024-11-06T16:14:40Z | 7 | 0 | null | ["gguf", "HelpingAI", "Emotionally-Intelligent", "EQ-focused- EQ-focused", "Conversational", "SLM", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:HelpingAI/HelpingAI2-3B", "base_model:quantized:HelpingAI/HelpingAI2-3B", "license:other", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-06T16:14:29Z |
---
license: other
license_name: helpingai
license_link: https://huggingface.co/OEvortex/HelpingAI-3B-v3/blob/main/LICENSE.md
pipeline_tag: text-generation
language:
- en
tags:
- HelpingAI
- Emotionally-Intelligent
- EQ-focused- EQ-focused
- Conversational
- SLM
- llama-cpp
- gguf-my-repo
base_model: OEvortex/HelpingAI-3B-reloaded
---
# mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF
This model was converted to GGUF format from [`OEvortex/HelpingAI-3B-reloaded`](https://huggingface.co/OEvortex/HelpingAI-3B-reloaded) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OEvortex/HelpingAI-3B-reloaded) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF --hf-file helpingai-3b-reloaded-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF --hf-file helpingai-3b-reloaded-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF --hf-file helpingai-3b-reloaded-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF --hf-file helpingai-3b-reloaded-q4_k_m.gguf -c 2048
```
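The GGUF file can also be used from Python. Below is a minimal sketch using the third-party `llama-cpp-python` bindings; this is not part of the original card and assumes the package (and `huggingface_hub`) is installed.

```python
from llama_cpp import Llama

# Download the quantized GGUF from the Hub and load it (filename taken from the CLI examples above).
llm = Llama.from_pretrained(
    repo_id="mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF",
    filename="helpingai-3b-reloaded-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```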
|
camidenecken/RoBERTa-RM1-v2-2-rm-v32 | camidenecken | 2024-11-06T16:08:06Z | 181 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-06T16:07:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
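In the absence of an official snippet, a minimal loading sketch is shown below. It assumes the checkpoint follows the standard RoBERTa sequence-classification layout indicated by the repository tags; the repository id is taken from the listing above, and the meaning of the output labels is not documented.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "camidenecken/RoBERTa-RM1-v2-2-rm-v32"  # repo id from the listing above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Example input text.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # label semantics are undocumented in this card
print(logits)
```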
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camidenecken/RoBERTa-RM1-v2-2-rm-v31 | camidenecken | 2024-11-06T16:05:57Z | 183 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-06T16:05:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camidenecken/RoBERTa-RM1-v2-2-rm-v30 | camidenecken | 2024-11-06T16:03:50Z | 181 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-06T16:03:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lorraine013/openmathinstruct2-llama-3.1-8B-Instruct-lr5-ep2 | Lorraine013 | 2024-11-06T16:00:05Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-06T14:33:04Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openmathinstruct2-llama-3.1-8B-Instruct-lr5-ep2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openmathinstruct2-llama-3.1-8B-Instruct-lr5-ep2
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the openmathinstruct2_cot_20k_train dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7634
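A minimal chat-style generation sketch is given below. It assumes the checkpoint loads with the standard transformers causal-LM API and the chat template inherited from Llama-3.1-8B-Instruct; the prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Lorraine013/openmathinstruct2-llama-3.1-8B-Instruct-lr5-ep2"  # from the listing above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Solve step by step: what is 12 * 17?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```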
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8177 | 0.4808 | 500 | 0.7823 |
| 0.7708 | 0.9615 | 1000 | 0.7572 |
| 0.5513 | 1.4423 | 1500 | 0.7693 |
| 0.5059 | 1.9231 | 2000 | 0.7637 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.1
|
camidenecken/RoBERTa-RM1-v2-2-rm-v27 | camidenecken | 2024-11-06T15:57:21Z | 182 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-06T15:57:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camidenecken/RoBERTa-RM1-v2-2-rm-v26 | camidenecken | 2024-11-06T15:55:15Z | 180 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-06T15:54:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
belisarius/FLUX.1-dev-Mann-E-gguf | belisarius | 2024-11-06T15:46:26Z | 108 | 4 | null | ["gguf", "base_model:mann-e/mann-e_flux", "base_model:quantized:mann-e/mann-e_flux", "license:other", "region:us"] | null | 2024-11-05T08:26:06Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
base_model:
- mann-e/mann-e_flux
---
UNet only; the CLIP-L and T5-XXL text encoders are not included.
Quantized versions of the Mann-E model: https://huggingface.co/mann-e/mann-e_flux
Made using this guide: https://github.com/city96/ComfyUI-GGUF/tree/main/tools
|
richiebailey/whisper-large-v3-turbo | richiebailey | 2024-11-06T15:44:40Z | 89 | 0 | transformers | ["transformers", "safetensors", "whisper", "automatic-speech-recognition", "audio", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "arxiv:2212.04356", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:mit", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-11-06T15:37:29Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
license: mit
tags:
- audio
- automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
base_model:
- openai/whisper-large-v3
library_name: transformers
---
# Whisper
Whisper is a state-of-the-art model for automatic speech recognition (ASR) and speech translation, proposed in the paper
[Robust Speech Recognition via Large-Scale Weak Supervision](https://huggingface.co/papers/2212.04356) by Alec Radford
et al. from OpenAI. Trained on >5M hours of labeled data, Whisper demonstrates a strong ability to generalise to many
datasets and domains in a zero-shot setting.
Whisper large-v3-turbo is a fine-tuned version of a pruned [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3). In other words, it's the exact same model, except that the number of decoding layers has been reduced from 32 to 4.
As a result, the model is way faster, at the expense of a minor quality degradation. You can find more details about it [in this GitHub discussion](https://github.com/openai/whisper/discussions/2363).
**Disclaimer**: Content for this model card has partly been written by the 🤗 Hugging Face team, and partly copied and
pasted from the original model card.
## Usage
Whisper large-v3-turbo is supported in Hugging Face 🤗 Transformers. To run the model, first install the Transformers
library. For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub, and
🤗 Accelerate to reduce the model loading time:
```bash
pip install --upgrade pip
pip install --upgrade transformers datasets[audio] accelerate
```
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe audio files of arbitrary length:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3-turbo"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:
```python
result = pipe("audio.mp3")
```
Multiple audio files can be transcribed in parallel by specifying them as a list and setting the `batch_size` parameter:
```python
result = pipe(["audio_1.mp3", "audio_2.mp3"], batch_size=2)
```
Transformers is compatible with all Whisper decoding strategies, such as temperature fallback and condition on previous
tokens. The following example demonstrates how to enable these heuristics:
```python
generate_kwargs = {
"max_new_tokens": 448,
"num_beams": 1,
"condition_on_prev_tokens": False,
"compression_ratio_threshold": 1.35, # zlib compression ratio threshold (in token space)
"temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
"logprob_threshold": -1.0,
"no_speech_threshold": 0.6,
"return_timestamps": True,
}
result = pipe(sample, generate_kwargs=generate_kwargs)
```
Whisper predicts the language of the source audio automatically. If the source audio language is known *a priori*, it
can be passed as an argument to the pipeline:
```python
result = pipe(sample, generate_kwargs={"language": "english"})
```
By default, Whisper performs the task of *speech transcription*, where the source audio language is the same as the target
text language. To perform *speech translation*, where the target text is in English, set the task to `"translate"`:
```python
result = pipe(sample, generate_kwargs={"task": "translate"})
```
Finally, the model can be made to predict timestamps. For sentence-level timestamps, pass the `return_timestamps` argument:
```python
result = pipe(sample, return_timestamps=True)
print(result["chunks"])
```
And for word-level timestamps:
```python
result = pipe(sample, return_timestamps="word")
print(result["chunks"])
```
The above arguments can be used in isolation or in combination. For example, to perform the task of speech transcription
where the source audio is in French, and we want to return sentence-level timestamps, the following can be used:
```python
result = pipe(sample, return_timestamps=True, generate_kwargs={"language": "french", "task": "transcribe"})
print(result["chunks"])
```
<details>
<summary> For more control over the generation parameters, use the model + processor API directly: </summary>
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3-turbo"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = dataset[0]["audio"]
inputs = processor(
sample["array"],
sampling_rate=sample["sampling_rate"],
return_tensors="pt",
truncation=False,
padding="longest",
return_attention_mask=True,
)
inputs = inputs.to(device, dtype=torch_dtype)
gen_kwargs = {
"max_new_tokens": 448,
"num_beams": 1,
"condition_on_prev_tokens": False,
"compression_ratio_threshold": 1.35, # zlib compression ratio threshold (in token space)
"temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
"logprob_threshold": -1.0,
"no_speech_threshold": 0.6,
"return_timestamps": True,
}
pred_ids = model.generate(**inputs, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=False)
print(pred_text)
```
</details>
## Additional Speed & Memory Improvements
You can apply additional speed and memory improvements to Whisper to further reduce the inference time and VRAM
requirements.
### Chunked Long-Form
Whisper has a receptive field of 30 seconds. To transcribe audio longer than this, one of two long-form algorithms is
required:
1. **Sequential:** uses a "sliding window" for buffered inference, transcribing 30-second slices one after the other
2. **Chunked:** splits long audio files into shorter ones (with a small overlap between segments), transcribes each segment independently, and stitches the resulting transcriptions at the boundaries
The sequential long-form algorithm should be used in either of the following scenarios:
1. Transcription accuracy is the most important factor, and speed is less of a consideration
2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate
Conversely, the chunked algorithm should be used when:
1. Transcription speed is the most important factor
2. You are transcribing a **single** long audio file
By default, Transformers uses the sequential algorithm. To enable the chunked algorithm, pass the `chunk_length_s`
parameter to the `pipeline`. For large-v3, a chunk length of 30 seconds is optimal. To activate batching over long
audio files, pass the argument `batch_size`:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3-turbo"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
chunk_length_s=30,
batch_size=16, # batch size for inference - set based on your device
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
#### Torch compile
The Whisper forward pass is compatible with [`torch.compile`](https://pytorch.org/docs/stable/generated/torch.compile.html)
for 4.5x speed-ups.
**Note:** `torch.compile` is currently not compatible with the Chunked long-form algorithm or Flash Attention 2 ⚠️
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
from tqdm import tqdm
torch.set_float32_matmul_precision("high")
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3-turbo"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
).to(device)
# Enable static cache and compile the forward pass
model.generation_config.cache_implementation = "static"
model.generation_config.max_new_tokens = 256
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
# 2 warmup steps
for _ in tqdm(range(2), desc="Warm-up step"):
with sdpa_kernel(SDPBackend.MATH):
result = pipe(sample.copy(), generate_kwargs={"min_new_tokens": 256, "max_new_tokens": 256})
# fast run
with sdpa_kernel(SDPBackend.MATH):
result = pipe(sample.copy())
print(result["text"])
```
#### Flash Attention 2
We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU supports it and you are not using [torch.compile](#torch-compile).
To do so, first install [Flash Attention](https://github.com/Dao-AILab/flash-attention):
```
pip install flash-attn --no-build-isolation
```
Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:
```python
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, attn_implementation="flash_attention_2")
```
#### Torch Scaled Dot-Product Attention (SDPA)
If your GPU does not support Flash Attention, we recommend making use of PyTorch [scaled dot-product attention (SDPA)](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html).
This attention implementation is activated **by default** for PyTorch versions 2.1.1 or greater. To check
whether you have a compatible PyTorch version, run the following Python code snippet:
```python
from transformers.utils import is_torch_sdpa_available
print(is_torch_sdpa_available())
```
If the above returns `True`, you have a valid version of PyTorch installed and SDPA is activated by default. If it
returns `False`, you need to upgrade your PyTorch version according to the [official instructions](https://pytorch.org/get-started/locally/).
Once a valid PyTorch version is installed, SDPA is activated by default. It can also be set explicitly by specifying
`attn_implementation="sdpa"` as follows:
```python
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, attn_implementation="sdpa")
```
For more information about how to use SDPA, refer to the [Transformers SDPA documentation](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention).
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. There are two
flavours of Whisper model: English-only and multilingual. The English-only models were trained on the task of English
speech recognition. The multilingual models were trained simultaneously on multilingual speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech
translation, the model predicts transcriptions in a *different* language from the audio.
Whisper checkpoints come in six configurations of varying model sizes. The smallest four are available as English-only
and multilingual. The large and large-v3-turbo checkpoints are multilingual only. All twelve of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
| large-v3 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v3) |
| large-v3-turbo | 809 M | x | [✓](https://huggingface.co/openai/whisper-large-v3-turbo) |
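Any checkpoint identifier from this table can be substituted for the `model_id` used in the examples above. As a quick, purely illustrative sketch (the tiny English-only checkpoint and the audio path are placeholders, not recommendations):
```python
from transformers import pipeline

# Any checkpoint ID from the table above can be used here; the tiny English-only
# model simply trades accuracy for lower memory use and faster inference.
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
result = pipe("audio.mp3")
print(result["text"])
```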
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
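The sketch below condenses the main ingredients of that recipe, purely as an illustration: the dataset, column names, hyperparameters, and the simplified data collator are assumptions rather than the blog post's exact code.
```python
import torch
from datasets import Audio, load_dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

model_id = "openai/whisper-large-v3-turbo"
processor = WhisperProcessor.from_pretrained(model_id, language="hindi", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Gated dataset used in the blog post; any 16 kHz speech dataset with a text column works.
ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train[:1%]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    # Convert raw audio to log-mel input features and the transcript to label token IDs.
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

def collate(features):
    # Pad log-mel features and label token IDs separately (simplified collator).
    batch = processor.feature_extractor.pad(
        [{"input_features": f["input_features"]} for f in features], return_tensors="pt"
    )
    labels = processor.tokenizer.pad(
        [{"input_ids": f["labels"]} for f in features], return_tensors="pt"
    )
    batch["labels"] = labels["input_ids"].masked_fill(labels.attention_mask.ne(1), -100)
    return batch

args = Seq2SeqTrainingArguments(
    output_dir="./whisper-turbo-finetuned",  # all hyperparameters here are illustrative
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    max_steps=500,
    fp16=torch.cuda.is_available(),
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=ds,
    data_collator=collate,
    tokenizer=processor.feature_extractor,
)
trainer.train()
```
The full blog post additionally covers evaluation with word error rate and pushing the resulting checkpoint to the Hub.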
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent, or using these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; use of the model for classification is not only unevaluated but also inappropriate, particularly to infer human attributes.
## Training Data
No information provided.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise and technical language, as well as zero-shot translation from multiple languages into English, and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse in lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models' transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
mradermacher/alfred-40b-1023-i1-GGUF
|
mradermacher
| 2024-11-06T15:41:09Z | 106 | 0 |
transformers
|
[
"transformers",
"gguf",
"falcon-40b",
"long-context",
"falcon",
"NTK-YaRN",
"en",
"fr",
"de",
"es",
"it",
"dataset:OpenAssistant/oasst1",
"dataset:ehartford/dolphin",
"dataset:tau/sled",
"dataset:tiiuae/falcon-refinedweb",
"base_model:lightonai/alfred-40b-1023",
"base_model:quantized:lightonai/alfred-40b-1023",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-06T07:03:20Z |
---
base_model: lightonai/alfred-40b-1023
datasets:
- OpenAssistant/oasst1
- ehartford/dolphin
- tau/sled
- tiiuae/falcon-refinedweb
language:
- en
- fr
- de
- es
- it
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- falcon-40b
- long-context
- falcon
- NTK-YaRN
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lightonai/alfred-40b-1023
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/alfred-40b-1023-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
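As a rough illustration only, the quants below can be loaded with any GGUF runtime, for example [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The file name matches the Q4_K_M entry in the table that follows, while the context size and prompt are placeholder values and may not match the prompt format alfred-40b-1023 was tuned on:
```python
from llama_cpp import Llama

# Load the Q4_K_M quant from this repo (download the .gguf file first).
llm = Llama(model_path="alfred-40b-1023.i1-Q4_K_M.gguf", n_ctx=4096)

output = llm("User: Give me a one-sentence summary of the GGUF format.\nAlfred:", max_tokens=128)
print(output["choices"][0]["text"])
```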
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ1_S.gguf) | i1-IQ1_S | 9.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ1_M.gguf) | i1-IQ1_M | 10.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ2_XS.gguf) | i1-IQ2_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ2_S.gguf) | i1-IQ2_S | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ2_M.gguf) | i1-IQ2_M | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q2_K.gguf) | i1-Q2_K | 15.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 16.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ3_XS.gguf) | i1-IQ3_XS | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ3_S.gguf) | i1-IQ3_S | 18.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q3_K_S.gguf) | i1-Q3_K_S | 18.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ3_M.gguf) | i1-IQ3_M | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q3_K_M.gguf) | i1-Q3_K_M | 20.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q3_K_L.gguf) | i1-Q3_K_L | 21.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ4_XS.gguf) | i1-IQ4_XS | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q4_K_S.gguf) | i1-Q4_K_S | 23.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q4_0.gguf) | i1-Q4_0 | 24.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q4_K_M.gguf) | i1-Q4_K_M | 25.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q5_K_S.gguf) | i1-Q5_K_S | 29.1 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q5_K_M.gguf) | i1-Q5_K_M | 30.7 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q6_K.gguf) | i1-Q6_K | 34.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
neopolita/hammer2.0-7b-gguf
|
neopolita
| 2024-11-06T15:39:32Z | 13 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T15:03:06Z |
---
{}
---
# GGUF quants for [**MadeAgents/Hammer2.0-7b**](https://huggingface.co/MadeAgents/Hammer2.0-7b) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/MadeAgents/Hammer2.0-7b)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
## Quants
* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
|
mav23/SmolLM2-360M-Instruct-GGUF
|
mav23
| 2024-11-06T15:37:44Z | 66 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T15:32:30Z |
---
library_name: transformers
license: apache-2.0
language:
- en
---
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France."}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-360M-Instruct --device cpu
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
|:-------------------|:------------:|:------------:|:------------:|
| HellaSwag | **54.5** | 51.2 | 51.8 |
| ARC (Average) | **53.0** | 45.4 | 50.1 |
| PIQA | **71.7** | 69.9 | 71.6 |
| MMLU (cloze) | **35.8** | 33.7 | 34.4 |
| CommonsenseQA | **38.0** | 31.6 | 35.3 |
| TriviaQA | **16.9** | 4.3 | 9.1 |
| Winogrande | 52.5 | **54.1** | 52.8 |
| OpenBookQA | **37.4** | **37.4** | 37.2 |
| GSM8K (5-shot) | 3.2 | **33.4** | 1.6 |
## Instruction Model
| Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
|:-----------------------------|:---------------------:|:---------------------:|:---------------------:|
| IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 |
| MT-Bench | 3.66 | **4.16** | 3.37 |
| HellaSwag | **52.1** | 48.0 | 47.9 |
| ARC (Average) | **43.7** | 37.3 | 38.8 |
| PIQA | **70.8** | 67.2 | 69.4 |
| MMLU (cloze) | **32.8** | 31.7 | 30.6 |
| BBH (3-shot) | 27.3 | **30.7** | 24.4 |
| GSM8K (5-shot) | 7.43 | **26.8** | 1.36 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 4T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 64 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
```
|
mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF
|
mradermacher
| 2024-11-06T15:35:12Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"tr",
"en",
"base_model:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"base_model:quantized:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-06T06:04:52Z |
---
base_model: Trendyol/Trendyol-LLM-7b-chat-v0.1
language:
- tr
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
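As a sketch only, the quants below can also be driven through llama-cpp-python's chat-completion interface; the file name is the Q4_K_M entry from the table that follows, while the context size, token limit and prompt are illustrative (the chat format is taken from the GGUF metadata where available):
```python
from llama_cpp import Llama

# Load the Q4_K_M quant from this repo (download the .gguf file first).
llm = Llama(model_path="Trendyol-LLM-7b-chat-v0.1.i1-Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```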
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 5.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
QuantFactory/KONI-Llama3.1-8B-Instruct-20241024-GGUF
|
QuantFactory
| 2024-11-06T15:34:48Z | 105 | 3 |
transformers
|
[
"transformers",
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T14:40:16Z |
---
library_name: transformers
tags: []
---
[](https://hf.co/QuantFactory)
# QuantFactory/KONI-Llama3.1-8B-Instruct-20241024-GGUF
This is quantized version of [KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024](https://huggingface.co/KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024) created using llama.cpp
# Original Model Card
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
techiaith/whisper-large-v3-ft-cv-cy
|
techiaith
| 2024-11-06T15:32:39Z | 10 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"automatic-speech-recognition",
"cy",
"dataset:techiaith/commonvoice_18_0_cy",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2024-08-26T11:24:16Z |
---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
- whisper
datasets:
- techiaith/commonvoice_18_0_cy
metrics:
- wer
model-index:
- name: whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: DewiBrynJones/commonvoice_18_0_cy default
type: DewiBrynJones/commonvoice_18_0_cy
args: default
metrics:
- name: Wer
type: wer
value: 0.185
language:
- cy
pipeline_tag: automatic-speech-recognition
---
# whisper-large-v3-ft-cv-cy
This model is a version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) fine-tuned with the
`train_all` and `other_with_excluded` custom splits from [techiaith/commonvoice_18_0_cy](https://huggingface.co/datasets/techiaith/commonvoice_18_0_cy).
It achieves the following results on the standard test set of Common Voice release 18 for Welsh:
- WER: 18.50
- CER: 5.32
N.B. this model performs considerably worse on English language speech, but better on Welsh than a [bilingual model](https://huggingface.co/techiaith/whisper-large-v3-ft-cv-cy-en)
## Usage
```python
from transformers import pipeline
transcriber = pipeline("automatic-speech-recognition", model="techiaith/whisper-large-v3-ft-cv-cy")
result = transcriber(<path or url to soundfile>)
print(result)
```
`{'text': 'Mae hen wlad fy nhadau yn annwyl i mi.'}`
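WER/CER figures like the ones above can be approximated with a sketch along the following lines, using the `evaluate` library; the split slice and the `sentence` column name are assumptions about the dataset layout rather than verified details, and no text normalisation is applied:
```python
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="techiaith/whisper-large-v3-ft-cv-cy")

# A small slice of the test split keeps the example quick; drop the slice for a full run.
test = load_dataset("techiaith/commonvoice_18_0_cy", split="test[:100]")
test = test.cast_column("audio", Audio(sampling_rate=16_000))

predictions = [transcriber(sample)["text"] for sample in test["audio"]]
references = test["sentence"]

wer = evaluate.load("wer")
cer = evaluate.load("cer")
print("WER:", 100 * wer.compute(predictions=predictions, references=references))
print("CER:", 100 * cer.compute(predictions=predictions, references=references))
```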
|
mav23/SmolLM2-135M-Instruct-GGUF
|
mav23
| 2024-11-06T15:29:26Z | 27 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T15:26:33Z |
---
library_name: transformers
license: apache-2.0
language:
- en
---
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 135M model was trained on 2 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-135M-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is gravity?"}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-135M-Instruct --device cpu
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base pre-trained model
| Metrics | SmolLM2-135M-8k | SmolLM-135M |
|:-------------------|:----------------:|:------------:|
| HellaSwag | **42.1** | 41.2 |
| ARC (Average) | **43.9** | 42.4 |
| PIQA | 68.4 | 68.4 |
| MMLU (cloze) | **31.5** | 30.2 |
| CommonsenseQA | **33.9** | 32.7 |
| TriviaQA | 4.1 | **4.3** |
| Winogrande | 51.3 | 51.3 |
| OpenBookQA | **34.6** | 34.0 |
| GSM8K (5-shot) | **1.4** | 1.0 |
## Instruction model
| Metric | SmolLM2-135M-Instruct | SmolLM-135M-Instruct |
|:-----------------------------|:---------------------:|:--------------------:|
| IFEval (Average prompt/inst) | **29.9** | 17.2 |
| MT-Bench | **19.8** | 16.8 |
| HellaSwag | **40.9** | 38.9 |
| ARC (Average) | **37.3** | 33.9 |
| PIQA | **66.3** | 64.0 |
| MMLU (cloze) | **29.3** | 28.3 |
| BBH (3-shot) | **28.2** | 25.2 |
| GSM8K (5-shot) | 1.4 | 1.4 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 2T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 64 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
```
|
Tippawan/pr-corrected-v8
|
Tippawan
| 2024-11-06T15:26:40Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-11-06T15:26:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mav23/SmolLM2-1.7B-GGUF
|
mav23
| 2024-11-06T15:25:15Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T15:09:39Z |
---
library_name: transformers
license: apache-2.0
language:
- en
---
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Evaluation](#evaluation)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
```bash
pip install transformers
```
#### Running the model on CPU/GPU/multi GPU
* _Using full precision_
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 3422.76 MB
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metric | SmolLM2-1.7B | Llama-1B | Qwen2.5-1.5B | SmolLM1-1.7B |
|------------------|--------------|-------------|---------------|--------------|
| HellaSwag | **68.7** | 61.2 | 66.4 | 62.9 |
| ARC (Average) | **60.5** | 49.2 | 58.5 | 59.9 |
| PIQA | **77.6** | 74.8 | 76.1 | 76.0 |
| MMLU-Pro (MCF) | **19.4** | 11.7 | 13.7 | 10.8 |
| CommonsenseQA | **43.6** | 41.2 | 34.1 | 38.0 |
| TriviaQA | **36.7** | 28.1 | 20.9 | 22.5 |
| Winogrande | **59.4** | 57.8 | 59.3 | 54.7 |
| OpenBookQA | 42.2 | 38.4 | 40.0 | **42.4** |
| GSM8K (5-shot) | 31.0 | 7.2 | **61.3** | 5.5 |
## Instruction Model
| Metric | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct |
|:-----------------------------|:---------------------:|:-----------------:|:----------------------:|:----------------------:|
| IFEval (Average prompt/inst) | **56.7** | 53.5 | 47.4 | 23.1 |
| MT-Bench | 6.13 | 5.48 | **6.52** | 4.33 |
| OpenRewrite-Eval (micro_avg RougeL) | 44.9 | 39.2 | **46.9** | NaN |
| HellaSwag | **66.1** | 56.1 | 60.9 | 55.5 |
| ARC (Average) | **51.7** | 41.6 | 46.2 | 43.7 |
| PIQA | **74.4** | 72.3 | 73.2 | 71.6 |
| MMLU-Pro (MCF) | 19.3 | 12.7 | **24.2** | 11.7 |
| BBH (3-shot) | 32.2 | 27.6 | **35.3** | 25.7 |
| GSM8K (5-shot) | **48.2** | 26.8 | 42.8 | 4.62 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 11T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 256 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
```
|
MaziyarPanahi/llama3-8B-aifeifei-1.0-GGUF
|
MaziyarPanahi
| 2024-11-06T15:19:58Z | 34 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:aifeifei798/llama3-8B-aifeifei-1.0",
"base_model:quantized:aifeifei798/llama3-8B-aifeifei-1.0",
"region:us",
"conversational"
] |
text-generation
| 2024-11-06T14:57:15Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: llama3-8B-aifeifei-1.0-GGUF
base_model: aifeifei798/llama3-8B-aifeifei-1.0
inference: false
model_creator: aifeifei798
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/llama3-8B-aifeifei-1.0-GGUF](https://huggingface.co/MaziyarPanahi/llama3-8B-aifeifei-1.0-GGUF)
- Model creator: [aifeifei798](https://huggingface.co/aifeifei798)
- Original model: [aifeifei798/llama3-8B-aifeifei-1.0](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.0)
## Description
[MaziyarPanahi/llama3-8B-aifeifei-1.0-GGUF](https://huggingface.co/MaziyarPanahi/llama3-8B-aifeifei-1.0-GGUF) contains GGUF format model files for [aifeifei798/llama3-8B-aifeifei-1.0](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.0).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
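As an example of the `llama-cpp-python` route from the list above, loading one of the GGUF files in this repo looks roughly like the sketch below. The quant filename and generation settings are assumptions, not part of this card; pick any `.gguf` file you have downloaded from the repo.

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python); filename below is assumed.
from llama_cpp import Llama

llm = Llama(
    model_path="llama3-8B-aifeifei-1.0.Q4_K_M.gguf",  # any quant from this repo (name assumed)
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about autumn."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```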
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
nphuc/Llama-3.2-3B-Thich-Nhat-Hanh-v3.2
|
nphuc
| 2024-11-06T15:15:38Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T15:00:15Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** nphuc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ZyloO-AI/RawCharm-Amateur-Photography
|
ZyloO-AI
| 2024-11-06T14:53:47Z | 40 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-11-06T14:49:25Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
novalalthoff/wav2vec2-large-id-16hr-non-lp
|
novalalthoff
| 2024-11-06T14:51:21Z | 80 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-06T14:49:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-6.9b-deduped-int8-step2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-06T14:46:28Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-06T14:36:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/llama3-8B-aifeifei-1.1-GGUF
|
MaziyarPanahi
| 2024-11-06T14:35:34Z | 52 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:aifeifei798/llama3-8B-aifeifei-1.1",
"base_model:quantized:aifeifei798/llama3-8B-aifeifei-1.1",
"region:us",
"conversational"
] |
text-generation
| 2024-11-06T14:12:42Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: llama3-8B-aifeifei-1.1-GGUF
base_model: aifeifei798/llama3-8B-aifeifei-1.1
inference: false
model_creator: aifeifei798
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/llama3-8B-aifeifei-1.1-GGUF](https://huggingface.co/MaziyarPanahi/llama3-8B-aifeifei-1.1-GGUF)
- Model creator: [aifeifei798](https://huggingface.co/aifeifei798)
- Original model: [aifeifei798/llama3-8B-aifeifei-1.1](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.1)
## Description
[MaziyarPanahi/llama3-8B-aifeifei-1.1-GGUF](https://huggingface.co/MaziyarPanahi/llama3-8B-aifeifei-1.1-GGUF) contains GGUF format model files for [aifeifei798/llama3-8B-aifeifei-1.1](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.1).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
MayBashendy/ASAP_FineTuningBERT_Aug_k20_task1_organization_fold1
|
MayBashendy
| 2024-11-06T14:33:18Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T14:00:29Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k20_task1_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k20_task1_organization_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4063
- Qwk: 0.5441
- Mse: 0.4063
- Rmse: 0.6374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
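These settings map roughly onto the following 🤗 `TrainingArguments` sketch; the output directory is an assumption, and anything not listed above is left at its default.

```python
# Hedged sketch of the training setup listed above; model and dataset wiring is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_Aug_k20_task1_organization_fold1",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default optimizer.
)
```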
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0060 | 2 | 9.8538 | 0.0 | 9.8538 | 3.1391 |
| No log | 0.0120 | 4 | 7.5205 | 0.0213 | 7.5205 | 2.7424 |
| No log | 0.0181 | 6 | 6.8656 | 0.0018 | 6.8656 | 2.6202 |
| No log | 0.0241 | 8 | 6.0945 | 0.0 | 6.0945 | 2.4687 |
| No log | 0.0301 | 10 | 5.6492 | 0.0 | 5.6492 | 2.3768 |
| No log | 0.0361 | 12 | 4.8146 | 0.0 | 4.8146 | 2.1942 |
| No log | 0.0422 | 14 | 4.1150 | 0.0 | 4.1150 | 2.0285 |
| No log | 0.0482 | 16 | 3.3946 | 0.0297 | 3.3946 | 1.8424 |
| No log | 0.0542 | 18 | 2.7416 | 0.0118 | 2.7416 | 1.6558 |
| No log | 0.0602 | 20 | 2.1167 | 0.0079 | 2.1167 | 1.4549 |
| No log | 0.0663 | 22 | 1.6121 | 0.0 | 1.6121 | 1.2697 |
| No log | 0.0723 | 24 | 1.2816 | 0.1643 | 1.2816 | 1.1321 |
| No log | 0.0783 | 26 | 1.0878 | 0.1060 | 1.0878 | 1.0430 |
| No log | 0.0843 | 28 | 0.9893 | 0.0211 | 0.9893 | 0.9946 |
| No log | 0.0904 | 30 | 0.8874 | 0.0211 | 0.8874 | 0.9420 |
| No log | 0.0964 | 32 | 0.8744 | 0.0211 | 0.8744 | 0.9351 |
| No log | 0.1024 | 34 | 0.8581 | 0.0106 | 0.8581 | 0.9263 |
| No log | 0.1084 | 36 | 0.8897 | 0.0106 | 0.8897 | 0.9432 |
| No log | 0.1145 | 38 | 0.8408 | 0.0106 | 0.8408 | 0.9170 |
| No log | 0.1205 | 40 | 0.9843 | 0.1325 | 0.9843 | 0.9921 |
| No log | 0.1265 | 42 | 1.0066 | 0.0 | 1.0066 | 1.0033 |
| No log | 0.1325 | 44 | 0.9551 | 0.0 | 0.9551 | 0.9773 |
| No log | 0.1386 | 46 | 0.8306 | 0.0 | 0.8306 | 0.9114 |
| No log | 0.1446 | 48 | 0.7559 | 0.0106 | 0.7559 | 0.8694 |
| No log | 0.1506 | 50 | 0.7449 | 0.0106 | 0.7449 | 0.8630 |
| No log | 0.1566 | 52 | 0.7499 | 0.0106 | 0.7499 | 0.8660 |
| No log | 0.1627 | 54 | 0.8259 | 0.0106 | 0.8259 | 0.9088 |
| No log | 0.1687 | 56 | 0.8342 | 0.0106 | 0.8342 | 0.9134 |
| No log | 0.1747 | 58 | 0.8480 | 0.0106 | 0.8480 | 0.9209 |
| No log | 0.1807 | 60 | 0.8004 | 0.0106 | 0.8004 | 0.8947 |
| No log | 0.1867 | 62 | 0.8020 | 0.0106 | 0.8020 | 0.8955 |
| No log | 0.1928 | 64 | 0.7901 | 0.0132 | 0.7901 | 0.8889 |
| No log | 0.1988 | 66 | 0.7705 | 0.0106 | 0.7705 | 0.8778 |
| No log | 0.2048 | 68 | 0.7646 | 0.0477 | 0.7646 | 0.8744 |
| No log | 0.2108 | 70 | 0.7771 | 0.0106 | 0.7771 | 0.8816 |
| No log | 0.2169 | 72 | 0.8900 | 0.0 | 0.8900 | 0.9434 |
| No log | 0.2229 | 74 | 0.9312 | 0.0 | 0.9312 | 0.9650 |
| No log | 0.2289 | 76 | 0.7507 | 0.0211 | 0.7507 | 0.8664 |
| No log | 0.2349 | 78 | 0.8049 | 0.1726 | 0.8049 | 0.8972 |
| No log | 0.2410 | 80 | 0.7495 | 0.1392 | 0.7495 | 0.8657 |
| No log | 0.2470 | 82 | 0.6772 | 0.0570 | 0.6772 | 0.8229 |
| No log | 0.2530 | 84 | 0.6693 | 0.0803 | 0.6693 | 0.8181 |
| No log | 0.2590 | 86 | 0.6966 | 0.0999 | 0.6966 | 0.8346 |
| No log | 0.2651 | 88 | 0.7758 | 0.1809 | 0.7758 | 0.8808 |
| No log | 0.2711 | 90 | 0.6627 | 0.0620 | 0.6627 | 0.8140 |
| No log | 0.2771 | 92 | 0.6599 | 0.0941 | 0.6599 | 0.8123 |
| No log | 0.2831 | 94 | 0.6633 | 0.0669 | 0.6633 | 0.8144 |
| No log | 0.2892 | 96 | 0.6847 | 0.0604 | 0.6847 | 0.8275 |
| No log | 0.2952 | 98 | 0.8380 | 0.0052 | 0.8380 | 0.9154 |
| No log | 0.3012 | 100 | 0.9455 | 0.0 | 0.9455 | 0.9724 |
| No log | 0.3072 | 102 | 0.9850 | 0.0 | 0.9850 | 0.9925 |
| No log | 0.3133 | 104 | 0.9506 | 0.0 | 0.9506 | 0.9750 |
| No log | 0.3193 | 106 | 0.9171 | -0.0027 | 0.9171 | 0.9576 |
| No log | 0.3253 | 108 | 0.8876 | 0.0116 | 0.8876 | 0.9421 |
| No log | 0.3313 | 110 | 0.8296 | 0.0025 | 0.8296 | 0.9108 |
| No log | 0.3373 | 112 | 0.7703 | 0.0 | 0.7703 | 0.8777 |
| No log | 0.3434 | 114 | 0.7313 | 0.0211 | 0.7313 | 0.8552 |
| No log | 0.3494 | 116 | 0.7281 | 0.0276 | 0.7281 | 0.8533 |
| No log | 0.3554 | 118 | 0.7371 | 0.0211 | 0.7371 | 0.8585 |
| No log | 0.3614 | 120 | 0.8136 | 0.2752 | 0.8136 | 0.9020 |
| No log | 0.3675 | 122 | 0.8426 | 0.3032 | 0.8426 | 0.9179 |
| No log | 0.3735 | 124 | 0.8149 | 0.0 | 0.8149 | 0.9027 |
| No log | 0.3795 | 126 | 0.8598 | 0.0 | 0.8598 | 0.9273 |
| No log | 0.3855 | 128 | 0.8445 | 0.0 | 0.8445 | 0.9189 |
| No log | 0.3916 | 130 | 0.7930 | 0.0 | 0.7930 | 0.8905 |
| No log | 0.3976 | 132 | 0.7897 | 0.0742 | 0.7897 | 0.8887 |
| No log | 0.4036 | 134 | 0.7592 | 0.0779 | 0.7592 | 0.8713 |
| No log | 0.4096 | 136 | 0.7340 | 0.0250 | 0.7340 | 0.8567 |
| No log | 0.4157 | 138 | 0.7109 | 0.0631 | 0.7109 | 0.8432 |
| No log | 0.4217 | 140 | 0.7010 | 0.3134 | 0.7010 | 0.8372 |
| No log | 0.4277 | 142 | 0.7485 | 0.4716 | 0.7485 | 0.8652 |
| No log | 0.4337 | 144 | 0.7055 | 0.3984 | 0.7055 | 0.8399 |
| No log | 0.4398 | 146 | 0.6043 | 0.2866 | 0.6043 | 0.7774 |
| No log | 0.4458 | 148 | 0.6090 | 0.1808 | 0.6090 | 0.7804 |
| No log | 0.4518 | 150 | 0.6293 | 0.1585 | 0.6293 | 0.7933 |
| No log | 0.4578 | 152 | 0.6791 | 0.1350 | 0.6791 | 0.8241 |
| No log | 0.4639 | 154 | 0.7938 | 0.1901 | 0.7938 | 0.8910 |
| No log | 0.4699 | 156 | 0.7204 | 0.2131 | 0.7204 | 0.8487 |
| No log | 0.4759 | 158 | 0.6079 | 0.1893 | 0.6079 | 0.7797 |
| No log | 0.4819 | 160 | 0.6052 | 0.2001 | 0.6052 | 0.7780 |
| No log | 0.4880 | 162 | 0.6483 | 0.3653 | 0.6483 | 0.8052 |
| No log | 0.4940 | 164 | 0.6604 | 0.3441 | 0.6604 | 0.8127 |
| No log | 0.5 | 166 | 0.6017 | 0.2949 | 0.6017 | 0.7757 |
| No log | 0.5060 | 168 | 0.5814 | 0.4476 | 0.5814 | 0.7625 |
| No log | 0.5120 | 170 | 0.5913 | 0.5063 | 0.5913 | 0.7689 |
| No log | 0.5181 | 172 | 0.6019 | 0.5193 | 0.6019 | 0.7759 |
| No log | 0.5241 | 174 | 0.5463 | 0.4907 | 0.5463 | 0.7391 |
| No log | 0.5301 | 176 | 0.5484 | 0.4974 | 0.5484 | 0.7406 |
| No log | 0.5361 | 178 | 0.5551 | 0.4497 | 0.5551 | 0.7451 |
| No log | 0.5422 | 180 | 0.5884 | 0.4432 | 0.5884 | 0.7671 |
| No log | 0.5482 | 182 | 0.7529 | 0.4655 | 0.7529 | 0.8677 |
| No log | 0.5542 | 184 | 0.8098 | 0.4488 | 0.8098 | 0.8999 |
| No log | 0.5602 | 186 | 0.7082 | 0.4467 | 0.7082 | 0.8416 |
| No log | 0.5663 | 188 | 0.6004 | 0.3464 | 0.6004 | 0.7748 |
| No log | 0.5723 | 190 | 0.5929 | 0.2784 | 0.5929 | 0.7700 |
| No log | 0.5783 | 192 | 0.5549 | 0.4062 | 0.5549 | 0.7449 |
| No log | 0.5843 | 194 | 0.5563 | 0.5899 | 0.5563 | 0.7458 |
| No log | 0.5904 | 196 | 0.5644 | 0.5750 | 0.5644 | 0.7513 |
| No log | 0.5964 | 198 | 0.5137 | 0.5741 | 0.5137 | 0.7167 |
| No log | 0.6024 | 200 | 0.5335 | 0.5233 | 0.5335 | 0.7304 |
| No log | 0.6084 | 202 | 0.5050 | 0.5336 | 0.5050 | 0.7106 |
| No log | 0.6145 | 204 | 0.5118 | 0.6005 | 0.5118 | 0.7154 |
| No log | 0.6205 | 206 | 0.6248 | 0.5732 | 0.6248 | 0.7905 |
| No log | 0.6265 | 208 | 0.7111 | 0.5476 | 0.7111 | 0.8433 |
| No log | 0.6325 | 210 | 0.7781 | 0.5189 | 0.7781 | 0.8821 |
| No log | 0.6386 | 212 | 0.8434 | 0.4931 | 0.8434 | 0.9184 |
| No log | 0.6446 | 214 | 0.8126 | 0.5209 | 0.8126 | 0.9014 |
| No log | 0.6506 | 216 | 0.6383 | 0.5970 | 0.6383 | 0.7989 |
| No log | 0.6566 | 218 | 0.5315 | 0.6014 | 0.5315 | 0.7290 |
| No log | 0.6627 | 220 | 0.5624 | 0.4525 | 0.5624 | 0.7499 |
| No log | 0.6687 | 222 | 0.6333 | 0.3337 | 0.6333 | 0.7958 |
| No log | 0.6747 | 224 | 0.5795 | 0.3624 | 0.5795 | 0.7612 |
| No log | 0.6807 | 226 | 0.5073 | 0.5486 | 0.5073 | 0.7122 |
| No log | 0.6867 | 228 | 0.5600 | 0.6066 | 0.5600 | 0.7484 |
| No log | 0.6928 | 230 | 0.6215 | 0.5835 | 0.6215 | 0.7883 |
| No log | 0.6988 | 232 | 0.5686 | 0.5877 | 0.5686 | 0.7540 |
| No log | 0.7048 | 234 | 0.6028 | 0.5796 | 0.6028 | 0.7764 |
| No log | 0.7108 | 236 | 0.5793 | 0.5983 | 0.5793 | 0.7611 |
| No log | 0.7169 | 238 | 0.5310 | 0.6063 | 0.5310 | 0.7287 |
| No log | 0.7229 | 240 | 0.4967 | 0.5858 | 0.4967 | 0.7048 |
| No log | 0.7289 | 242 | 0.4868 | 0.5721 | 0.4868 | 0.6977 |
| No log | 0.7349 | 244 | 0.4722 | 0.5557 | 0.4722 | 0.6871 |
| No log | 0.7410 | 246 | 0.4724 | 0.5195 | 0.4724 | 0.6873 |
| No log | 0.7470 | 248 | 0.4676 | 0.5665 | 0.4676 | 0.6838 |
| No log | 0.7530 | 250 | 0.4837 | 0.5811 | 0.4837 | 0.6955 |
| No log | 0.7590 | 252 | 0.4968 | 0.5984 | 0.4968 | 0.7048 |
| No log | 0.7651 | 254 | 0.4773 | 0.6040 | 0.4773 | 0.6909 |
| No log | 0.7711 | 256 | 0.4643 | 0.5940 | 0.4643 | 0.6814 |
| No log | 0.7771 | 258 | 0.4890 | 0.6155 | 0.4890 | 0.6993 |
| No log | 0.7831 | 260 | 0.6079 | 0.5785 | 0.6079 | 0.7797 |
| No log | 0.7892 | 262 | 0.6645 | 0.5582 | 0.6645 | 0.8152 |
| No log | 0.7952 | 264 | 0.6657 | 0.5514 | 0.6657 | 0.8159 |
| No log | 0.8012 | 266 | 0.6675 | 0.5498 | 0.6675 | 0.8170 |
| No log | 0.8072 | 268 | 0.5966 | 0.5786 | 0.5966 | 0.7724 |
| No log | 0.8133 | 270 | 0.5300 | 0.6091 | 0.5300 | 0.7280 |
| No log | 0.8193 | 272 | 0.5421 | 0.6164 | 0.5421 | 0.7363 |
| No log | 0.8253 | 274 | 0.6810 | 0.5792 | 0.6810 | 0.8252 |
| No log | 0.8313 | 276 | 0.6917 | 0.5810 | 0.6917 | 0.8317 |
| No log | 0.8373 | 278 | 0.5877 | 0.6193 | 0.5877 | 0.7666 |
| No log | 0.8434 | 280 | 0.5283 | 0.6098 | 0.5283 | 0.7268 |
| No log | 0.8494 | 282 | 0.5171 | 0.5755 | 0.5171 | 0.7191 |
| No log | 0.8554 | 284 | 0.5388 | 0.5552 | 0.5388 | 0.7340 |
| No log | 0.8614 | 286 | 0.5772 | 0.5397 | 0.5772 | 0.7598 |
| No log | 0.8675 | 288 | 0.5090 | 0.5516 | 0.5090 | 0.7135 |
| No log | 0.8735 | 290 | 0.4698 | 0.4972 | 0.4698 | 0.6854 |
| No log | 0.8795 | 292 | 0.4553 | 0.5218 | 0.4553 | 0.6748 |
| No log | 0.8855 | 294 | 0.4614 | 0.5281 | 0.4614 | 0.6793 |
| No log | 0.8916 | 296 | 0.4784 | 0.5153 | 0.4784 | 0.6917 |
| No log | 0.8976 | 298 | 0.4488 | 0.5968 | 0.4488 | 0.6700 |
| No log | 0.9036 | 300 | 0.5075 | 0.6270 | 0.5075 | 0.7124 |
| No log | 0.9096 | 302 | 0.4963 | 0.6338 | 0.4963 | 0.7045 |
| No log | 0.9157 | 304 | 0.4564 | 0.5870 | 0.4564 | 0.6756 |
| No log | 0.9217 | 306 | 0.5087 | 0.5159 | 0.5087 | 0.7132 |
| No log | 0.9277 | 308 | 0.4529 | 0.5780 | 0.4529 | 0.6729 |
| No log | 0.9337 | 310 | 0.5123 | 0.6292 | 0.5123 | 0.7158 |
| No log | 0.9398 | 312 | 0.6004 | 0.6074 | 0.6004 | 0.7749 |
| No log | 0.9458 | 314 | 0.5623 | 0.6023 | 0.5623 | 0.7499 |
| No log | 0.9518 | 316 | 0.4899 | 0.6086 | 0.4899 | 0.6999 |
| No log | 0.9578 | 318 | 0.4493 | 0.5903 | 0.4493 | 0.6703 |
| No log | 0.9639 | 320 | 0.4409 | 0.5967 | 0.4409 | 0.6640 |
| No log | 0.9699 | 322 | 0.4509 | 0.6117 | 0.4509 | 0.6715 |
| No log | 0.9759 | 324 | 0.4351 | 0.5719 | 0.4351 | 0.6596 |
| No log | 0.9819 | 326 | 0.4458 | 0.6023 | 0.4458 | 0.6677 |
| No log | 0.9880 | 328 | 0.5342 | 0.6022 | 0.5342 | 0.7309 |
| No log | 0.9940 | 330 | 0.5829 | 0.5923 | 0.5829 | 0.7635 |
| No log | 1.0 | 332 | 0.5132 | 0.6003 | 0.5132 | 0.7164 |
| No log | 1.0060 | 334 | 0.5212 | 0.6055 | 0.5212 | 0.7219 |
| No log | 1.0120 | 336 | 0.5368 | 0.6044 | 0.5368 | 0.7327 |
| No log | 1.0181 | 338 | 0.5627 | 0.5900 | 0.5627 | 0.7501 |
| No log | 1.0241 | 340 | 0.5291 | 0.6080 | 0.5291 | 0.7274 |
| No log | 1.0301 | 342 | 0.4764 | 0.5559 | 0.4764 | 0.6902 |
| No log | 1.0361 | 344 | 0.4584 | 0.5561 | 0.4584 | 0.6770 |
| No log | 1.0422 | 346 | 0.4822 | 0.5822 | 0.4822 | 0.6944 |
| No log | 1.0482 | 348 | 0.4995 | 0.5724 | 0.4995 | 0.7068 |
| No log | 1.0542 | 350 | 0.5213 | 0.5983 | 0.5213 | 0.7220 |
| No log | 1.0602 | 352 | 0.4432 | 0.6025 | 0.4432 | 0.6657 |
| No log | 1.0663 | 354 | 0.4266 | 0.6011 | 0.4266 | 0.6531 |
| No log | 1.0723 | 356 | 0.4593 | 0.6256 | 0.4593 | 0.6777 |
| No log | 1.0783 | 358 | 0.4282 | 0.6277 | 0.4282 | 0.6544 |
| No log | 1.0843 | 360 | 0.4012 | 0.5671 | 0.4012 | 0.6334 |
| No log | 1.0904 | 362 | 0.4494 | 0.5048 | 0.4494 | 0.6704 |
| No log | 1.0964 | 364 | 0.4008 | 0.5778 | 0.4008 | 0.6331 |
| No log | 1.1024 | 366 | 0.4551 | 0.6453 | 0.4551 | 0.6746 |
| No log | 1.1084 | 368 | 0.4896 | 0.6424 | 0.4896 | 0.6997 |
| No log | 1.1145 | 370 | 0.4146 | 0.5919 | 0.4146 | 0.6439 |
| No log | 1.1205 | 372 | 0.4293 | 0.5326 | 0.4293 | 0.6552 |
| No log | 1.1265 | 374 | 0.4293 | 0.5315 | 0.4293 | 0.6552 |
| No log | 1.1325 | 376 | 0.4753 | 0.5957 | 0.4753 | 0.6894 |
| No log | 1.1386 | 378 | 0.6103 | 0.5992 | 0.6103 | 0.7812 |
| No log | 1.1446 | 380 | 0.6979 | 0.5847 | 0.6979 | 0.8354 |
| No log | 1.1506 | 382 | 0.5574 | 0.6284 | 0.5574 | 0.7466 |
| No log | 1.1566 | 384 | 0.4114 | 0.6228 | 0.4114 | 0.6414 |
| No log | 1.1627 | 386 | 0.4050 | 0.6184 | 0.4050 | 0.6364 |
| No log | 1.1687 | 388 | 0.4458 | 0.6544 | 0.4458 | 0.6677 |
| No log | 1.1747 | 390 | 0.4770 | 0.6515 | 0.4770 | 0.6906 |
| No log | 1.1807 | 392 | 0.4387 | 0.6407 | 0.4387 | 0.6623 |
| No log | 1.1867 | 394 | 0.4974 | 0.5471 | 0.4974 | 0.7053 |
| No log | 1.1928 | 396 | 0.5162 | 0.5377 | 0.5162 | 0.7185 |
| No log | 1.1988 | 398 | 0.4317 | 0.6337 | 0.4317 | 0.6570 |
| No log | 1.2048 | 400 | 0.5618 | 0.6426 | 0.5618 | 0.7496 |
| No log | 1.2108 | 402 | 0.7052 | 0.6094 | 0.7052 | 0.8398 |
| No log | 1.2169 | 404 | 0.6045 | 0.6085 | 0.6045 | 0.7775 |
| No log | 1.2229 | 406 | 0.5113 | 0.5431 | 0.5113 | 0.7151 |
| No log | 1.2289 | 408 | 0.5141 | 0.5341 | 0.5141 | 0.7170 |
| No log | 1.2349 | 410 | 0.5797 | 0.5890 | 0.5797 | 0.7614 |
| No log | 1.2410 | 412 | 0.5818 | 0.5859 | 0.5818 | 0.7627 |
| No log | 1.2470 | 414 | 0.5180 | 0.5409 | 0.5180 | 0.7197 |
| No log | 1.2530 | 416 | 0.5190 | 0.5214 | 0.5190 | 0.7205 |
| No log | 1.2590 | 418 | 0.5646 | 0.5339 | 0.5646 | 0.7514 |
| No log | 1.2651 | 420 | 0.5798 | 0.5696 | 0.5798 | 0.7615 |
| No log | 1.2711 | 422 | 0.5575 | 0.6004 | 0.5575 | 0.7467 |
| No log | 1.2771 | 424 | 0.4537 | 0.5824 | 0.4537 | 0.6735 |
| No log | 1.2831 | 426 | 0.4111 | 0.5705 | 0.4111 | 0.6412 |
| No log | 1.2892 | 428 | 0.4151 | 0.5388 | 0.4151 | 0.6443 |
| No log | 1.2952 | 430 | 0.3907 | 0.5737 | 0.3907 | 0.6251 |
| No log | 1.3012 | 432 | 0.3818 | 0.6003 | 0.3818 | 0.6179 |
| No log | 1.3072 | 434 | 0.3918 | 0.5599 | 0.3918 | 0.6259 |
| No log | 1.3133 | 436 | 0.4417 | 0.5025 | 0.4417 | 0.6646 |
| No log | 1.3193 | 438 | 0.4010 | 0.5344 | 0.4010 | 0.6332 |
| No log | 1.3253 | 440 | 0.3930 | 0.6166 | 0.3930 | 0.6269 |
| No log | 1.3313 | 442 | 0.4862 | 0.6574 | 0.4862 | 0.6973 |
| No log | 1.3373 | 444 | 0.6188 | 0.6505 | 0.6188 | 0.7867 |
| No log | 1.3434 | 446 | 0.5829 | 0.6513 | 0.5829 | 0.7635 |
| No log | 1.3494 | 448 | 0.4792 | 0.6095 | 0.4792 | 0.6923 |
| No log | 1.3554 | 450 | 0.4474 | 0.5256 | 0.4474 | 0.6689 |
| No log | 1.3614 | 452 | 0.4590 | 0.5178 | 0.4590 | 0.6775 |
| No log | 1.3675 | 454 | 0.4685 | 0.5701 | 0.4685 | 0.6845 |
| No log | 1.3735 | 456 | 0.4354 | 0.6031 | 0.4354 | 0.6598 |
| No log | 1.3795 | 458 | 0.4036 | 0.5533 | 0.4036 | 0.6353 |
| No log | 1.3855 | 460 | 0.4440 | 0.4963 | 0.4440 | 0.6664 |
| No log | 1.3916 | 462 | 0.4178 | 0.5149 | 0.4178 | 0.6463 |
| No log | 1.3976 | 464 | 0.4140 | 0.6389 | 0.4140 | 0.6434 |
| No log | 1.4036 | 466 | 0.5198 | 0.6676 | 0.5198 | 0.7210 |
| No log | 1.4096 | 468 | 0.4730 | 0.6435 | 0.4730 | 0.6878 |
| No log | 1.4157 | 470 | 0.4154 | 0.5595 | 0.4154 | 0.6445 |
| No log | 1.4217 | 472 | 0.4113 | 0.5696 | 0.4113 | 0.6414 |
| No log | 1.4277 | 474 | 0.4438 | 0.6509 | 0.4438 | 0.6662 |
| No log | 1.4337 | 476 | 0.5069 | 0.6826 | 0.5069 | 0.7120 |
| No log | 1.4398 | 478 | 0.4424 | 0.6829 | 0.4424 | 0.6651 |
| No log | 1.4458 | 480 | 0.3942 | 0.6120 | 0.3942 | 0.6279 |
| No log | 1.4518 | 482 | 0.3959 | 0.5517 | 0.3959 | 0.6292 |
| No log | 1.4578 | 484 | 0.4028 | 0.5742 | 0.4028 | 0.6347 |
| No log | 1.4639 | 486 | 0.4481 | 0.6284 | 0.4481 | 0.6694 |
| No log | 1.4699 | 488 | 0.4389 | 0.6197 | 0.4389 | 0.6625 |
| No log | 1.4759 | 490 | 0.4036 | 0.5371 | 0.4036 | 0.6353 |
| No log | 1.4819 | 492 | 0.3990 | 0.5366 | 0.3990 | 0.6317 |
| No log | 1.4880 | 494 | 0.3900 | 0.5701 | 0.3900 | 0.6245 |
| No log | 1.4940 | 496 | 0.3945 | 0.6265 | 0.3945 | 0.6281 |
| No log | 1.5 | 498 | 0.3904 | 0.6173 | 0.3904 | 0.6248 |
| 0.5402 | 1.5060 | 500 | 0.4074 | 0.6301 | 0.4074 | 0.6383 |
| 0.5402 | 1.5120 | 502 | 0.4201 | 0.5920 | 0.4201 | 0.6481 |
| 0.5402 | 1.5181 | 504 | 0.4574 | 0.5466 | 0.4574 | 0.6763 |
| 0.5402 | 1.5241 | 506 | 0.4131 | 0.5669 | 0.4131 | 0.6427 |
| 0.5402 | 1.5301 | 508 | 0.4058 | 0.6023 | 0.4058 | 0.6370 |
| 0.5402 | 1.5361 | 510 | 0.4063 | 0.5441 | 0.4063 | 0.6374 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
aigchacker/Text-Poster
|
aigchacker
| 2024-11-06T14:26:55Z | 42 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"image-generation",
"flux",
"safetensors",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-06T13:59:31Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- image-generation
- flux
- safetensors
widget:
- text: Text poster, a couple
output:
url: images/6dd1a918d89991ad5e40513ab88e7d892077f89dac93edcf4b660dd2.jpg
- text: Text poster, a woman sitting in a cafe
output:
url: images/d2586464001008a80b5e45104e0f23290a35db048cab2e4fc4bfa356.jpg
- text: Text poster, eiffel tower
output:
url: images/f25e24ecfbd0aa96fb6f55ab29288ba4d1fffe79fd95679d9d2f1329.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: text poster
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# FLUX.1-dev-LoRA-Text-Poster
This is a LoRA (Text Poster) trained on FLUX.1-dev for artistic text posters by [cooooool](https://www.shakker.ai/userpage/c4d790d27e6b4de69f3f3508daf8f4c5/publish). If you are also interested in sharing your models on our platform, you are welcome to join our [Discord Community](https://discord.gg/5TuxSjJya6).
<div class="container">
<img src="./poster.jpeg" width="1024"/>
</div>
## Showcases
<Gallery />
## Trigger words
You should use `text poster` to trigger the image generation. The recommended scale is `0.8` to `1.0` in diffusers.
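As a rough illustration of that setting, the LoRA can be loaded into the FLUX.1-dev pipeline along the following lines. This is a sketch, not the author's reference code: `load_lora_weights` usually auto-detects the weight file, but you may need to pass `weight_name=...` explicitly, and the prompt and sampling values are only examples.

```python
# Hedged diffusers sketch for using this LoRA with FLUX.1-dev.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the LoRA from this repo and fuse it at a scale in the recommended 0.8-1.0 range.
pipe.load_lora_weights("aigchacker/Text-Poster")  # may require weight_name="<file>.safetensors"
pipe.fuse_lora(lora_scale=0.9)

image = pipe(
    "text poster, a woman sitting in a cafe",  # note the `text poster` trigger words
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("text_poster.png")
```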
## Online Inference
You can also download this model at [Shakker AI](https://www.shakker.ai/modelinfo/579ab130b53246fea49811bf80d38486/FLUX-text-poster?from=search), where we provide an online interface to generate images.
## Acknowledgements
This model was trained by our community creator [cooooool](https://www.shakker.ai/userpage/c4d790d27e6b4de69f3f3508daf8f4c5/publish), who retains the copyright, and is released with their permission. The model follows the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
mradermacher/tamil-llama-13b-base-v0.1-GGUF
|
mradermacher
| 2024-11-06T14:18:54Z | 31 | 0 |
transformers
|
[
"transformers",
"gguf",
"ta",
"en",
"base_model:abhinand/tamil-llama-13b-base-v0.1",
"base_model:quantized:abhinand/tamil-llama-13b-base-v0.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T12:45:03Z |
---
base_model: abhinand/tamil-llama-13b-base-v0.1
language:
- ta
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
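As a minimal starting point, one of the quants from the table below can be fetched and run with `llama-cpp-python`; the filename here matches the Q4_K_M row, and the prompt and settings are only illustrative.

```python
# Hedged sketch: download a quant from this repo and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/tamil-llama-13b-base-v0.1-GGUF",
    filename="tamil-llama-13b-base-v0.1.Q4_K_M.gguf",
)

# This is a base (non-chat) model, so plain text completion is used here.
llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("The capital of Tamil Nadu is", max_tokens=32)["choices"][0]["text"])
```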
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q6_K.gguf) | Q6_K | 10.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GateNLP/covid-vaccine-twitter-bert
|
GateNLP
| 2024-11-06T14:18:18Z | 117 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-13T19:02:52Z |
VaxxHesitancy: A Dataset for Studying Hesitancy Towards COVID-19 Vaccination on Twitter
Yida Mu, Mali Jin, Charlie Grimshaw, Carolina Scarton, Kalina Bontcheva, Xingyi Song
Accepted @ICWSM 2023
```bibtex
@inproceedings{mu2023vaxxhesitancy,
title={VaxxHesitancy: A Dataset for Studying Hesitancy Towards COVID-19 Vaccination on Twitter},
author={Mu, Yida and Jin, Mali and Grimshaw, Charlie and Scarton, Carolina and Bontcheva, Kalina and Song, Xingyi},
booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
volume={17},
pages={1052--1062},
year={2023}
}
```
---
license: mit
---
|
lesubra/ECE-PRYMMAL-3B-SLERP_2-V2
|
lesubra
| 2024-11-06T14:17:40Z | 10 | 0 | null |
[
"safetensors",
"phi3",
"merge",
"mergekit",
"lazymergekit",
"jpacifico/Chocolatine-3B-Instruct-DPO-Revised",
"microsoft/Phi-3-mini-4k-instruct",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2024-11-06T14:15:13Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- jpacifico/Chocolatine-3B-Instruct-DPO-Revised
- microsoft/Phi-3-mini-4k-instruct
---
# ECE-PRYMMAL-3B-SLERP_2-V2
ECE-PRYMMAL-3B-SLERP_2-V2 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [jpacifico/Chocolatine-3B-Instruct-DPO-Revised](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-Revised)
* [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised
layer_range: [0, 32]
- model: microsoft/Phi-3-mini-4k-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
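## 💻 Usage

A minimal usage sketch for the merged model; the prompt and generation settings are illustrative, and `trust_remote_code=True` is included because the repo carries the `custom_code` tag.

```python
# Hedged example of running the merged model with 🤗 transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lesubra/ECE-PRYMMAL-3B-SLERP_2-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Explain SLERP merging in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```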
|
QuantFactory/Qwen2.5-Coder-7B-Instruct-GGUF
|
QuantFactory
| 2024-11-06T14:17:24Z | 386 | 5 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-09-19T06:10:06Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
[](https://hf.co/QuantFactory)
# QuantFactory/Qwen2.5-Coder-7B-Instruct-GGUF
This is a quantized version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct), based on updated weights _(dated 06 Nov, 2024)_ and created using llama.cpp
# Original Model Card
# Qwen2.5-Coder-7B-Instruct
## Introduction
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release base and instruction-tuned language models in three sizes: 1.5, 7 and 32 (coming soon) billion parameters. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the instruction-tuned 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet uses `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
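If you prefer not to edit `config.json` by hand, roughly the same override can be applied at load time; treat the following as a sketch rather than an officially documented recipe.

```python
# Sketch: apply the YaRN rope_scaling override programmatically instead of editing config.json.
from transformers import AutoConfig, AutoModelForCausalLM

model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}
model = AutoModelForCausalLM.from_pretrained(
    model_name, config=config, torch_dtype="auto", device_map="auto"
)
```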
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
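As a rough starting point for offline inference with vLLM, something like the sketch below can be used; the sampling values are illustrative only, and the `rope_scaling` override discussed above would still need to be applied to the model's `config.json` for long contexts.

```python
# Hedged sketch of offline inference with vLLM; sampling values are illustrative.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "write a quick sort algorithm."}],
    tokenize=False,
    add_generation_prompt=True,
)

llm = LLM(model=model_name, max_model_len=32768)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=512))
print(outputs[0].outputs[0].text)
```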
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite it.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs
|
AlekseyKorshuk
| 2024-11-06T14:15:56Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T10:34:06Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
datasets: AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft
library_name: transformers
model_name: ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft](https://huggingface.co/datasets/AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aleksey-korshuk/huggingface/runs/bfyzbjtg)
This model was trained with SFT.
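For reference, a minimal SFT sketch with TRL is shown below. Only the base model and dataset names come from this card; the remaining settings (epoch count inferred from the model name, default sequence handling) are assumptions rather than the exact training recipe.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Base model and dataset are taken from this card; other settings are illustrative assumptions.
dataset = load_dataset(
    "AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft", split="train"
)

training_args = SFTConfig(
    output_dir="ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs",
    num_train_epochs=3,  # inferred from the "3epochs" suffix in the model name
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```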
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.4.1+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mateiaassAI/MT5Base-8e5-4
|
mateiaassAI
| 2024-11-06T14:10:32Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-06T14:08:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kajamo/alpha0ne
|
kajamo
| 2024-11-06T14:06:15Z | 114 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-06T12:34:27Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: alpha0ne
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alpha0ne
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
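These values map directly onto `transformers.TrainingArguments`. The sketch below shows only that mapping (the base checkpoint and data pipeline are not specified in this card, so they are omitted); note that the total train batch size of 1024 is simply 256 × 4 gradient-accumulation steps.

```python
from transformers import TrainingArguments

# Mapping of the reported hyperparameters only; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="alpha0ne",
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    gradient_accumulation_steps=4,  # 256 * 4 = 1024 effective train batch size
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    num_train_epochs=30,
)
```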
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
lesubra/ECE-PRYMMAL-3B-SLERP_2-V1
|
lesubra
| 2024-11-06T14:03:06Z | 9 | 0 | null |
[
"safetensors",
"phi3",
"merge",
"mergekit",
"lazymergekit",
"jpacifico/Chocolatine-3B-Instruct-DPO-Revised",
"microsoft/Phi-3-mini-4k-instruct",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2024-11-06T14:00:50Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- jpacifico/Chocolatine-3B-Instruct-DPO-Revised
- microsoft/Phi-3-mini-4k-instruct
---
# ECE-PRYMMAL-3B-SLERP_2-V1
ECE-PRYMMAL-3B-SLERP_2-V1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [jpacifico/Chocolatine-3B-Instruct-DPO-Revised](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-Revised)
* [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised
layer_range: [0, 32]
- model: microsoft/Phi-3-mini-4k-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
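A hedged usage sketch with 🤗 Transformers follows (it is not part of the original card). `trust_remote_code=True` is assumed because the merge is tagged `custom_code`, and `device_map="auto"` requires `accelerate` to be installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lesubra/ECE-PRYMMAL-3B-SLERP_2-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

# Phi-3-style chat formatting; the chat template ships with the tokenizer.
messages = [{"role": "user", "content": "Explain what a SLERP merge does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```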
|
MayBashendy/ASAP_FineTuningBERT_Aug_k20_task1_organization_fold0
|
MayBashendy
| 2024-11-06T13:58:49Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T11:41:28Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k20_task1_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k20_task1_organization_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5352
- Qwk: 0.5466
- Mse: 0.5352
- Rmse: 0.7316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
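A hedged sketch of how these settings map onto the 🤗 `Trainer` API is given below. The single-output regression head (`num_labels=1`) is an assumption inferred from the MSE/RMSE/QWK metrics reported above, and the dataset preparation is omitted because it is not documented in this card.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# num_labels=1 (regression) is an assumption inferred from the MSE/RMSE/QWK metrics.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_Aug_k20_task1_organization_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

# Dataset loading/tokenization is not documented here, so it is left out:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```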
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.0063 | 2 | 12.0179 | 0.0 | 12.0179 | 3.4667 |
| No log | 0.0126 | 4 | 11.0822 | 0.0 | 11.0822 | 3.3290 |
| No log | 0.0189 | 6 | 10.5444 | 0.0 | 10.5444 | 3.2472 |
| No log | 0.0252 | 8 | 9.8057 | 0.0 | 9.8057 | 3.1314 |
| No log | 0.0314 | 10 | 8.6296 | 0.0026 | 8.6296 | 2.9376 |
| No log | 0.0377 | 12 | 7.0373 | 0.0054 | 7.0373 | 2.6528 |
| No log | 0.0440 | 14 | 5.3678 | 0.0 | 5.3678 | 2.3168 |
| No log | 0.0503 | 16 | 4.1881 | 0.0125 | 4.1881 | 2.0465 |
| No log | 0.0566 | 18 | 3.2466 | 0.0314 | 3.2466 | 1.8018 |
| No log | 0.0629 | 20 | 2.5151 | 0.0201 | 2.5151 | 1.5859 |
| No log | 0.0692 | 22 | 1.6556 | 0.0115 | 1.6556 | 1.2867 |
| No log | 0.0755 | 24 | 1.1844 | 0.1172 | 1.1844 | 1.0883 |
| No log | 0.0818 | 26 | 0.9363 | 0.0866 | 0.9363 | 0.9676 |
| No log | 0.0881 | 28 | 0.7604 | 0.0484 | 0.7604 | 0.8720 |
| No log | 0.0943 | 30 | 0.7517 | 0.0316 | 0.7517 | 0.8670 |
| No log | 0.1006 | 32 | 0.7482 | 0.0674 | 0.7482 | 0.8650 |
| No log | 0.1069 | 34 | 0.7812 | 0.0382 | 0.7812 | 0.8838 |
| No log | 0.1132 | 36 | 0.8630 | 0.0382 | 0.8630 | 0.9290 |
| No log | 0.1195 | 38 | 0.7789 | 0.0316 | 0.7789 | 0.8825 |
| No log | 0.1258 | 40 | 0.8760 | 0.0316 | 0.8760 | 0.9359 |
| No log | 0.1321 | 42 | 0.9209 | 0.0212 | 0.9209 | 0.9596 |
| No log | 0.1384 | 44 | 0.8864 | 0.0212 | 0.8864 | 0.9415 |
| No log | 0.1447 | 46 | 0.8158 | 0.0126 | 0.8158 | 0.9032 |
| No log | 0.1509 | 48 | 0.8029 | 0.0316 | 0.8029 | 0.8961 |
| No log | 0.1572 | 50 | 0.8344 | 0.0316 | 0.8344 | 0.9134 |
| No log | 0.1635 | 52 | 0.8317 | 0.0316 | 0.8317 | 0.9120 |
| No log | 0.1698 | 54 | 0.7914 | 0.0316 | 0.7914 | 0.8896 |
| No log | 0.1761 | 56 | 0.7742 | 0.0316 | 0.7742 | 0.8799 |
| No log | 0.1824 | 58 | 0.7615 | 0.0316 | 0.7615 | 0.8726 |
| No log | 0.1887 | 60 | 0.7750 | 0.0316 | 0.7750 | 0.8803 |
| No log | 0.1950 | 62 | 0.7897 | 0.0316 | 0.7897 | 0.8887 |
| No log | 0.2013 | 64 | 0.7865 | 0.0316 | 0.7865 | 0.8869 |
| No log | 0.2075 | 66 | 0.7933 | 0.0316 | 0.7933 | 0.8907 |
| No log | 0.2138 | 68 | 0.8167 | 0.0316 | 0.8167 | 0.9037 |
| No log | 0.2201 | 70 | 0.8019 | 0.0316 | 0.8019 | 0.8955 |
| No log | 0.2264 | 72 | 0.7407 | 0.0316 | 0.7407 | 0.8606 |
| No log | 0.2327 | 74 | 0.7123 | 0.0316 | 0.7123 | 0.8440 |
| No log | 0.2390 | 76 | 0.6954 | 0.0316 | 0.6954 | 0.8339 |
| No log | 0.2453 | 78 | 0.7127 | 0.0419 | 0.7127 | 0.8442 |
| No log | 0.2516 | 80 | 0.6788 | 0.0419 | 0.6788 | 0.8239 |
| No log | 0.2579 | 82 | 0.6872 | 0.3131 | 0.6872 | 0.8290 |
| No log | 0.2642 | 84 | 0.7295 | 0.4173 | 0.7295 | 0.8541 |
| No log | 0.2704 | 86 | 0.7259 | 0.1405 | 0.7259 | 0.8520 |
| No log | 0.2767 | 88 | 0.7534 | 0.0316 | 0.7534 | 0.8680 |
| No log | 0.2830 | 90 | 0.9137 | 0.0316 | 0.9137 | 0.9559 |
| No log | 0.2893 | 92 | 0.8953 | 0.0419 | 0.8953 | 0.9462 |
| No log | 0.2956 | 94 | 0.7677 | 0.0419 | 0.7677 | 0.8762 |
| No log | 0.3019 | 96 | 0.6872 | 0.0419 | 0.6872 | 0.8290 |
| No log | 0.3082 | 98 | 0.6619 | 0.0493 | 0.6619 | 0.8135 |
| No log | 0.3145 | 100 | 0.6529 | 0.0897 | 0.6529 | 0.8080 |
| No log | 0.3208 | 102 | 0.6887 | 0.0447 | 0.6887 | 0.8299 |
| No log | 0.3270 | 104 | 0.8399 | 0.0921 | 0.8399 | 0.9164 |
| No log | 0.3333 | 106 | 0.8642 | 0.0873 | 0.8642 | 0.9296 |
| No log | 0.3396 | 108 | 0.7132 | 0.0484 | 0.7132 | 0.8445 |
| No log | 0.3459 | 110 | 0.6894 | 0.0419 | 0.6894 | 0.8303 |
| No log | 0.3522 | 112 | 0.6976 | 0.0484 | 0.6976 | 0.8352 |
| No log | 0.3585 | 114 | 0.6968 | 0.0419 | 0.6968 | 0.8348 |
| No log | 0.3648 | 116 | 0.6796 | 0.0548 | 0.6796 | 0.8244 |
| No log | 0.3711 | 118 | 0.6692 | 0.0548 | 0.6692 | 0.8180 |
| No log | 0.3774 | 120 | 0.6608 | 0.0548 | 0.6608 | 0.8129 |
| No log | 0.3836 | 122 | 0.6634 | 0.0649 | 0.6634 | 0.8145 |
| No log | 0.3899 | 124 | 0.6622 | 0.0603 | 0.6622 | 0.8137 |
| No log | 0.3962 | 126 | 0.6628 | 0.0750 | 0.6628 | 0.8142 |
| No log | 0.4025 | 128 | 0.6598 | 0.1833 | 0.6598 | 0.8123 |
| No log | 0.4088 | 130 | 0.6672 | 0.2258 | 0.6672 | 0.8168 |
| No log | 0.4151 | 132 | 0.6639 | 0.0918 | 0.6639 | 0.8148 |
| No log | 0.4214 | 134 | 0.6644 | 0.0603 | 0.6644 | 0.8151 |
| No log | 0.4277 | 136 | 0.6554 | 0.0746 | 0.6554 | 0.8096 |
| No log | 0.4340 | 138 | 0.6264 | 0.1257 | 0.6264 | 0.7914 |
| No log | 0.4403 | 140 | 0.6451 | 0.4728 | 0.6451 | 0.8032 |
| No log | 0.4465 | 142 | 0.6685 | 0.4752 | 0.6685 | 0.8176 |
| No log | 0.4528 | 144 | 0.6049 | 0.4478 | 0.6049 | 0.7777 |
| No log | 0.4591 | 146 | 0.6038 | 0.1782 | 0.6038 | 0.7771 |
| No log | 0.4654 | 148 | 0.5846 | 0.2371 | 0.5846 | 0.7646 |
| No log | 0.4717 | 150 | 0.5859 | 0.4477 | 0.5859 | 0.7655 |
| No log | 0.4780 | 152 | 0.7173 | 0.3828 | 0.7173 | 0.8469 |
| No log | 0.4843 | 154 | 0.7084 | 0.3882 | 0.7084 | 0.8417 |
| No log | 0.4906 | 156 | 0.6240 | 0.4139 | 0.6240 | 0.7899 |
| No log | 0.4969 | 158 | 0.6043 | 0.1238 | 0.6043 | 0.7774 |
| No log | 0.5031 | 160 | 0.6702 | 0.1123 | 0.6702 | 0.8187 |
| No log | 0.5094 | 162 | 0.6411 | 0.1102 | 0.6411 | 0.8007 |
| No log | 0.5157 | 164 | 0.5683 | 0.3157 | 0.5683 | 0.7539 |
| No log | 0.5220 | 166 | 0.5955 | 0.4532 | 0.5955 | 0.7717 |
| No log | 0.5283 | 168 | 0.6909 | 0.3869 | 0.6909 | 0.8312 |
| No log | 0.5346 | 170 | 0.9200 | 0.0801 | 0.9200 | 0.9592 |
| No log | 0.5409 | 172 | 0.7579 | 0.2155 | 0.7579 | 0.8706 |
| No log | 0.5472 | 174 | 0.6391 | 0.3533 | 0.6391 | 0.7994 |
| No log | 0.5535 | 176 | 0.6119 | 0.3729 | 0.6119 | 0.7822 |
| No log | 0.5597 | 178 | 0.6156 | 0.3864 | 0.6156 | 0.7846 |
| No log | 0.5660 | 180 | 0.5831 | 0.4169 | 0.5831 | 0.7636 |
| No log | 0.5723 | 182 | 0.6125 | 0.4773 | 0.6125 | 0.7826 |
| No log | 0.5786 | 184 | 0.5654 | 0.5126 | 0.5654 | 0.7519 |
| No log | 0.5849 | 186 | 0.5045 | 0.4536 | 0.5045 | 0.7103 |
| No log | 0.5912 | 188 | 0.4852 | 0.4839 | 0.4852 | 0.6965 |
| No log | 0.5975 | 190 | 0.4864 | 0.5193 | 0.4864 | 0.6974 |
| No log | 0.6038 | 192 | 0.5333 | 0.5381 | 0.5333 | 0.7303 |
| No log | 0.6101 | 194 | 0.5818 | 0.4820 | 0.5818 | 0.7627 |
| No log | 0.6164 | 196 | 0.5457 | 0.4547 | 0.5457 | 0.7387 |
| No log | 0.6226 | 198 | 0.5304 | 0.4858 | 0.5304 | 0.7283 |
| No log | 0.6289 | 200 | 0.5180 | 0.5206 | 0.5180 | 0.7197 |
| No log | 0.6352 | 202 | 0.4908 | 0.5086 | 0.4908 | 0.7006 |
| No log | 0.6415 | 204 | 0.4715 | 0.5198 | 0.4715 | 0.6867 |
| No log | 0.6478 | 206 | 0.4842 | 0.5030 | 0.4842 | 0.6959 |
| No log | 0.6541 | 208 | 0.5114 | 0.5502 | 0.5114 | 0.7151 |
| No log | 0.6604 | 210 | 0.5467 | 0.5329 | 0.5467 | 0.7394 |
| No log | 0.6667 | 212 | 0.5509 | 0.5181 | 0.5509 | 0.7422 |
| No log | 0.6730 | 214 | 0.5190 | 0.3750 | 0.5190 | 0.7204 |
| No log | 0.6792 | 216 | 0.5155 | 0.3492 | 0.5155 | 0.7180 |
| No log | 0.6855 | 218 | 0.4663 | 0.5006 | 0.4663 | 0.6828 |
| No log | 0.6918 | 220 | 0.4621 | 0.5516 | 0.4621 | 0.6798 |
| No log | 0.6981 | 222 | 0.4470 | 0.5623 | 0.4470 | 0.6686 |
| No log | 0.7044 | 224 | 0.4992 | 0.4535 | 0.4992 | 0.7065 |
| No log | 0.7107 | 226 | 0.4630 | 0.5280 | 0.4630 | 0.6805 |
| No log | 0.7170 | 228 | 0.4813 | 0.5276 | 0.4813 | 0.6937 |
| No log | 0.7233 | 230 | 0.4967 | 0.4917 | 0.4967 | 0.7048 |
| No log | 0.7296 | 232 | 0.5387 | 0.4653 | 0.5387 | 0.7340 |
| No log | 0.7358 | 234 | 0.5568 | 0.4123 | 0.5568 | 0.7462 |
| No log | 0.7421 | 236 | 0.5762 | 0.3926 | 0.5762 | 0.7591 |
| No log | 0.7484 | 238 | 0.5757 | 0.3604 | 0.5757 | 0.7587 |
| No log | 0.7547 | 240 | 0.5917 | 0.1988 | 0.5917 | 0.7692 |
| No log | 0.7610 | 242 | 0.6619 | 0.1229 | 0.6619 | 0.8135 |
| No log | 0.7673 | 244 | 0.6446 | 0.1413 | 0.6446 | 0.8029 |
| No log | 0.7736 | 246 | 0.5670 | 0.2121 | 0.5670 | 0.7530 |
| No log | 0.7799 | 248 | 0.5287 | 0.4550 | 0.5287 | 0.7271 |
| No log | 0.7862 | 250 | 0.5835 | 0.5201 | 0.5835 | 0.7639 |
| No log | 0.7925 | 252 | 0.5566 | 0.5366 | 0.5566 | 0.7460 |
| No log | 0.7987 | 254 | 0.4833 | 0.5475 | 0.4833 | 0.6952 |
| No log | 0.8050 | 256 | 0.4625 | 0.4781 | 0.4625 | 0.6801 |
| No log | 0.8113 | 258 | 0.4996 | 0.3595 | 0.4996 | 0.7068 |
| No log | 0.8176 | 260 | 0.4938 | 0.3752 | 0.4938 | 0.7027 |
| No log | 0.8239 | 262 | 0.4648 | 0.4688 | 0.4648 | 0.6818 |
| No log | 0.8302 | 264 | 0.5235 | 0.4649 | 0.5235 | 0.7236 |
| No log | 0.8365 | 266 | 0.5261 | 0.4878 | 0.5261 | 0.7253 |
| No log | 0.8428 | 268 | 0.5084 | 0.4377 | 0.5084 | 0.7130 |
| No log | 0.8491 | 270 | 0.4691 | 0.5609 | 0.4691 | 0.6849 |
| No log | 0.8553 | 272 | 0.4405 | 0.5774 | 0.4405 | 0.6637 |
| No log | 0.8616 | 274 | 0.4312 | 0.5560 | 0.4312 | 0.6567 |
| No log | 0.8679 | 276 | 0.4357 | 0.4909 | 0.4357 | 0.6601 |
| No log | 0.8742 | 278 | 0.4706 | 0.4431 | 0.4706 | 0.6860 |
| No log | 0.8805 | 280 | 0.4540 | 0.4803 | 0.4540 | 0.6738 |
| No log | 0.8868 | 282 | 0.4660 | 0.5422 | 0.4660 | 0.6827 |
| No log | 0.8931 | 284 | 0.4745 | 0.5291 | 0.4745 | 0.6888 |
| No log | 0.8994 | 286 | 0.5117 | 0.5465 | 0.5117 | 0.7154 |
| No log | 0.9057 | 288 | 0.6159 | 0.4993 | 0.6159 | 0.7848 |
| No log | 0.9119 | 290 | 0.5713 | 0.5362 | 0.5713 | 0.7558 |
| No log | 0.9182 | 292 | 0.5096 | 0.5233 | 0.5096 | 0.7138 |
| No log | 0.9245 | 294 | 0.5283 | 0.5564 | 0.5283 | 0.7269 |
| No log | 0.9308 | 296 | 0.5306 | 0.5697 | 0.5306 | 0.7284 |
| No log | 0.9371 | 298 | 0.6002 | 0.5214 | 0.6002 | 0.7747 |
| No log | 0.9434 | 300 | 0.5728 | 0.5460 | 0.5728 | 0.7568 |
| No log | 0.9497 | 302 | 0.4807 | 0.5059 | 0.4807 | 0.6933 |
| No log | 0.9560 | 304 | 0.5446 | 0.3641 | 0.5446 | 0.7380 |
| No log | 0.9623 | 306 | 0.5320 | 0.3856 | 0.5320 | 0.7294 |
| No log | 0.9686 | 308 | 0.4741 | 0.5118 | 0.4741 | 0.6886 |
| No log | 0.9748 | 310 | 0.4723 | 0.5244 | 0.4723 | 0.6872 |
| No log | 0.9811 | 312 | 0.4768 | 0.4583 | 0.4768 | 0.6905 |
| No log | 0.9874 | 314 | 0.5082 | 0.4248 | 0.5082 | 0.7129 |
| No log | 0.9937 | 316 | 0.4580 | 0.4888 | 0.4580 | 0.6768 |
| No log | 1.0 | 318 | 0.4900 | 0.5683 | 0.4900 | 0.7000 |
| No log | 1.0063 | 320 | 0.4791 | 0.5563 | 0.4791 | 0.6921 |
| No log | 1.0126 | 322 | 0.4524 | 0.5296 | 0.4524 | 0.6726 |
| No log | 1.0189 | 324 | 0.4568 | 0.4785 | 0.4568 | 0.6759 |
| No log | 1.0252 | 326 | 0.4561 | 0.5154 | 0.4561 | 0.6753 |
| No log | 1.0314 | 328 | 0.5108 | 0.5644 | 0.5108 | 0.7147 |
| No log | 1.0377 | 330 | 0.5220 | 0.5710 | 0.5220 | 0.7225 |
| No log | 1.0440 | 332 | 0.4507 | 0.5357 | 0.4507 | 0.6713 |
| No log | 1.0503 | 334 | 0.4476 | 0.4719 | 0.4476 | 0.6690 |
| No log | 1.0566 | 336 | 0.4624 | 0.5533 | 0.4624 | 0.6800 |
| No log | 1.0629 | 338 | 0.6018 | 0.5378 | 0.6018 | 0.7758 |
| No log | 1.0692 | 340 | 0.6157 | 0.5216 | 0.6157 | 0.7846 |
| No log | 1.0755 | 342 | 0.5006 | 0.5030 | 0.5006 | 0.7076 |
| No log | 1.0818 | 344 | 0.4784 | 0.4407 | 0.4784 | 0.6917 |
| No log | 1.0881 | 346 | 0.4645 | 0.4348 | 0.4645 | 0.6816 |
| No log | 1.0943 | 348 | 0.4531 | 0.4843 | 0.4531 | 0.6731 |
| No log | 1.1006 | 350 | 0.4681 | 0.5530 | 0.4681 | 0.6842 |
| No log | 1.1069 | 352 | 0.4497 | 0.5415 | 0.4497 | 0.6706 |
| No log | 1.1132 | 354 | 0.4471 | 0.5348 | 0.4471 | 0.6687 |
| No log | 1.1195 | 356 | 0.4355 | 0.4720 | 0.4355 | 0.6599 |
| No log | 1.1258 | 358 | 0.4191 | 0.4984 | 0.4191 | 0.6474 |
| No log | 1.1321 | 360 | 0.4048 | 0.5830 | 0.4048 | 0.6362 |
| No log | 1.1384 | 362 | 0.4305 | 0.5964 | 0.4305 | 0.6562 |
| No log | 1.1447 | 364 | 0.4154 | 0.5886 | 0.4154 | 0.6445 |
| No log | 1.1509 | 366 | 0.4085 | 0.5737 | 0.4085 | 0.6391 |
| No log | 1.1572 | 368 | 0.4145 | 0.5675 | 0.4145 | 0.6438 |
| No log | 1.1635 | 370 | 0.4510 | 0.5868 | 0.4510 | 0.6715 |
| No log | 1.1698 | 372 | 0.5957 | 0.5596 | 0.5957 | 0.7718 |
| No log | 1.1761 | 374 | 0.5335 | 0.5670 | 0.5335 | 0.7304 |
| No log | 1.1824 | 376 | 0.4396 | 0.5635 | 0.4396 | 0.6630 |
| No log | 1.1887 | 378 | 0.4411 | 0.5379 | 0.4411 | 0.6642 |
| No log | 1.1950 | 380 | 0.4362 | 0.5354 | 0.4362 | 0.6604 |
| No log | 1.2013 | 382 | 0.4534 | 0.5435 | 0.4534 | 0.6734 |
| No log | 1.2075 | 384 | 0.4296 | 0.5520 | 0.4296 | 0.6555 |
| No log | 1.2138 | 386 | 0.4266 | 0.5481 | 0.4266 | 0.6531 |
| No log | 1.2201 | 388 | 0.4252 | 0.5482 | 0.4252 | 0.6520 |
| No log | 1.2264 | 390 | 0.4254 | 0.5407 | 0.4254 | 0.6522 |
| No log | 1.2327 | 392 | 0.4384 | 0.5204 | 0.4384 | 0.6621 |
| No log | 1.2390 | 394 | 0.4530 | 0.4888 | 0.4530 | 0.6731 |
| No log | 1.2453 | 396 | 0.4710 | 0.5440 | 0.4710 | 0.6863 |
| No log | 1.2516 | 398 | 0.5120 | 0.5318 | 0.5120 | 0.7155 |
| No log | 1.2579 | 400 | 0.5009 | 0.5510 | 0.5009 | 0.7078 |
| No log | 1.2642 | 402 | 0.4882 | 0.5547 | 0.4882 | 0.6987 |
| No log | 1.2704 | 404 | 0.4844 | 0.5687 | 0.4844 | 0.6960 |
| No log | 1.2767 | 406 | 0.4974 | 0.5624 | 0.4974 | 0.7053 |
| No log | 1.2830 | 408 | 0.4527 | 0.5299 | 0.4527 | 0.6728 |
| No log | 1.2893 | 410 | 0.4481 | 0.5026 | 0.4481 | 0.6694 |
| No log | 1.2956 | 412 | 0.4408 | 0.5473 | 0.4408 | 0.6639 |
| No log | 1.3019 | 414 | 0.4956 | 0.6047 | 0.4956 | 0.7040 |
| No log | 1.3082 | 416 | 0.4792 | 0.6101 | 0.4792 | 0.6922 |
| No log | 1.3145 | 418 | 0.4203 | 0.5664 | 0.4203 | 0.6483 |
| No log | 1.3208 | 420 | 0.4335 | 0.5261 | 0.4335 | 0.6584 |
| No log | 1.3270 | 422 | 0.4437 | 0.5094 | 0.4437 | 0.6661 |
| No log | 1.3333 | 424 | 0.4254 | 0.5705 | 0.4254 | 0.6522 |
| No log | 1.3396 | 426 | 0.4540 | 0.5908 | 0.4540 | 0.6738 |
| No log | 1.3459 | 428 | 0.4200 | 0.5599 | 0.4200 | 0.6480 |
| No log | 1.3522 | 430 | 0.4490 | 0.4792 | 0.4490 | 0.6701 |
| No log | 1.3585 | 432 | 0.4900 | 0.4328 | 0.4900 | 0.7000 |
| No log | 1.3648 | 434 | 0.4325 | 0.5198 | 0.4325 | 0.6577 |
| No log | 1.3711 | 436 | 0.4629 | 0.5773 | 0.4629 | 0.6804 |
| No log | 1.3774 | 438 | 0.5279 | 0.6088 | 0.5279 | 0.7265 |
| No log | 1.3836 | 440 | 0.4515 | 0.5876 | 0.4515 | 0.6720 |
| No log | 1.3899 | 442 | 0.4544 | 0.4816 | 0.4544 | 0.6741 |
| No log | 1.3962 | 444 | 0.5344 | 0.3721 | 0.5344 | 0.7311 |
| No log | 1.4025 | 446 | 0.4667 | 0.4603 | 0.4667 | 0.6832 |
| No log | 1.4088 | 448 | 0.4476 | 0.5907 | 0.4476 | 0.6690 |
| No log | 1.4151 | 450 | 0.5574 | 0.5828 | 0.5574 | 0.7466 |
| No log | 1.4214 | 452 | 0.5536 | 0.5754 | 0.5536 | 0.7441 |
| No log | 1.4277 | 454 | 0.4889 | 0.5879 | 0.4889 | 0.6992 |
| No log | 1.4340 | 456 | 0.4475 | 0.5637 | 0.4475 | 0.6689 |
| No log | 1.4403 | 458 | 0.4689 | 0.5894 | 0.4689 | 0.6848 |
| No log | 1.4465 | 460 | 0.5947 | 0.6027 | 0.5947 | 0.7712 |
| No log | 1.4528 | 462 | 0.6122 | 0.6422 | 0.6122 | 0.7824 |
| No log | 1.4591 | 464 | 0.4651 | 0.6175 | 0.4651 | 0.6820 |
| No log | 1.4654 | 466 | 0.4310 | 0.5557 | 0.4310 | 0.6565 |
| No log | 1.4717 | 468 | 0.4604 | 0.5151 | 0.4604 | 0.6785 |
| No log | 1.4780 | 470 | 0.4326 | 0.5487 | 0.4326 | 0.6577 |
| No log | 1.4843 | 472 | 0.4264 | 0.5595 | 0.4264 | 0.6530 |
| No log | 1.4906 | 474 | 0.4322 | 0.5707 | 0.4322 | 0.6574 |
| No log | 1.4969 | 476 | 0.4565 | 0.5351 | 0.4565 | 0.6756 |
| No log | 1.5031 | 478 | 0.4335 | 0.5410 | 0.4335 | 0.6584 |
| No log | 1.5094 | 480 | 0.4330 | 0.6007 | 0.4330 | 0.6580 |
| No log | 1.5157 | 482 | 0.6046 | 0.6136 | 0.6046 | 0.7775 |
| No log | 1.5220 | 484 | 0.6213 | 0.5786 | 0.6213 | 0.7882 |
| No log | 1.5283 | 486 | 0.5202 | 0.5802 | 0.5202 | 0.7213 |
| No log | 1.5346 | 488 | 0.4899 | 0.5651 | 0.4899 | 0.6999 |
| No log | 1.5409 | 490 | 0.5525 | 0.5716 | 0.5525 | 0.7433 |
| No log | 1.5472 | 492 | 0.5873 | 0.5840 | 0.5873 | 0.7663 |
| No log | 1.5535 | 494 | 0.5886 | 0.5970 | 0.5886 | 0.7672 |
| No log | 1.5597 | 496 | 0.4956 | 0.5652 | 0.4956 | 0.7040 |
| No log | 1.5660 | 498 | 0.4548 | 0.5153 | 0.4548 | 0.6744 |
| 0.6168 | 1.5723 | 500 | 0.4578 | 0.4906 | 0.4578 | 0.6766 |
| 0.6168 | 1.5786 | 502 | 0.4687 | 0.4650 | 0.4687 | 0.6846 |
| 0.6168 | 1.5849 | 504 | 0.4649 | 0.5025 | 0.4649 | 0.6819 |
| 0.6168 | 1.5912 | 506 | 0.4787 | 0.5359 | 0.4787 | 0.6919 |
| 0.6168 | 1.5975 | 508 | 0.5109 | 0.5803 | 0.5109 | 0.7148 |
| 0.6168 | 1.6038 | 510 | 0.4629 | 0.5427 | 0.4629 | 0.6804 |
| 0.6168 | 1.6101 | 512 | 0.4511 | 0.4776 | 0.4511 | 0.6716 |
| 0.6168 | 1.6164 | 514 | 0.4647 | 0.4382 | 0.4647 | 0.6817 |
| 0.6168 | 1.6226 | 516 | 0.4988 | 0.3909 | 0.4988 | 0.7063 |
| 0.6168 | 1.6289 | 518 | 0.4708 | 0.4333 | 0.4708 | 0.6862 |
| 0.6168 | 1.6352 | 520 | 0.4335 | 0.5524 | 0.4335 | 0.6584 |
| 0.6168 | 1.6415 | 522 | 0.5118 | 0.6061 | 0.5118 | 0.7154 |
| 0.6168 | 1.6478 | 524 | 0.5394 | 0.5932 | 0.5394 | 0.7344 |
| 0.6168 | 1.6541 | 526 | 0.4893 | 0.5836 | 0.4893 | 0.6995 |
| 0.6168 | 1.6604 | 528 | 0.4719 | 0.5948 | 0.4719 | 0.6869 |
| 0.6168 | 1.6667 | 530 | 0.4779 | 0.6197 | 0.4779 | 0.6913 |
| 0.6168 | 1.6730 | 532 | 0.5567 | 0.6521 | 0.5567 | 0.7461 |
| 0.6168 | 1.6792 | 534 | 0.5218 | 0.6485 | 0.5218 | 0.7223 |
| 0.6168 | 1.6855 | 536 | 0.4417 | 0.5646 | 0.4417 | 0.6646 |
| 0.6168 | 1.6918 | 538 | 0.4669 | 0.5291 | 0.4669 | 0.6833 |
| 0.6168 | 1.6981 | 540 | 0.4911 | 0.5074 | 0.4911 | 0.7008 |
| 0.6168 | 1.7044 | 542 | 0.4456 | 0.5703 | 0.4456 | 0.6675 |
| 0.6168 | 1.7107 | 544 | 0.5328 | 0.6591 | 0.5328 | 0.7300 |
| 0.6168 | 1.7170 | 546 | 0.5935 | 0.6528 | 0.5935 | 0.7704 |
| 0.6168 | 1.7233 | 548 | 0.5023 | 0.6085 | 0.5023 | 0.7087 |
| 0.6168 | 1.7296 | 550 | 0.4372 | 0.5448 | 0.4372 | 0.6612 |
| 0.6168 | 1.7358 | 552 | 0.4434 | 0.5654 | 0.4434 | 0.6659 |
| 0.6168 | 1.7421 | 554 | 0.5137 | 0.6049 | 0.5137 | 0.7167 |
| 0.6168 | 1.7484 | 556 | 0.5971 | 0.6242 | 0.5971 | 0.7727 |
| 0.6168 | 1.7547 | 558 | 0.5219 | 0.6144 | 0.5219 | 0.7225 |
| 0.6168 | 1.7610 | 560 | 0.4464 | 0.5712 | 0.4464 | 0.6681 |
| 0.6168 | 1.7673 | 562 | 0.4409 | 0.5542 | 0.4409 | 0.6640 |
| 0.6168 | 1.7736 | 564 | 0.4485 | 0.5703 | 0.4485 | 0.6697 |
| 0.6168 | 1.7799 | 566 | 0.5082 | 0.6321 | 0.5082 | 0.7129 |
| 0.6168 | 1.7862 | 568 | 0.4859 | 0.6142 | 0.4859 | 0.6970 |
| 0.6168 | 1.7925 | 570 | 0.4397 | 0.5736 | 0.4397 | 0.6631 |
| 0.6168 | 1.7987 | 572 | 0.4426 | 0.5451 | 0.4426 | 0.6653 |
| 0.6168 | 1.8050 | 574 | 0.4499 | 0.5175 | 0.4499 | 0.6708 |
| 0.6168 | 1.8113 | 576 | 0.4461 | 0.5466 | 0.4461 | 0.6679 |
| 0.6168 | 1.8176 | 578 | 0.4628 | 0.6349 | 0.4628 | 0.6803 |
| 0.6168 | 1.8239 | 580 | 0.5017 | 0.6687 | 0.5017 | 0.7083 |
| 0.6168 | 1.8302 | 582 | 0.4434 | 0.6080 | 0.4434 | 0.6659 |
| 0.6168 | 1.8365 | 584 | 0.4401 | 0.5153 | 0.4401 | 0.6634 |
| 0.6168 | 1.8428 | 586 | 0.4543 | 0.4893 | 0.4543 | 0.6740 |
| 0.6168 | 1.8491 | 588 | 0.4335 | 0.5541 | 0.4335 | 0.6584 |
| 0.6168 | 1.8553 | 590 | 0.5906 | 0.6594 | 0.5906 | 0.7685 |
| 0.6168 | 1.8616 | 592 | 0.8147 | 0.6481 | 0.8147 | 0.9026 |
| 0.6168 | 1.8679 | 594 | 0.7159 | 0.6446 | 0.7159 | 0.8461 |
| 0.6168 | 1.8742 | 596 | 0.4781 | 0.6088 | 0.4781 | 0.6915 |
| 0.6168 | 1.8805 | 598 | 0.4439 | 0.4836 | 0.4439 | 0.6662 |
| 0.6168 | 1.8868 | 600 | 0.4566 | 0.4557 | 0.4566 | 0.6757 |
| 0.6168 | 1.8931 | 602 | 0.4287 | 0.5373 | 0.4287 | 0.6547 |
| 0.6168 | 1.8994 | 604 | 0.4456 | 0.5949 | 0.4456 | 0.6675 |
| 0.6168 | 1.9057 | 606 | 0.4832 | 0.6382 | 0.4832 | 0.6952 |
| 0.6168 | 1.9119 | 608 | 0.4698 | 0.6393 | 0.4698 | 0.6855 |
| 0.6168 | 1.9182 | 610 | 0.4329 | 0.5806 | 0.4329 | 0.6580 |
| 0.6168 | 1.9245 | 612 | 0.4489 | 0.6122 | 0.4489 | 0.6700 |
| 0.6168 | 1.9308 | 614 | 0.4779 | 0.6377 | 0.4779 | 0.6913 |
| 0.6168 | 1.9371 | 616 | 0.4334 | 0.5817 | 0.4334 | 0.6583 |
| 0.6168 | 1.9434 | 618 | 0.4404 | 0.5204 | 0.4404 | 0.6636 |
| 0.6168 | 1.9497 | 620 | 0.4414 | 0.5303 | 0.4414 | 0.6644 |
| 0.6168 | 1.9560 | 622 | 0.4488 | 0.6036 | 0.4488 | 0.6699 |
| 0.6168 | 1.9623 | 624 | 0.4846 | 0.6425 | 0.4846 | 0.6961 |
| 0.6168 | 1.9686 | 626 | 0.4593 | 0.6147 | 0.4593 | 0.6777 |
| 0.6168 | 1.9748 | 628 | 0.4434 | 0.5229 | 0.4434 | 0.6659 |
| 0.6168 | 1.9811 | 630 | 0.5029 | 0.4279 | 0.5029 | 0.7092 |
| 0.6168 | 1.9874 | 632 | 0.4739 | 0.4619 | 0.4739 | 0.6884 |
| 0.6168 | 1.9937 | 634 | 0.4356 | 0.5404 | 0.4356 | 0.6600 |
| 0.6168 | 2.0 | 636 | 0.4431 | 0.5583 | 0.4431 | 0.6657 |
| 0.6168 | 2.0063 | 638 | 0.4482 | 0.5137 | 0.4482 | 0.6695 |
| 0.6168 | 2.0126 | 640 | 0.4698 | 0.4867 | 0.4698 | 0.6854 |
| 0.6168 | 2.0189 | 642 | 0.4694 | 0.5070 | 0.4694 | 0.6851 |
| 0.6168 | 2.0252 | 644 | 0.4760 | 0.5115 | 0.4760 | 0.6899 |
| 0.6168 | 2.0314 | 646 | 0.4932 | 0.5436 | 0.4932 | 0.7023 |
| 0.6168 | 2.0377 | 648 | 0.4959 | 0.5361 | 0.4959 | 0.7042 |
| 0.6168 | 2.0440 | 650 | 0.5141 | 0.5522 | 0.5141 | 0.7170 |
| 0.6168 | 2.0503 | 652 | 0.5923 | 0.6191 | 0.5923 | 0.7696 |
| 0.6168 | 2.0566 | 654 | 0.5373 | 0.6122 | 0.5373 | 0.7330 |
| 0.6168 | 2.0629 | 656 | 0.4557 | 0.5540 | 0.4557 | 0.6750 |
| 0.6168 | 2.0692 | 658 | 0.4540 | 0.5032 | 0.4540 | 0.6738 |
| 0.6168 | 2.0755 | 660 | 0.4465 | 0.5682 | 0.4465 | 0.6682 |
| 0.6168 | 2.0818 | 662 | 0.4612 | 0.5993 | 0.4612 | 0.6791 |
| 0.6168 | 2.0881 | 664 | 0.4772 | 0.6044 | 0.4772 | 0.6908 |
| 0.6168 | 2.0943 | 666 | 0.4469 | 0.5729 | 0.4469 | 0.6685 |
| 0.6168 | 2.1006 | 668 | 0.4566 | 0.5216 | 0.4566 | 0.6757 |
| 0.6168 | 2.1069 | 670 | 0.4556 | 0.5326 | 0.4556 | 0.6750 |
| 0.6168 | 2.1132 | 672 | 0.4481 | 0.5748 | 0.4481 | 0.6694 |
| 0.6168 | 2.1195 | 674 | 0.4865 | 0.6066 | 0.4865 | 0.6975 |
| 0.6168 | 2.1258 | 676 | 0.4763 | 0.6105 | 0.4763 | 0.6901 |
| 0.6168 | 2.1321 | 678 | 0.4473 | 0.5493 | 0.4473 | 0.6688 |
| 0.6168 | 2.1384 | 680 | 0.4496 | 0.5392 | 0.4496 | 0.6705 |
| 0.6168 | 2.1447 | 682 | 0.4709 | 0.6130 | 0.4709 | 0.6862 |
| 0.6168 | 2.1509 | 684 | 0.6828 | 0.6267 | 0.6828 | 0.8263 |
| 0.6168 | 2.1572 | 686 | 0.8103 | 0.6521 | 0.8103 | 0.9002 |
| 0.6168 | 2.1635 | 688 | 0.6630 | 0.6061 | 0.6630 | 0.8143 |
| 0.6168 | 2.1698 | 690 | 0.4662 | 0.6020 | 0.4662 | 0.6828 |
| 0.6168 | 2.1761 | 692 | 0.4516 | 0.5239 | 0.4516 | 0.6720 |
| 0.6168 | 2.1824 | 694 | 0.4492 | 0.5802 | 0.4492 | 0.6702 |
| 0.6168 | 2.1887 | 696 | 0.4976 | 0.6040 | 0.4976 | 0.7054 |
| 0.6168 | 2.1950 | 698 | 0.6299 | 0.6332 | 0.6299 | 0.7937 |
| 0.6168 | 2.2013 | 700 | 0.6688 | 0.6466 | 0.6688 | 0.8178 |
| 0.6168 | 2.2075 | 702 | 0.5166 | 0.6479 | 0.5166 | 0.7187 |
| 0.6168 | 2.2138 | 704 | 0.4484 | 0.5805 | 0.4484 | 0.6697 |
| 0.6168 | 2.2201 | 706 | 0.4544 | 0.5440 | 0.4544 | 0.6741 |
| 0.6168 | 2.2264 | 708 | 0.4569 | 0.5398 | 0.4569 | 0.6760 |
| 0.6168 | 2.2327 | 710 | 0.4531 | 0.5747 | 0.4531 | 0.6731 |
| 0.6168 | 2.2390 | 712 | 0.4682 | 0.6000 | 0.4682 | 0.6842 |
| 0.6168 | 2.2453 | 714 | 0.5003 | 0.6096 | 0.5003 | 0.7074 |
| 0.6168 | 2.2516 | 716 | 0.5291 | 0.5905 | 0.5291 | 0.7274 |
| 0.6168 | 2.2579 | 718 | 0.4991 | 0.5957 | 0.4991 | 0.7065 |
| 0.6168 | 2.2642 | 720 | 0.5197 | 0.5879 | 0.5197 | 0.7209 |
| 0.6168 | 2.2704 | 722 | 0.4952 | 0.5493 | 0.4952 | 0.7037 |
| 0.6168 | 2.2767 | 724 | 0.5032 | 0.5694 | 0.5032 | 0.7093 |
| 0.6168 | 2.2830 | 726 | 0.5523 | 0.5756 | 0.5523 | 0.7431 |
| 0.6168 | 2.2893 | 728 | 0.6089 | 0.6209 | 0.6089 | 0.7803 |
| 0.6168 | 2.2956 | 730 | 0.5343 | 0.6081 | 0.5343 | 0.7309 |
| 0.6168 | 2.3019 | 732 | 0.4702 | 0.5385 | 0.4702 | 0.6857 |
| 0.6168 | 2.3082 | 734 | 0.4921 | 0.4988 | 0.4921 | 0.7015 |
| 0.6168 | 2.3145 | 736 | 0.4682 | 0.5508 | 0.4682 | 0.6843 |
| 0.6168 | 2.3208 | 738 | 0.4779 | 0.5928 | 0.4779 | 0.6913 |
| 0.6168 | 2.3270 | 740 | 0.4663 | 0.5907 | 0.4663 | 0.6828 |
| 0.6168 | 2.3333 | 742 | 0.4595 | 0.5644 | 0.4595 | 0.6779 |
| 0.6168 | 2.3396 | 744 | 0.4796 | 0.4895 | 0.4796 | 0.6926 |
| 0.6168 | 2.3459 | 746 | 0.4608 | 0.5578 | 0.4608 | 0.6788 |
| 0.6168 | 2.3522 | 748 | 0.4919 | 0.6168 | 0.4919 | 0.7013 |
| 0.6168 | 2.3585 | 750 | 0.4988 | 0.6122 | 0.4988 | 0.7063 |
| 0.6168 | 2.3648 | 752 | 0.4754 | 0.5926 | 0.4754 | 0.6895 |
| 0.6168 | 2.3711 | 754 | 0.4788 | 0.5962 | 0.4788 | 0.6919 |
| 0.6168 | 2.3774 | 756 | 0.5472 | 0.6241 | 0.5472 | 0.7397 |
| 0.6168 | 2.3836 | 758 | 0.5343 | 0.6358 | 0.5343 | 0.7309 |
| 0.6168 | 2.3899 | 760 | 0.5494 | 0.6484 | 0.5494 | 0.7412 |
| 0.6168 | 2.3962 | 762 | 0.6423 | 0.6621 | 0.6423 | 0.8014 |
| 0.6168 | 2.4025 | 764 | 0.5970 | 0.6302 | 0.5970 | 0.7726 |
| 0.6168 | 2.4088 | 766 | 0.4743 | 0.5986 | 0.4743 | 0.6887 |
| 0.6168 | 2.4151 | 768 | 0.4533 | 0.5541 | 0.4533 | 0.6733 |
| 0.6168 | 2.4214 | 770 | 0.4515 | 0.5663 | 0.4515 | 0.6720 |
| 0.6168 | 2.4277 | 772 | 0.4737 | 0.5937 | 0.4737 | 0.6883 |
| 0.6168 | 2.4340 | 774 | 0.4717 | 0.5974 | 0.4717 | 0.6868 |
| 0.6168 | 2.4403 | 776 | 0.4551 | 0.5597 | 0.4551 | 0.6746 |
| 0.6168 | 2.4465 | 778 | 0.4473 | 0.5481 | 0.4473 | 0.6688 |
| 0.6168 | 2.4528 | 780 | 0.4460 | 0.5399 | 0.4460 | 0.6678 |
| 0.6168 | 2.4591 | 782 | 0.4570 | 0.5819 | 0.4570 | 0.6760 |
| 0.6168 | 2.4654 | 784 | 0.4882 | 0.6198 | 0.4882 | 0.6987 |
| 0.6168 | 2.4717 | 786 | 0.4643 | 0.5972 | 0.4643 | 0.6814 |
| 0.6168 | 2.4780 | 788 | 0.4631 | 0.5362 | 0.4631 | 0.6805 |
| 0.6168 | 2.4843 | 790 | 0.4646 | 0.5431 | 0.4646 | 0.6816 |
| 0.6168 | 2.4906 | 792 | 0.4667 | 0.5166 | 0.4667 | 0.6831 |
| 0.6168 | 2.4969 | 794 | 0.4569 | 0.5557 | 0.4569 | 0.6760 |
| 0.6168 | 2.5031 | 796 | 0.4888 | 0.6104 | 0.4888 | 0.6991 |
| 0.6168 | 2.5094 | 798 | 0.5187 | 0.6351 | 0.5187 | 0.7202 |
| 0.6168 | 2.5157 | 800 | 0.4694 | 0.5874 | 0.4694 | 0.6851 |
| 0.6168 | 2.5220 | 802 | 0.4601 | 0.4851 | 0.4601 | 0.6783 |
| 0.6168 | 2.5283 | 804 | 0.4941 | 0.4234 | 0.4941 | 0.7029 |
| 0.6168 | 2.5346 | 806 | 0.4711 | 0.4523 | 0.4711 | 0.6864 |
| 0.6168 | 2.5409 | 808 | 0.5131 | 0.6013 | 0.5131 | 0.7163 |
| 0.6168 | 2.5472 | 810 | 0.6076 | 0.6146 | 0.6076 | 0.7795 |
| 0.6168 | 2.5535 | 812 | 0.5727 | 0.5899 | 0.5727 | 0.7568 |
| 0.6168 | 2.5597 | 814 | 0.5201 | 0.5752 | 0.5201 | 0.7212 |
| 0.6168 | 2.5660 | 816 | 0.5087 | 0.5671 | 0.5087 | 0.7132 |
| 0.6168 | 2.5723 | 818 | 0.4889 | 0.5301 | 0.4889 | 0.6992 |
| 0.6168 | 2.5786 | 820 | 0.4882 | 0.5239 | 0.4882 | 0.6987 |
| 0.6168 | 2.5849 | 822 | 0.4822 | 0.5388 | 0.4822 | 0.6944 |
| 0.6168 | 2.5912 | 824 | 0.4752 | 0.5557 | 0.4752 | 0.6894 |
| 0.6168 | 2.5975 | 826 | 0.4755 | 0.5177 | 0.4755 | 0.6895 |
| 0.6168 | 2.6038 | 828 | 0.4695 | 0.5348 | 0.4695 | 0.6852 |
| 0.6168 | 2.6101 | 830 | 0.4779 | 0.6109 | 0.4779 | 0.6913 |
| 0.6168 | 2.6164 | 832 | 0.4690 | 0.6106 | 0.4690 | 0.6849 |
| 0.6168 | 2.6226 | 834 | 0.4478 | 0.5889 | 0.4478 | 0.6692 |
| 0.6168 | 2.6289 | 836 | 0.4370 | 0.5301 | 0.4370 | 0.6610 |
| 0.6168 | 2.6352 | 838 | 0.4351 | 0.5446 | 0.4351 | 0.6596 |
| 0.6168 | 2.6415 | 840 | 0.4627 | 0.6075 | 0.4627 | 0.6802 |
| 0.6168 | 2.6478 | 842 | 0.4984 | 0.6086 | 0.4984 | 0.7060 |
| 0.6168 | 2.6541 | 844 | 0.5112 | 0.6214 | 0.5112 | 0.7150 |
| 0.6168 | 2.6604 | 846 | 0.4755 | 0.6191 | 0.4755 | 0.6896 |
| 0.6168 | 2.6667 | 848 | 0.4469 | 0.5968 | 0.4469 | 0.6685 |
| 0.6168 | 2.6730 | 850 | 0.4566 | 0.6065 | 0.4566 | 0.6757 |
| 0.6168 | 2.6792 | 852 | 0.5356 | 0.6601 | 0.5356 | 0.7319 |
| 0.6168 | 2.6855 | 854 | 0.5851 | 0.6592 | 0.5851 | 0.7649 |
| 0.6168 | 2.6918 | 856 | 0.5472 | 0.6549 | 0.5472 | 0.7397 |
| 0.6168 | 2.6981 | 858 | 0.5496 | 0.6657 | 0.5496 | 0.7413 |
| 0.6168 | 2.7044 | 860 | 0.5095 | 0.6290 | 0.5095 | 0.7138 |
| 0.6168 | 2.7107 | 862 | 0.5340 | 0.6657 | 0.5340 | 0.7308 |
| 0.6168 | 2.7170 | 864 | 0.4919 | 0.6255 | 0.4919 | 0.7014 |
| 0.6168 | 2.7233 | 866 | 0.5321 | 0.6314 | 0.5321 | 0.7294 |
| 0.6168 | 2.7296 | 868 | 0.5331 | 0.6205 | 0.5331 | 0.7301 |
| 0.6168 | 2.7358 | 870 | 0.4911 | 0.6002 | 0.4911 | 0.7008 |
| 0.6168 | 2.7421 | 872 | 0.4601 | 0.5945 | 0.4601 | 0.6783 |
| 0.6168 | 2.7484 | 874 | 0.4646 | 0.6022 | 0.4646 | 0.6816 |
| 0.6168 | 2.7547 | 876 | 0.4638 | 0.6107 | 0.4638 | 0.6810 |
| 0.6168 | 2.7610 | 878 | 0.4979 | 0.6217 | 0.4979 | 0.7056 |
| 0.6168 | 2.7673 | 880 | 0.4824 | 0.6171 | 0.4824 | 0.6945 |
| 0.6168 | 2.7736 | 882 | 0.4717 | 0.6174 | 0.4717 | 0.6868 |
| 0.6168 | 2.7799 | 884 | 0.5750 | 0.6513 | 0.5750 | 0.7583 |
| 0.6168 | 2.7862 | 886 | 0.6010 | 0.6408 | 0.6010 | 0.7752 |
| 0.6168 | 2.7925 | 888 | 0.5237 | 0.6295 | 0.5237 | 0.7236 |
| 0.6168 | 2.7987 | 890 | 0.5266 | 0.6235 | 0.5266 | 0.7257 |
| 0.6168 | 2.8050 | 892 | 0.4882 | 0.6032 | 0.4882 | 0.6987 |
| 0.6168 | 2.8113 | 894 | 0.4529 | 0.5806 | 0.4529 | 0.6730 |
| 0.6168 | 2.8176 | 896 | 0.4637 | 0.5802 | 0.4637 | 0.6810 |
| 0.6168 | 2.8239 | 898 | 0.5794 | 0.6391 | 0.5794 | 0.7612 |
| 0.6168 | 2.8302 | 900 | 0.6045 | 0.6505 | 0.6045 | 0.7775 |
| 0.6168 | 2.8365 | 902 | 0.4849 | 0.6172 | 0.4849 | 0.6964 |
| 0.6168 | 2.8428 | 904 | 0.4473 | 0.5701 | 0.4473 | 0.6688 |
| 0.6168 | 2.8491 | 906 | 0.4564 | 0.5158 | 0.4564 | 0.6756 |
| 0.6168 | 2.8553 | 908 | 0.4440 | 0.5683 | 0.4440 | 0.6663 |
| 0.6168 | 2.8616 | 910 | 0.4760 | 0.6210 | 0.4760 | 0.6899 |
| 0.6168 | 2.8679 | 912 | 0.4973 | 0.6063 | 0.4973 | 0.7052 |
| 0.6168 | 2.8742 | 914 | 0.4613 | 0.5769 | 0.4613 | 0.6792 |
| 0.6168 | 2.8805 | 916 | 0.4627 | 0.5601 | 0.4627 | 0.6802 |
| 0.6168 | 2.8868 | 918 | 0.4902 | 0.5810 | 0.4902 | 0.7001 |
| 0.6168 | 2.8931 | 920 | 0.5110 | 0.5978 | 0.5110 | 0.7148 |
| 0.6168 | 2.8994 | 922 | 0.4982 | 0.5263 | 0.4982 | 0.7059 |
| 0.6168 | 2.9057 | 924 | 0.5189 | 0.4791 | 0.5189 | 0.7204 |
| 0.6168 | 2.9119 | 926 | 0.5141 | 0.4919 | 0.5141 | 0.7170 |
| 0.6168 | 2.9182 | 928 | 0.5133 | 0.5588 | 0.5133 | 0.7165 |
| 0.6168 | 2.9245 | 930 | 0.5122 | 0.5844 | 0.5122 | 0.7157 |
| 0.6168 | 2.9308 | 932 | 0.5012 | 0.5188 | 0.5012 | 0.7080 |
| 0.6168 | 2.9371 | 934 | 0.4874 | 0.5313 | 0.4874 | 0.6981 |
| 0.6168 | 2.9434 | 936 | 0.4737 | 0.5710 | 0.4737 | 0.6883 |
| 0.6168 | 2.9497 | 938 | 0.5110 | 0.6227 | 0.5110 | 0.7149 |
| 0.6168 | 2.9560 | 940 | 0.4784 | 0.6200 | 0.4784 | 0.6916 |
| 0.6168 | 2.9623 | 942 | 0.4436 | 0.5909 | 0.4436 | 0.6660 |
| 0.6168 | 2.9686 | 944 | 0.4353 | 0.5637 | 0.4353 | 0.6598 |
| 0.6168 | 2.9748 | 946 | 0.4419 | 0.6045 | 0.4419 | 0.6648 |
| 0.6168 | 2.9811 | 948 | 0.4552 | 0.6259 | 0.4552 | 0.6747 |
| 0.6168 | 2.9874 | 950 | 0.4415 | 0.6037 | 0.4415 | 0.6644 |
| 0.6168 | 2.9937 | 952 | 0.4334 | 0.5655 | 0.4334 | 0.6583 |
| 0.6168 | 3.0 | 954 | 0.4329 | 0.5599 | 0.4329 | 0.6579 |
| 0.6168 | 3.0063 | 956 | 0.4388 | 0.5950 | 0.4388 | 0.6624 |
| 0.6168 | 3.0126 | 958 | 0.5174 | 0.6305 | 0.5174 | 0.7193 |
| 0.6168 | 3.0189 | 960 | 0.6504 | 0.6523 | 0.6504 | 0.8065 |
| 0.6168 | 3.0252 | 962 | 0.5995 | 0.6266 | 0.5995 | 0.7743 |
| 0.6168 | 3.0314 | 964 | 0.4915 | 0.6136 | 0.4915 | 0.7011 |
| 0.6168 | 3.0377 | 966 | 0.5021 | 0.6101 | 0.5021 | 0.7086 |
| 0.6168 | 3.0440 | 968 | 0.5449 | 0.6237 | 0.5449 | 0.7382 |
| 0.6168 | 3.0503 | 970 | 0.5088 | 0.5658 | 0.5088 | 0.7133 |
| 0.6168 | 3.0566 | 972 | 0.5037 | 0.5307 | 0.5037 | 0.7097 |
| 0.6168 | 3.0629 | 974 | 0.5258 | 0.4603 | 0.5258 | 0.7251 |
| 0.6168 | 3.0692 | 976 | 0.5263 | 0.4741 | 0.5263 | 0.7255 |
| 0.6168 | 3.0755 | 978 | 0.5189 | 0.5580 | 0.5189 | 0.7203 |
| 0.6168 | 3.0818 | 980 | 0.5218 | 0.5977 | 0.5218 | 0.7223 |
| 0.6168 | 3.0881 | 982 | 0.4925 | 0.5705 | 0.4925 | 0.7018 |
| 0.6168 | 3.0943 | 984 | 0.5247 | 0.4499 | 0.5247 | 0.7243 |
| 0.6168 | 3.1006 | 986 | 0.5441 | 0.4333 | 0.5441 | 0.7376 |
| 0.6168 | 3.1069 | 988 | 0.4828 | 0.4792 | 0.4828 | 0.6948 |
| 0.6168 | 3.1132 | 990 | 0.4882 | 0.6114 | 0.4882 | 0.6987 |
| 0.6168 | 3.1195 | 992 | 0.6132 | 0.6336 | 0.6132 | 0.7831 |
| 0.6168 | 3.1258 | 994 | 0.5768 | 0.6112 | 0.5768 | 0.7595 |
| 0.6168 | 3.1321 | 996 | 0.4734 | 0.5712 | 0.4734 | 0.6880 |
| 0.6168 | 3.1384 | 998 | 0.4728 | 0.4874 | 0.4728 | 0.6876 |
| 0.2191 | 3.1447 | 1000 | 0.4709 | 0.4862 | 0.4709 | 0.6862 |
| 0.2191 | 3.1509 | 1002 | 0.4621 | 0.5558 | 0.4621 | 0.6798 |
| 0.2191 | 3.1572 | 1004 | 0.4894 | 0.5871 | 0.4894 | 0.6996 |
| 0.2191 | 3.1635 | 1006 | 0.4857 | 0.5880 | 0.4857 | 0.6969 |
| 0.2191 | 3.1698 | 1008 | 0.4703 | 0.5338 | 0.4703 | 0.6858 |
| 0.2191 | 3.1761 | 1010 | 0.4707 | 0.5164 | 0.4707 | 0.6861 |
| 0.2191 | 3.1824 | 1012 | 0.4615 | 0.5635 | 0.4615 | 0.6794 |
| 0.2191 | 3.1887 | 1014 | 0.5353 | 0.6266 | 0.5353 | 0.7316 |
| 0.2191 | 3.1950 | 1016 | 0.6134 | 0.6373 | 0.6134 | 0.7832 |
| 0.2191 | 3.2013 | 1018 | 0.5485 | 0.6253 | 0.5485 | 0.7406 |
| 0.2191 | 3.2075 | 1020 | 0.4671 | 0.5818 | 0.4671 | 0.6835 |
| 0.2191 | 3.2138 | 1022 | 0.4729 | 0.5298 | 0.4729 | 0.6877 |
| 0.2191 | 3.2201 | 1024 | 0.4646 | 0.5701 | 0.4646 | 0.6816 |
| 0.2191 | 3.2264 | 1026 | 0.5196 | 0.6276 | 0.5196 | 0.7209 |
| 0.2191 | 3.2327 | 1028 | 0.5914 | 0.6449 | 0.5914 | 0.7690 |
| 0.2191 | 3.2390 | 1030 | 0.5345 | 0.6325 | 0.5345 | 0.7311 |
| 0.2191 | 3.2453 | 1032 | 0.4740 | 0.5896 | 0.4740 | 0.6885 |
| 0.2191 | 3.2516 | 1034 | 0.4757 | 0.6038 | 0.4757 | 0.6897 |
| 0.2191 | 3.2579 | 1036 | 0.4797 | 0.6070 | 0.4797 | 0.6926 |
| 0.2191 | 3.2642 | 1038 | 0.5105 | 0.6500 | 0.5105 | 0.7145 |
| 0.2191 | 3.2704 | 1040 | 0.5650 | 0.6644 | 0.5650 | 0.7517 |
| 0.2191 | 3.2767 | 1042 | 0.5207 | 0.6509 | 0.5207 | 0.7216 |
| 0.2191 | 3.2830 | 1044 | 0.4863 | 0.5506 | 0.4863 | 0.6973 |
| 0.2191 | 3.2893 | 1046 | 0.5195 | 0.4940 | 0.5195 | 0.7208 |
| 0.2191 | 3.2956 | 1048 | 0.4869 | 0.5194 | 0.4869 | 0.6978 |
| 0.2191 | 3.3019 | 1050 | 0.5009 | 0.6426 | 0.5009 | 0.7078 |
| 0.2191 | 3.3082 | 1052 | 0.5457 | 0.6660 | 0.5457 | 0.7387 |
| 0.2191 | 3.3145 | 1054 | 0.5233 | 0.6402 | 0.5233 | 0.7234 |
| 0.2191 | 3.3208 | 1056 | 0.4881 | 0.6002 | 0.4881 | 0.6986 |
| 0.2191 | 3.3270 | 1058 | 0.5018 | 0.6009 | 0.5018 | 0.7084 |
| 0.2191 | 3.3333 | 1060 | 0.5207 | 0.6291 | 0.5207 | 0.7216 |
| 0.2191 | 3.3396 | 1062 | 0.5560 | 0.6547 | 0.5560 | 0.7456 |
| 0.2191 | 3.3459 | 1064 | 0.5198 | 0.6343 | 0.5198 | 0.7210 |
| 0.2191 | 3.3522 | 1066 | 0.4909 | 0.5900 | 0.4909 | 0.7006 |
| 0.2191 | 3.3585 | 1068 | 0.4876 | 0.5475 | 0.4876 | 0.6983 |
| 0.2191 | 3.3648 | 1070 | 0.4872 | 0.5630 | 0.4872 | 0.6980 |
| 0.2191 | 3.3711 | 1072 | 0.4876 | 0.5919 | 0.4876 | 0.6983 |
| 0.2191 | 3.3774 | 1074 | 0.4887 | 0.5430 | 0.4887 | 0.6991 |
| 0.2191 | 3.3836 | 1076 | 0.4853 | 0.5461 | 0.4853 | 0.6966 |
| 0.2191 | 3.3899 | 1078 | 0.4850 | 0.5996 | 0.4850 | 0.6964 |
| 0.2191 | 3.3962 | 1080 | 0.4844 | 0.5939 | 0.4844 | 0.6960 |
| 0.2191 | 3.4025 | 1082 | 0.4808 | 0.5413 | 0.4808 | 0.6934 |
| 0.2191 | 3.4088 | 1084 | 0.4776 | 0.5428 | 0.4776 | 0.6911 |
| 0.2191 | 3.4151 | 1086 | 0.4739 | 0.5833 | 0.4739 | 0.6884 |
| 0.2191 | 3.4214 | 1088 | 0.5182 | 0.6022 | 0.5182 | 0.7199 |
| 0.2191 | 3.4277 | 1090 | 0.5028 | 0.5787 | 0.5028 | 0.7091 |
| 0.2191 | 3.4340 | 1092 | 0.4934 | 0.5866 | 0.4934 | 0.7024 |
| 0.2191 | 3.4403 | 1094 | 0.4849 | 0.5311 | 0.4849 | 0.6964 |
| 0.2191 | 3.4465 | 1096 | 0.4987 | 0.5814 | 0.4987 | 0.7062 |
| 0.2191 | 3.4528 | 1098 | 0.5297 | 0.5906 | 0.5297 | 0.7278 |
| 0.2191 | 3.4591 | 1100 | 0.4967 | 0.5850 | 0.4967 | 0.7047 |
| 0.2191 | 3.4654 | 1102 | 0.4799 | 0.5463 | 0.4799 | 0.6927 |
| 0.2191 | 3.4717 | 1104 | 0.4803 | 0.5327 | 0.4803 | 0.6931 |
| 0.2191 | 3.4780 | 1106 | 0.4778 | 0.5716 | 0.4778 | 0.6912 |
| 0.2191 | 3.4843 | 1108 | 0.4970 | 0.5929 | 0.4970 | 0.7049 |
| 0.2191 | 3.4906 | 1110 | 0.4984 | 0.5921 | 0.4984 | 0.7060 |
| 0.2191 | 3.4969 | 1112 | 0.4783 | 0.5598 | 0.4783 | 0.6916 |
| 0.2191 | 3.5031 | 1114 | 0.4793 | 0.5569 | 0.4793 | 0.6923 |
| 0.2191 | 3.5094 | 1116 | 0.4870 | 0.5872 | 0.4870 | 0.6979 |
| 0.2191 | 3.5157 | 1118 | 0.4885 | 0.5913 | 0.4885 | 0.6989 |
| 0.2191 | 3.5220 | 1120 | 0.4890 | 0.5400 | 0.4890 | 0.6993 |
| 0.2191 | 3.5283 | 1122 | 0.4850 | 0.5606 | 0.4850 | 0.6964 |
| 0.2191 | 3.5346 | 1124 | 0.4946 | 0.5952 | 0.4946 | 0.7033 |
| 0.2191 | 3.5409 | 1126 | 0.4899 | 0.5991 | 0.4899 | 0.6999 |
| 0.2191 | 3.5472 | 1128 | 0.5096 | 0.6137 | 0.5096 | 0.7139 |
| 0.2191 | 3.5535 | 1130 | 0.5116 | 0.6028 | 0.5116 | 0.7153 |
| 0.2191 | 3.5597 | 1132 | 0.4771 | 0.5908 | 0.4771 | 0.6907 |
| 0.2191 | 3.5660 | 1134 | 0.4686 | 0.5789 | 0.4686 | 0.6846 |
| 0.2191 | 3.5723 | 1136 | 0.4895 | 0.5819 | 0.4895 | 0.6996 |
| 0.2191 | 3.5786 | 1138 | 0.5350 | 0.6032 | 0.5350 | 0.7314 |
| 0.2191 | 3.5849 | 1140 | 0.5408 | 0.6008 | 0.5408 | 0.7354 |
| 0.2191 | 3.5912 | 1142 | 0.4848 | 0.5834 | 0.4848 | 0.6963 |
| 0.2191 | 3.5975 | 1144 | 0.4599 | 0.5429 | 0.4599 | 0.6782 |
| 0.2191 | 3.6038 | 1146 | 0.4601 | 0.5355 | 0.4601 | 0.6783 |
| 0.2191 | 3.6101 | 1148 | 0.4588 | 0.5793 | 0.4588 | 0.6774 |
| 0.2191 | 3.6164 | 1150 | 0.4823 | 0.5979 | 0.4823 | 0.6945 |
| 0.2191 | 3.6226 | 1152 | 0.4627 | 0.5902 | 0.4627 | 0.6802 |
| 0.2191 | 3.6289 | 1154 | 0.4612 | 0.5895 | 0.4612 | 0.6791 |
| 0.2191 | 3.6352 | 1156 | 0.4707 | 0.5929 | 0.4707 | 0.6861 |
| 0.2191 | 3.6415 | 1158 | 0.4756 | 0.5888 | 0.4756 | 0.6896 |
| 0.2191 | 3.6478 | 1160 | 0.4981 | 0.5867 | 0.4981 | 0.7058 |
| 0.2191 | 3.6541 | 1162 | 0.4825 | 0.5920 | 0.4825 | 0.6947 |
| 0.2191 | 3.6604 | 1164 | 0.4698 | 0.4999 | 0.4698 | 0.6854 |
| 0.2191 | 3.6667 | 1166 | 0.4758 | 0.4843 | 0.4758 | 0.6898 |
| 0.2191 | 3.6730 | 1168 | 0.4684 | 0.5680 | 0.4684 | 0.6844 |
| 0.2191 | 3.6792 | 1170 | 0.5320 | 0.6034 | 0.5320 | 0.7294 |
| 0.2191 | 3.6855 | 1172 | 0.5642 | 0.6074 | 0.5642 | 0.7511 |
| 0.2191 | 3.6918 | 1174 | 0.5204 | 0.5851 | 0.5204 | 0.7214 |
| 0.2191 | 3.6981 | 1176 | 0.4829 | 0.5681 | 0.4829 | 0.6949 |
| 0.2191 | 3.7044 | 1178 | 0.4784 | 0.5650 | 0.4784 | 0.6917 |
| 0.2191 | 3.7107 | 1180 | 0.4994 | 0.5960 | 0.4994 | 0.7067 |
| 0.2191 | 3.7170 | 1182 | 0.5731 | 0.6224 | 0.5731 | 0.7571 |
| 0.2191 | 3.7233 | 1184 | 0.5590 | 0.6252 | 0.5590 | 0.7476 |
| 0.2191 | 3.7296 | 1186 | 0.5240 | 0.6032 | 0.5240 | 0.7239 |
| 0.2191 | 3.7358 | 1188 | 0.4929 | 0.5795 | 0.4929 | 0.7021 |
| 0.2191 | 3.7421 | 1190 | 0.5136 | 0.6038 | 0.5136 | 0.7167 |
| 0.2191 | 3.7484 | 1192 | 0.6420 | 0.6343 | 0.6420 | 0.8012 |
| 0.2191 | 3.7547 | 1194 | 0.6949 | 0.6368 | 0.6949 | 0.8336 |
| 0.2191 | 3.7610 | 1196 | 0.6039 | 0.6441 | 0.6039 | 0.7771 |
| 0.2191 | 3.7673 | 1198 | 0.4848 | 0.5966 | 0.4848 | 0.6963 |
| 0.2191 | 3.7736 | 1200 | 0.4677 | 0.5934 | 0.4677 | 0.6839 |
| 0.2191 | 3.7799 | 1202 | 0.4915 | 0.6116 | 0.4915 | 0.7011 |
| 0.2191 | 3.7862 | 1204 | 0.5060 | 0.6274 | 0.5060 | 0.7113 |
| 0.2191 | 3.7925 | 1206 | 0.4734 | 0.6065 | 0.4734 | 0.6880 |
| 0.2191 | 3.7987 | 1208 | 0.4893 | 0.6210 | 0.4893 | 0.6995 |
| 0.2191 | 3.8050 | 1210 | 0.5399 | 0.6244 | 0.5399 | 0.7348 |
| 0.2191 | 3.8113 | 1212 | 0.5119 | 0.6266 | 0.5119 | 0.7155 |
| 0.2191 | 3.8176 | 1214 | 0.4816 | 0.5927 | 0.4816 | 0.6940 |
| 0.2191 | 3.8239 | 1216 | 0.4972 | 0.5898 | 0.4972 | 0.7051 |
| 0.2191 | 3.8302 | 1218 | 0.5283 | 0.6094 | 0.5283 | 0.7268 |
| 0.2191 | 3.8365 | 1220 | 0.5222 | 0.5733 | 0.5222 | 0.7226 |
| 0.2191 | 3.8428 | 1222 | 0.5608 | 0.5982 | 0.5608 | 0.7489 |
| 0.2191 | 3.8491 | 1224 | 0.5559 | 0.5857 | 0.5559 | 0.7456 |
| 0.2191 | 3.8553 | 1226 | 0.5330 | 0.5723 | 0.5330 | 0.7301 |
| 0.2191 | 3.8616 | 1228 | 0.5097 | 0.5748 | 0.5097 | 0.7139 |
| 0.2191 | 3.8679 | 1230 | 0.4911 | 0.5813 | 0.4911 | 0.7008 |
| 0.2191 | 3.8742 | 1232 | 0.4909 | 0.6022 | 0.4909 | 0.7007 |
| 0.2191 | 3.8805 | 1234 | 0.5000 | 0.6197 | 0.5000 | 0.7071 |
| 0.2191 | 3.8868 | 1236 | 0.4952 | 0.6236 | 0.4952 | 0.7037 |
| 0.2191 | 3.8931 | 1238 | 0.4668 | 0.5950 | 0.4668 | 0.6833 |
| 0.2191 | 3.8994 | 1240 | 0.4689 | 0.5844 | 0.4689 | 0.6847 |
| 0.2191 | 3.9057 | 1242 | 0.4959 | 0.6224 | 0.4959 | 0.7042 |
| 0.2191 | 3.9119 | 1244 | 0.5012 | 0.6153 | 0.5012 | 0.7079 |
| 0.2191 | 3.9182 | 1246 | 0.5449 | 0.6003 | 0.5449 | 0.7382 |
| 0.2191 | 3.9245 | 1248 | 0.5715 | 0.6007 | 0.5715 | 0.7560 |
| 0.2191 | 3.9308 | 1250 | 0.5274 | 0.5888 | 0.5274 | 0.7262 |
| 0.2191 | 3.9371 | 1252 | 0.5270 | 0.5761 | 0.5270 | 0.7259 |
| 0.2191 | 3.9434 | 1254 | 0.5680 | 0.5990 | 0.5680 | 0.7537 |
| 0.2191 | 3.9497 | 1256 | 0.6245 | 0.6284 | 0.6245 | 0.7903 |
| 0.2191 | 3.9560 | 1258 | 0.6299 | 0.6353 | 0.6299 | 0.7937 |
| 0.2191 | 3.9623 | 1260 | 0.5335 | 0.6144 | 0.5335 | 0.7304 |
| 0.2191 | 3.9686 | 1262 | 0.4950 | 0.5627 | 0.4950 | 0.7036 |
| 0.2191 | 3.9748 | 1264 | 0.5072 | 0.5777 | 0.5072 | 0.7122 |
| 0.2191 | 3.9811 | 1266 | 0.5436 | 0.6057 | 0.5436 | 0.7373 |
| 0.2191 | 3.9874 | 1268 | 0.5330 | 0.6026 | 0.5330 | 0.7301 |
| 0.2191 | 3.9937 | 1270 | 0.5473 | 0.6055 | 0.5473 | 0.7398 |
| 0.2191 | 4.0 | 1272 | 0.5432 | 0.6049 | 0.5432 | 0.7370 |
| 0.2191 | 4.0063 | 1274 | 0.5198 | 0.6003 | 0.5198 | 0.7210 |
| 0.2191 | 4.0126 | 1276 | 0.5440 | 0.6122 | 0.5440 | 0.7375 |
| 0.2191 | 4.0189 | 1278 | 0.5267 | 0.6074 | 0.5267 | 0.7258 |
| 0.2191 | 4.0252 | 1280 | 0.5213 | 0.6056 | 0.5213 | 0.7220 |
| 0.2191 | 4.0314 | 1282 | 0.5556 | 0.6345 | 0.5556 | 0.7454 |
| 0.2191 | 4.0377 | 1284 | 0.6064 | 0.6612 | 0.6064 | 0.7787 |
| 0.2191 | 4.0440 | 1286 | 0.5591 | 0.6278 | 0.5591 | 0.7477 |
| 0.2191 | 4.0503 | 1288 | 0.5057 | 0.5699 | 0.5057 | 0.7112 |
| 0.2191 | 4.0566 | 1290 | 0.5133 | 0.5851 | 0.5133 | 0.7165 |
| 0.2191 | 4.0629 | 1292 | 0.5021 | 0.5604 | 0.5021 | 0.7086 |
| 0.2191 | 4.0692 | 1294 | 0.4998 | 0.5759 | 0.4998 | 0.7069 |
| 0.2191 | 4.0755 | 1296 | 0.5104 | 0.6179 | 0.5104 | 0.7144 |
| 0.2191 | 4.0818 | 1298 | 0.5394 | 0.6526 | 0.5394 | 0.7344 |
| 0.2191 | 4.0881 | 1300 | 0.5295 | 0.6392 | 0.5295 | 0.7276 |
| 0.2191 | 4.0943 | 1302 | 0.4951 | 0.5717 | 0.4951 | 0.7037 |
| 0.2191 | 4.1006 | 1304 | 0.4979 | 0.5665 | 0.4979 | 0.7056 |
| 0.2191 | 4.1069 | 1306 | 0.5026 | 0.5854 | 0.5026 | 0.7089 |
| 0.2191 | 4.1132 | 1308 | 0.4988 | 0.5564 | 0.4988 | 0.7062 |
| 0.2191 | 4.1195 | 1310 | 0.4986 | 0.5671 | 0.4986 | 0.7061 |
| 0.2191 | 4.1258 | 1312 | 0.5221 | 0.6021 | 0.5221 | 0.7225 |
| 0.2191 | 4.1321 | 1314 | 0.5309 | 0.6093 | 0.5309 | 0.7286 |
| 0.2191 | 4.1384 | 1316 | 0.5054 | 0.5482 | 0.5054 | 0.7109 |
| 0.2191 | 4.1447 | 1318 | 0.4995 | 0.5210 | 0.4995 | 0.7068 |
| 0.2191 | 4.1509 | 1320 | 0.4933 | 0.5499 | 0.4933 | 0.7023 |
| 0.2191 | 4.1572 | 1322 | 0.5094 | 0.6100 | 0.5094 | 0.7137 |
| 0.2191 | 4.1635 | 1324 | 0.5068 | 0.6171 | 0.5068 | 0.7119 |
| 0.2191 | 4.1698 | 1326 | 0.4763 | 0.5792 | 0.4763 | 0.6902 |
| 0.2191 | 4.1761 | 1328 | 0.4706 | 0.5297 | 0.4706 | 0.6860 |
| 0.2191 | 4.1824 | 1330 | 0.4664 | 0.5698 | 0.4664 | 0.6829 |
| 0.2191 | 4.1887 | 1332 | 0.4933 | 0.6067 | 0.4933 | 0.7023 |
| 0.2191 | 4.1950 | 1334 | 0.5064 | 0.6143 | 0.5064 | 0.7116 |
| 0.2191 | 4.2013 | 1336 | 0.4740 | 0.5939 | 0.4740 | 0.6884 |
| 0.2191 | 4.2075 | 1338 | 0.4959 | 0.4730 | 0.4959 | 0.7042 |
| 0.2191 | 4.2138 | 1340 | 0.5378 | 0.4250 | 0.5378 | 0.7334 |
| 0.2191 | 4.2201 | 1342 | 0.5020 | 0.4771 | 0.5020 | 0.7085 |
| 0.2191 | 4.2264 | 1344 | 0.4883 | 0.5873 | 0.4883 | 0.6988 |
| 0.2191 | 4.2327 | 1346 | 0.5408 | 0.6152 | 0.5408 | 0.7354 |
| 0.2191 | 4.2390 | 1348 | 0.5481 | 0.6139 | 0.5481 | 0.7404 |
| 0.2191 | 4.2453 | 1350 | 0.5084 | 0.6032 | 0.5084 | 0.7130 |
| 0.2191 | 4.2516 | 1352 | 0.5036 | 0.5586 | 0.5036 | 0.7096 |
| 0.2191 | 4.2579 | 1354 | 0.5110 | 0.5830 | 0.5110 | 0.7149 |
| 0.2191 | 4.2642 | 1356 | 0.5633 | 0.5949 | 0.5633 | 0.7506 |
| 0.2191 | 4.2704 | 1358 | 0.6567 | 0.6165 | 0.6567 | 0.8104 |
| 0.2191 | 4.2767 | 1360 | 0.6120 | 0.6157 | 0.6120 | 0.7823 |
| 0.2191 | 4.2830 | 1362 | 0.5442 | 0.6006 | 0.5442 | 0.7377 |
| 0.2191 | 4.2893 | 1364 | 0.5166 | 0.5773 | 0.5166 | 0.7188 |
| 0.2191 | 4.2956 | 1366 | 0.5335 | 0.6030 | 0.5335 | 0.7304 |
| 0.2191 | 4.3019 | 1368 | 0.5712 | 0.6283 | 0.5712 | 0.7558 |
| 0.2191 | 4.3082 | 1370 | 0.5458 | 0.6126 | 0.5458 | 0.7388 |
| 0.2191 | 4.3145 | 1372 | 0.4977 | 0.5570 | 0.4977 | 0.7055 |
| 0.2191 | 4.3208 | 1374 | 0.4877 | 0.5010 | 0.4877 | 0.6983 |
| 0.2191 | 4.3270 | 1376 | 0.4875 | 0.5627 | 0.4875 | 0.6982 |
| 0.2191 | 4.3333 | 1378 | 0.5061 | 0.6018 | 0.5061 | 0.7114 |
| 0.2191 | 4.3396 | 1380 | 0.5413 | 0.6192 | 0.5413 | 0.7357 |
| 0.2191 | 4.3459 | 1382 | 0.5306 | 0.6058 | 0.5306 | 0.7284 |
| 0.2191 | 4.3522 | 1384 | 0.4929 | 0.5362 | 0.4929 | 0.7021 |
| 0.2191 | 4.3585 | 1386 | 0.4985 | 0.4982 | 0.4985 | 0.7061 |
| 0.2191 | 4.3648 | 1388 | 0.4972 | 0.5306 | 0.4972 | 0.7051 |
| 0.2191 | 4.3711 | 1390 | 0.5404 | 0.5940 | 0.5404 | 0.7351 |
| 0.2191 | 4.3774 | 1392 | 0.5505 | 0.5926 | 0.5505 | 0.7420 |
| 0.2191 | 4.3836 | 1394 | 0.5302 | 0.5679 | 0.5302 | 0.7282 |
| 0.2191 | 4.3899 | 1396 | 0.5166 | 0.5177 | 0.5166 | 0.7188 |
| 0.2191 | 4.3962 | 1398 | 0.5139 | 0.5270 | 0.5139 | 0.7168 |
| 0.2191 | 4.4025 | 1400 | 0.5309 | 0.5670 | 0.5309 | 0.7286 |
| 0.2191 | 4.4088 | 1402 | 0.5514 | 0.6066 | 0.5514 | 0.7426 |
| 0.2191 | 4.4151 | 1404 | 0.5769 | 0.6261 | 0.5769 | 0.7596 |
| 0.2191 | 4.4214 | 1406 | 0.5477 | 0.6111 | 0.5477 | 0.7401 |
| 0.2191 | 4.4277 | 1408 | 0.5108 | 0.5752 | 0.5108 | 0.7147 |
| 0.2191 | 4.4340 | 1410 | 0.5063 | 0.5628 | 0.5063 | 0.7115 |
| 0.2191 | 4.4403 | 1412 | 0.5222 | 0.5890 | 0.5222 | 0.7227 |
| 0.2191 | 4.4465 | 1414 | 0.5312 | 0.5963 | 0.5312 | 0.7289 |
| 0.2191 | 4.4528 | 1416 | 0.5192 | 0.5548 | 0.5192 | 0.7206 |
| 0.2191 | 4.4591 | 1418 | 0.5218 | 0.5072 | 0.5218 | 0.7223 |
| 0.2191 | 4.4654 | 1420 | 0.5245 | 0.5406 | 0.5245 | 0.7242 |
| 0.2191 | 4.4717 | 1422 | 0.5299 | 0.5795 | 0.5299 | 0.7279 |
| 0.2191 | 4.4780 | 1424 | 0.5778 | 0.6086 | 0.5778 | 0.7601 |
| 0.2191 | 4.4843 | 1426 | 0.6066 | 0.6058 | 0.6066 | 0.7789 |
| 0.2191 | 4.4906 | 1428 | 0.5747 | 0.6055 | 0.5747 | 0.7581 |
| 0.2191 | 4.4969 | 1430 | 0.5427 | 0.5779 | 0.5427 | 0.7367 |
| 0.2191 | 4.5031 | 1432 | 0.5435 | 0.5737 | 0.5435 | 0.7372 |
| 0.2191 | 4.5094 | 1434 | 0.5456 | 0.5850 | 0.5456 | 0.7386 |
| 0.2191 | 4.5157 | 1436 | 0.6108 | 0.6252 | 0.6108 | 0.7815 |
| 0.2191 | 4.5220 | 1438 | 0.6371 | 0.6462 | 0.6371 | 0.7982 |
| 0.2191 | 4.5283 | 1440 | 0.5417 | 0.6027 | 0.5417 | 0.7360 |
| 0.2191 | 4.5346 | 1442 | 0.5027 | 0.5807 | 0.5027 | 0.7090 |
| 0.2191 | 4.5409 | 1444 | 0.5327 | 0.6089 | 0.5327 | 0.7299 |
| 0.2191 | 4.5472 | 1446 | 0.5351 | 0.6232 | 0.5351 | 0.7315 |
| 0.2191 | 4.5535 | 1448 | 0.4943 | 0.5781 | 0.4943 | 0.7030 |
| 0.2191 | 4.5597 | 1450 | 0.4824 | 0.5467 | 0.4824 | 0.6945 |
| 0.2191 | 4.5660 | 1452 | 0.4857 | 0.5633 | 0.4857 | 0.6969 |
| 0.2191 | 4.5723 | 1454 | 0.5305 | 0.6218 | 0.5305 | 0.7283 |
| 0.2191 | 4.5786 | 1456 | 0.5665 | 0.6395 | 0.5665 | 0.7526 |
| 0.2191 | 4.5849 | 1458 | 0.5148 | 0.5913 | 0.5148 | 0.7175 |
| 0.2191 | 4.5912 | 1460 | 0.4879 | 0.5513 | 0.4879 | 0.6985 |
| 0.2191 | 4.5975 | 1462 | 0.4889 | 0.5152 | 0.4889 | 0.6992 |
| 0.2191 | 4.6038 | 1464 | 0.4922 | 0.5392 | 0.4922 | 0.7016 |
| 0.2191 | 4.6101 | 1466 | 0.5167 | 0.5761 | 0.5167 | 0.7188 |
| 0.2191 | 4.6164 | 1468 | 0.5763 | 0.6188 | 0.5763 | 0.7592 |
| 0.2191 | 4.6226 | 1470 | 0.5514 | 0.5962 | 0.5514 | 0.7426 |
| 0.2191 | 4.6289 | 1472 | 0.4995 | 0.5326 | 0.4995 | 0.7067 |
| 0.2191 | 4.6352 | 1474 | 0.4970 | 0.5175 | 0.4970 | 0.7050 |
| 0.2191 | 4.6415 | 1476 | 0.5087 | 0.5672 | 0.5087 | 0.7132 |
| 0.2191 | 4.6478 | 1478 | 0.5783 | 0.6248 | 0.5783 | 0.7605 |
| 0.2191 | 4.6541 | 1480 | 0.5894 | 0.6199 | 0.5894 | 0.7677 |
| 0.2191 | 4.6604 | 1482 | 0.5251 | 0.5869 | 0.5251 | 0.7247 |
| 0.2191 | 4.6667 | 1484 | 0.5091 | 0.5123 | 0.5091 | 0.7135 |
| 0.2191 | 4.6730 | 1486 | 0.5099 | 0.5207 | 0.5099 | 0.7141 |
| 0.2191 | 4.6792 | 1488 | 0.5335 | 0.5981 | 0.5335 | 0.7304 |
| 0.2191 | 4.6855 | 1490 | 0.5984 | 0.6379 | 0.5984 | 0.7736 |
| 0.2191 | 4.6918 | 1492 | 0.6045 | 0.6317 | 0.6045 | 0.7775 |
| 0.2191 | 4.6981 | 1494 | 0.5318 | 0.5893 | 0.5318 | 0.7292 |
| 0.2191 | 4.7044 | 1496 | 0.5212 | 0.4917 | 0.5212 | 0.7219 |
| 0.2191 | 4.7107 | 1498 | 0.5583 | 0.4352 | 0.5583 | 0.7472 |
| 0.137 | 4.7170 | 1500 | 0.5325 | 0.4577 | 0.5325 | 0.7297 |
| 0.137 | 4.7233 | 1502 | 0.4992 | 0.5207 | 0.4992 | 0.7066 |
| 0.137 | 4.7296 | 1504 | 0.5142 | 0.5829 | 0.5142 | 0.7171 |
| 0.137 | 4.7358 | 1506 | 0.5090 | 0.5933 | 0.5090 | 0.7134 |
| 0.137 | 4.7421 | 1508 | 0.4846 | 0.5451 | 0.4846 | 0.6962 |
| 0.137 | 4.7484 | 1510 | 0.4941 | 0.4743 | 0.4941 | 0.7029 |
| 0.137 | 4.7547 | 1512 | 0.4925 | 0.4759 | 0.4925 | 0.7018 |
| 0.137 | 4.7610 | 1514 | 0.4870 | 0.5403 | 0.4870 | 0.6978 |
| 0.137 | 4.7673 | 1516 | 0.5274 | 0.6094 | 0.5274 | 0.7262 |
| 0.137 | 4.7736 | 1518 | 0.5429 | 0.6104 | 0.5429 | 0.7368 |
| 0.137 | 4.7799 | 1520 | 0.5016 | 0.6021 | 0.5016 | 0.7082 |
| 0.137 | 4.7862 | 1522 | 0.4941 | 0.4792 | 0.4941 | 0.7029 |
| 0.137 | 4.7925 | 1524 | 0.5164 | 0.4536 | 0.5164 | 0.7186 |
| 0.137 | 4.7987 | 1526 | 0.5001 | 0.4721 | 0.5001 | 0.7072 |
| 0.137 | 4.8050 | 1528 | 0.5149 | 0.5848 | 0.5149 | 0.7176 |
| 0.137 | 4.8113 | 1530 | 0.5606 | 0.6187 | 0.5606 | 0.7488 |
| 0.137 | 4.8176 | 1532 | 0.5386 | 0.6120 | 0.5386 | 0.7339 |
| 0.137 | 4.8239 | 1534 | 0.5132 | 0.4964 | 0.5132 | 0.7163 |
| 0.137 | 4.8302 | 1536 | 0.5325 | 0.4620 | 0.5325 | 0.7298 |
| 0.137 | 4.8365 | 1538 | 0.5363 | 0.4628 | 0.5363 | 0.7323 |
| 0.137 | 4.8428 | 1540 | 0.5147 | 0.5029 | 0.5147 | 0.7174 |
| 0.137 | 4.8491 | 1542 | 0.5102 | 0.5405 | 0.5102 | 0.7143 |
| 0.137 | 4.8553 | 1544 | 0.5135 | 0.5818 | 0.5135 | 0.7166 |
| 0.137 | 4.8616 | 1546 | 0.4941 | 0.5237 | 0.4941 | 0.7030 |
| 0.137 | 4.8679 | 1548 | 0.4922 | 0.5190 | 0.4922 | 0.7015 |
| 0.137 | 4.8742 | 1550 | 0.4914 | 0.5180 | 0.4914 | 0.7010 |
| 0.137 | 4.8805 | 1552 | 0.5017 | 0.5464 | 0.5017 | 0.7083 |
| 0.137 | 4.8868 | 1554 | 0.5360 | 0.6195 | 0.5360 | 0.7321 |
| 0.137 | 4.8931 | 1556 | 0.5409 | 0.6140 | 0.5409 | 0.7355 |
| 0.137 | 4.8994 | 1558 | 0.5174 | 0.5642 | 0.5174 | 0.7193 |
| 0.137 | 4.9057 | 1560 | 0.5136 | 0.5532 | 0.5136 | 0.7166 |
| 0.137 | 4.9119 | 1562 | 0.5314 | 0.5965 | 0.5314 | 0.7290 |
| 0.137 | 4.9182 | 1564 | 0.5257 | 0.5958 | 0.5257 | 0.7251 |
| 0.137 | 4.9245 | 1566 | 0.5320 | 0.5995 | 0.5320 | 0.7294 |
| 0.137 | 4.9308 | 1568 | 0.5420 | 0.6089 | 0.5420 | 0.7362 |
| 0.137 | 4.9371 | 1570 | 0.5097 | 0.5445 | 0.5097 | 0.7139 |
| 0.137 | 4.9434 | 1572 | 0.5084 | 0.5183 | 0.5084 | 0.7130 |
| 0.137 | 4.9497 | 1574 | 0.5117 | 0.5146 | 0.5117 | 0.7154 |
| 0.137 | 4.9560 | 1576 | 0.5268 | 0.5639 | 0.5268 | 0.7258 |
| 0.137 | 4.9623 | 1578 | 0.5468 | 0.6127 | 0.5468 | 0.7395 |
| 0.137 | 4.9686 | 1580 | 0.5315 | 0.5791 | 0.5315 | 0.7290 |
| 0.137 | 4.9748 | 1582 | 0.5247 | 0.5557 | 0.5247 | 0.7243 |
| 0.137 | 4.9811 | 1584 | 0.5311 | 0.5787 | 0.5311 | 0.7288 |
| 0.137 | 4.9874 | 1586 | 0.5600 | 0.6294 | 0.5600 | 0.7483 |
| 0.137 | 4.9937 | 1588 | 0.6016 | 0.6491 | 0.6016 | 0.7756 |
| 0.137 | 5.0 | 1590 | 0.6073 | 0.6469 | 0.6073 | 0.7793 |
| 0.137 | 5.0063 | 1592 | 0.5661 | 0.6432 | 0.5661 | 0.7524 |
| 0.137 | 5.0126 | 1594 | 0.5277 | 0.6116 | 0.5277 | 0.7264 |
| 0.137 | 5.0189 | 1596 | 0.5400 | 0.6220 | 0.5400 | 0.7349 |
| 0.137 | 5.0252 | 1598 | 0.6110 | 0.6596 | 0.6110 | 0.7817 |
| 0.137 | 5.0314 | 1600 | 0.6045 | 0.6544 | 0.6045 | 0.7775 |
| 0.137 | 5.0377 | 1602 | 0.5523 | 0.6378 | 0.5523 | 0.7432 |
| 0.137 | 5.0440 | 1604 | 0.5261 | 0.5913 | 0.5261 | 0.7253 |
| 0.137 | 5.0503 | 1606 | 0.5190 | 0.5756 | 0.5190 | 0.7204 |
| 0.137 | 5.0566 | 1608 | 0.5470 | 0.6231 | 0.5470 | 0.7396 |
| 0.137 | 5.0629 | 1610 | 0.5948 | 0.6374 | 0.5948 | 0.7712 |
| 0.137 | 5.0692 | 1612 | 0.5981 | 0.6305 | 0.5981 | 0.7734 |
| 0.137 | 5.0755 | 1614 | 0.5447 | 0.5802 | 0.5447 | 0.7380 |
| 0.137 | 5.0818 | 1616 | 0.5114 | 0.5172 | 0.5114 | 0.7152 |
| 0.137 | 5.0881 | 1618 | 0.5096 | 0.5302 | 0.5096 | 0.7139 |
| 0.137 | 5.0943 | 1620 | 0.5184 | 0.5557 | 0.5184 | 0.7200 |
| 0.137 | 5.1006 | 1622 | 0.5450 | 0.5970 | 0.5450 | 0.7382 |
| 0.137 | 5.1069 | 1624 | 0.5490 | 0.5956 | 0.5490 | 0.7410 |
| 0.137 | 5.1132 | 1626 | 0.5397 | 0.5729 | 0.5397 | 0.7346 |
| 0.137 | 5.1195 | 1628 | 0.5360 | 0.5662 | 0.5360 | 0.7321 |
| 0.137 | 5.1258 | 1630 | 0.5514 | 0.5830 | 0.5514 | 0.7426 |
| 0.137 | 5.1321 | 1632 | 0.5284 | 0.5352 | 0.5284 | 0.7269 |
| 0.137 | 5.1384 | 1634 | 0.5200 | 0.5191 | 0.5200 | 0.7211 |
| 0.137 | 5.1447 | 1636 | 0.5160 | 0.5037 | 0.5160 | 0.7183 |
| 0.137 | 5.1509 | 1638 | 0.5184 | 0.5393 | 0.5184 | 0.7200 |
| 0.137 | 5.1572 | 1640 | 0.5180 | 0.5476 | 0.5180 | 0.7197 |
| 0.137 | 5.1635 | 1642 | 0.5125 | 0.5336 | 0.5125 | 0.7159 |
| 0.137 | 5.1698 | 1644 | 0.5039 | 0.5236 | 0.5039 | 0.7099 |
| 0.137 | 5.1761 | 1646 | 0.5005 | 0.5047 | 0.5005 | 0.7075 |
| 0.137 | 5.1824 | 1648 | 0.5026 | 0.5307 | 0.5026 | 0.7090 |
| 0.137 | 5.1887 | 1650 | 0.5118 | 0.5572 | 0.5118 | 0.7154 |
| 0.137 | 5.1950 | 1652 | 0.5206 | 0.5589 | 0.5206 | 0.7215 |
| 0.137 | 5.2013 | 1654 | 0.5269 | 0.5661 | 0.5269 | 0.7259 |
| 0.137 | 5.2075 | 1656 | 0.5262 | 0.5677 | 0.5262 | 0.7254 |
| 0.137 | 5.2138 | 1658 | 0.5181 | 0.5379 | 0.5181 | 0.7198 |
| 0.137 | 5.2201 | 1660 | 0.5199 | 0.5489 | 0.5199 | 0.7211 |
| 0.137 | 5.2264 | 1662 | 0.5168 | 0.5317 | 0.5168 | 0.7189 |
| 0.137 | 5.2327 | 1664 | 0.5175 | 0.5034 | 0.5175 | 0.7194 |
| 0.137 | 5.2390 | 1666 | 0.5285 | 0.5567 | 0.5285 | 0.7270 |
| 0.137 | 5.2453 | 1668 | 0.5740 | 0.6014 | 0.5740 | 0.7576 |
| 0.137 | 5.2516 | 1670 | 0.5751 | 0.6039 | 0.5751 | 0.7584 |
| 0.137 | 5.2579 | 1672 | 0.5519 | 0.5738 | 0.5519 | 0.7429 |
| 0.137 | 5.2642 | 1674 | 0.5682 | 0.5954 | 0.5682 | 0.7538 |
| 0.137 | 5.2704 | 1676 | 0.5834 | 0.6032 | 0.5834 | 0.7638 |
| 0.137 | 5.2767 | 1678 | 0.5604 | 0.5737 | 0.5604 | 0.7486 |
| 0.137 | 5.2830 | 1680 | 0.5381 | 0.5282 | 0.5381 | 0.7336 |
| 0.137 | 5.2893 | 1682 | 0.5366 | 0.5274 | 0.5366 | 0.7325 |
| 0.137 | 5.2956 | 1684 | 0.5544 | 0.5575 | 0.5544 | 0.7446 |
| 0.137 | 5.3019 | 1686 | 0.5676 | 0.5679 | 0.5676 | 0.7534 |
| 0.137 | 5.3082 | 1688 | 0.5449 | 0.5589 | 0.5449 | 0.7382 |
| 0.137 | 5.3145 | 1690 | 0.5293 | 0.5326 | 0.5293 | 0.7275 |
| 0.137 | 5.3208 | 1692 | 0.5481 | 0.5828 | 0.5481 | 0.7404 |
| 0.137 | 5.3270 | 1694 | 0.5706 | 0.6175 | 0.5706 | 0.7554 |
| 0.137 | 5.3333 | 1696 | 0.5400 | 0.5961 | 0.5400 | 0.7348 |
| 0.137 | 5.3396 | 1698 | 0.5190 | 0.5213 | 0.5190 | 0.7204 |
| 0.137 | 5.3459 | 1700 | 0.5220 | 0.5245 | 0.5220 | 0.7225 |
| 0.137 | 5.3522 | 1702 | 0.5369 | 0.5680 | 0.5369 | 0.7328 |
| 0.137 | 5.3585 | 1704 | 0.5762 | 0.6104 | 0.5762 | 0.7591 |
| 0.137 | 5.3648 | 1706 | 0.5525 | 0.5807 | 0.5525 | 0.7433 |
| 0.137 | 5.3711 | 1708 | 0.5256 | 0.5445 | 0.5256 | 0.7250 |
| 0.137 | 5.3774 | 1710 | 0.5259 | 0.5053 | 0.5259 | 0.7252 |
| 0.137 | 5.3836 | 1712 | 0.5267 | 0.5154 | 0.5267 | 0.7258 |
| 0.137 | 5.3899 | 1714 | 0.5407 | 0.5558 | 0.5407 | 0.7353 |
| 0.137 | 5.3962 | 1716 | 0.5512 | 0.5666 | 0.5512 | 0.7424 |
| 0.137 | 5.4025 | 1718 | 0.5286 | 0.5433 | 0.5286 | 0.7271 |
| 0.137 | 5.4088 | 1720 | 0.5258 | 0.4930 | 0.5258 | 0.7251 |
| 0.137 | 5.4151 | 1722 | 0.5314 | 0.4664 | 0.5314 | 0.7290 |
| 0.137 | 5.4214 | 1724 | 0.5204 | 0.5117 | 0.5204 | 0.7214 |
| 0.137 | 5.4277 | 1726 | 0.5229 | 0.5506 | 0.5229 | 0.7231 |
| 0.137 | 5.4340 | 1728 | 0.5398 | 0.5703 | 0.5398 | 0.7347 |
| 0.137 | 5.4403 | 1730 | 0.5466 | 0.5979 | 0.5466 | 0.7393 |
| 0.137 | 5.4465 | 1732 | 0.5224 | 0.5650 | 0.5224 | 0.7227 |
| 0.137 | 5.4528 | 1734 | 0.5113 | 0.5491 | 0.5113 | 0.7151 |
| 0.137 | 5.4591 | 1736 | 0.5071 | 0.5374 | 0.5071 | 0.7121 |
| 0.137 | 5.4654 | 1738 | 0.5065 | 0.5416 | 0.5065 | 0.7117 |
| 0.137 | 5.4717 | 1740 | 0.5246 | 0.5797 | 0.5246 | 0.7243 |
| 0.137 | 5.4780 | 1742 | 0.5218 | 0.5797 | 0.5218 | 0.7224 |
| 0.137 | 5.4843 | 1744 | 0.5162 | 0.5724 | 0.5162 | 0.7185 |
| 0.137 | 5.4906 | 1746 | 0.5096 | 0.5536 | 0.5096 | 0.7138 |
| 0.137 | 5.4969 | 1748 | 0.5051 | 0.5304 | 0.5051 | 0.7107 |
| 0.137 | 5.5031 | 1750 | 0.5108 | 0.5634 | 0.5108 | 0.7147 |
| 0.137 | 5.5094 | 1752 | 0.5382 | 0.5801 | 0.5382 | 0.7336 |
| 0.137 | 5.5157 | 1754 | 0.5216 | 0.5847 | 0.5216 | 0.7223 |
| 0.137 | 5.5220 | 1756 | 0.4968 | 0.5421 | 0.4968 | 0.7048 |
| 0.137 | 5.5283 | 1758 | 0.5052 | 0.4760 | 0.5052 | 0.7108 |
| 0.137 | 5.5346 | 1760 | 0.5117 | 0.4780 | 0.5117 | 0.7153 |
| 0.137 | 5.5409 | 1762 | 0.5070 | 0.5427 | 0.5070 | 0.7121 |
| 0.137 | 5.5472 | 1764 | 0.5567 | 0.6161 | 0.5567 | 0.7461 |
| 0.137 | 5.5535 | 1766 | 0.5599 | 0.6149 | 0.5599 | 0.7482 |
| 0.137 | 5.5597 | 1768 | 0.5235 | 0.5687 | 0.5235 | 0.7235 |
| 0.137 | 5.5660 | 1770 | 0.5228 | 0.5072 | 0.5228 | 0.7230 |
| 0.137 | 5.5723 | 1772 | 0.5250 | 0.5090 | 0.5250 | 0.7246 |
| 0.137 | 5.5786 | 1774 | 0.5410 | 0.5861 | 0.5410 | 0.7355 |
| 0.137 | 5.5849 | 1776 | 0.5907 | 0.6225 | 0.5907 | 0.7686 |
| 0.137 | 5.5912 | 1778 | 0.5749 | 0.6029 | 0.5749 | 0.7582 |
| 0.137 | 5.5975 | 1780 | 0.5412 | 0.5939 | 0.5412 | 0.7357 |
| 0.137 | 5.6038 | 1782 | 0.5314 | 0.5894 | 0.5314 | 0.7289 |
| 0.137 | 5.6101 | 1784 | 0.5453 | 0.6037 | 0.5453 | 0.7385 |
| 0.137 | 5.6164 | 1786 | 0.5681 | 0.6119 | 0.5681 | 0.7537 |
| 0.137 | 5.6226 | 1788 | 0.5702 | 0.6166 | 0.5702 | 0.7551 |
| 0.137 | 5.6289 | 1790 | 0.5559 | 0.6007 | 0.5559 | 0.7456 |
| 0.137 | 5.6352 | 1792 | 0.5448 | 0.6022 | 0.5448 | 0.7381 |
| 0.137 | 5.6415 | 1794 | 0.5587 | 0.6112 | 0.5587 | 0.7475 |
| 0.137 | 5.6478 | 1796 | 0.5405 | 0.6096 | 0.5405 | 0.7352 |
| 0.137 | 5.6541 | 1798 | 0.5088 | 0.5955 | 0.5088 | 0.7133 |
| 0.137 | 5.6604 | 1800 | 0.4964 | 0.5642 | 0.4964 | 0.7045 |
| 0.137 | 5.6667 | 1802 | 0.4964 | 0.5585 | 0.4964 | 0.7046 |
| 0.137 | 5.6730 | 1804 | 0.5212 | 0.6061 | 0.5212 | 0.7219 |
| 0.137 | 5.6792 | 1806 | 0.5291 | 0.6097 | 0.5291 | 0.7274 |
| 0.137 | 5.6855 | 1808 | 0.5293 | 0.6091 | 0.5293 | 0.7275 |
| 0.137 | 5.6918 | 1810 | 0.5091 | 0.5540 | 0.5091 | 0.7135 |
| 0.137 | 5.6981 | 1812 | 0.5072 | 0.5393 | 0.5072 | 0.7122 |
| 0.137 | 5.7044 | 1814 | 0.5062 | 0.5238 | 0.5062 | 0.7115 |
| 0.137 | 5.7107 | 1816 | 0.5145 | 0.5626 | 0.5145 | 0.7173 |
| 0.137 | 5.7170 | 1818 | 0.5278 | 0.5856 | 0.5278 | 0.7265 |
| 0.137 | 5.7233 | 1820 | 0.5254 | 0.5886 | 0.5254 | 0.7248 |
| 0.137 | 5.7296 | 1822 | 0.5096 | 0.5573 | 0.5096 | 0.7138 |
| 0.137 | 5.7358 | 1824 | 0.5036 | 0.5345 | 0.5036 | 0.7096 |
| 0.137 | 5.7421 | 1826 | 0.5040 | 0.5179 | 0.5040 | 0.7099 |
| 0.137 | 5.7484 | 1828 | 0.5122 | 0.5543 | 0.5122 | 0.7157 |
| 0.137 | 5.7547 | 1830 | 0.5129 | 0.5579 | 0.5129 | 0.7162 |
| 0.137 | 5.7610 | 1832 | 0.5151 | 0.5558 | 0.5151 | 0.7177 |
| 0.137 | 5.7673 | 1834 | 0.5004 | 0.5381 | 0.5004 | 0.7074 |
| 0.137 | 5.7736 | 1836 | 0.4985 | 0.5056 | 0.4985 | 0.7060 |
| 0.137 | 5.7799 | 1838 | 0.5032 | 0.4997 | 0.5032 | 0.7093 |
| 0.137 | 5.7862 | 1840 | 0.5058 | 0.4975 | 0.5058 | 0.7112 |
| 0.137 | 5.7925 | 1842 | 0.5093 | 0.5039 | 0.5093 | 0.7137 |
| 0.137 | 5.7987 | 1844 | 0.5199 | 0.5481 | 0.5199 | 0.7210 |
| 0.137 | 5.8050 | 1846 | 0.5419 | 0.5858 | 0.5419 | 0.7362 |
| 0.137 | 5.8113 | 1848 | 0.5333 | 0.5732 | 0.5333 | 0.7303 |
| 0.137 | 5.8176 | 1850 | 0.5221 | 0.5339 | 0.5221 | 0.7226 |
| 0.137 | 5.8239 | 1852 | 0.5190 | 0.4942 | 0.5190 | 0.7204 |
| 0.137 | 5.8302 | 1854 | 0.5173 | 0.5112 | 0.5173 | 0.7193 |
| 0.137 | 5.8365 | 1856 | 0.5310 | 0.5530 | 0.5310 | 0.7287 |
| 0.137 | 5.8428 | 1858 | 0.5528 | 0.5771 | 0.5528 | 0.7435 |
| 0.137 | 5.8491 | 1860 | 0.5564 | 0.5713 | 0.5564 | 0.7459 |
| 0.137 | 5.8553 | 1862 | 0.5457 | 0.5257 | 0.5457 | 0.7387 |
| 0.137 | 5.8616 | 1864 | 0.5511 | 0.5368 | 0.5511 | 0.7423 |
| 0.137 | 5.8679 | 1866 | 0.5562 | 0.5645 | 0.5562 | 0.7458 |
| 0.137 | 5.8742 | 1868 | 0.5371 | 0.5266 | 0.5371 | 0.7329 |
| 0.137 | 5.8805 | 1870 | 0.5291 | 0.5413 | 0.5291 | 0.7274 |
| 0.137 | 5.8868 | 1872 | 0.5259 | 0.5535 | 0.5259 | 0.7252 |
| 0.137 | 5.8931 | 1874 | 0.5369 | 0.5981 | 0.5369 | 0.7327 |
| 0.137 | 5.8994 | 1876 | 0.5517 | 0.6147 | 0.5517 | 0.7427 |
| 0.137 | 5.9057 | 1878 | 0.5261 | 0.5838 | 0.5261 | 0.7253 |
| 0.137 | 5.9119 | 1880 | 0.5091 | 0.5308 | 0.5091 | 0.7135 |
| 0.137 | 5.9182 | 1882 | 0.5070 | 0.5485 | 0.5070 | 0.7120 |
| 0.137 | 5.9245 | 1884 | 0.5292 | 0.6006 | 0.5292 | 0.7275 |
| 0.137 | 5.9308 | 1886 | 0.5539 | 0.6290 | 0.5539 | 0.7442 |
| 0.137 | 5.9371 | 1888 | 0.5303 | 0.6176 | 0.5303 | 0.7282 |
| 0.137 | 5.9434 | 1890 | 0.5068 | 0.5536 | 0.5068 | 0.7119 |
| 0.137 | 5.9497 | 1892 | 0.5092 | 0.5327 | 0.5092 | 0.7136 |
| 0.137 | 5.9560 | 1894 | 0.5156 | 0.5453 | 0.5156 | 0.7180 |
| 0.137 | 5.9623 | 1896 | 0.5242 | 0.5863 | 0.5242 | 0.7240 |
| 0.137 | 5.9686 | 1898 | 0.5535 | 0.6211 | 0.5535 | 0.7440 |
| 0.137 | 5.9748 | 1900 | 0.5484 | 0.6188 | 0.5484 | 0.7406 |
| 0.137 | 5.9811 | 1902 | 0.5165 | 0.5695 | 0.5165 | 0.7187 |
| 0.137 | 5.9874 | 1904 | 0.5091 | 0.5293 | 0.5091 | 0.7135 |
| 0.137 | 5.9937 | 1906 | 0.5039 | 0.5467 | 0.5039 | 0.7099 |
| 0.137 | 6.0 | 1908 | 0.5110 | 0.5860 | 0.5110 | 0.7149 |
| 0.137 | 6.0063 | 1910 | 0.5227 | 0.6075 | 0.5227 | 0.7230 |
| 0.137 | 6.0126 | 1912 | 0.5158 | 0.5937 | 0.5158 | 0.7182 |
| 0.137 | 6.0189 | 1914 | 0.5121 | 0.5814 | 0.5121 | 0.7156 |
| 0.137 | 6.0252 | 1916 | 0.5249 | 0.6051 | 0.5249 | 0.7245 |
| 0.137 | 6.0314 | 1918 | 0.5245 | 0.5925 | 0.5245 | 0.7242 |
| 0.137 | 6.0377 | 1920 | 0.5153 | 0.5620 | 0.5153 | 0.7178 |
| 0.137 | 6.0440 | 1922 | 0.5134 | 0.5268 | 0.5134 | 0.7165 |
| 0.137 | 6.0503 | 1924 | 0.5214 | 0.5761 | 0.5214 | 0.7221 |
| 0.137 | 6.0566 | 1926 | 0.5436 | 0.6034 | 0.5436 | 0.7373 |
| 0.137 | 6.0629 | 1928 | 0.5385 | 0.5879 | 0.5385 | 0.7339 |
| 0.137 | 6.0692 | 1930 | 0.5294 | 0.5466 | 0.5294 | 0.7276 |
| 0.137 | 6.0755 | 1932 | 0.5246 | 0.5303 | 0.5246 | 0.7243 |
| 0.137 | 6.0818 | 1934 | 0.5383 | 0.5591 | 0.5383 | 0.7337 |
| 0.137 | 6.0881 | 1936 | 0.5384 | 0.5542 | 0.5384 | 0.7338 |
| 0.137 | 6.0943 | 1938 | 0.5208 | 0.5393 | 0.5208 | 0.7217 |
| 0.137 | 6.1006 | 1940 | 0.5144 | 0.5455 | 0.5144 | 0.7172 |
| 0.137 | 6.1069 | 1942 | 0.5280 | 0.5642 | 0.5280 | 0.7266 |
| 0.137 | 6.1132 | 1944 | 0.5361 | 0.5867 | 0.5361 | 0.7322 |
| 0.137 | 6.1195 | 1946 | 0.5112 | 0.5655 | 0.5112 | 0.7150 |
| 0.137 | 6.1258 | 1948 | 0.4995 | 0.5291 | 0.4995 | 0.7067 |
| 0.137 | 6.1321 | 1950 | 0.5061 | 0.5169 | 0.5061 | 0.7114 |
| 0.137 | 6.1384 | 1952 | 0.5194 | 0.5467 | 0.5194 | 0.7207 |
| 0.137 | 6.1447 | 1954 | 0.5554 | 0.5848 | 0.5554 | 0.7453 |
| 0.137 | 6.1509 | 1956 | 0.5663 | 0.6031 | 0.5663 | 0.7525 |
| 0.137 | 6.1572 | 1958 | 0.5517 | 0.5626 | 0.5517 | 0.7427 |
| 0.137 | 6.1635 | 1960 | 0.5537 | 0.5673 | 0.5537 | 0.7441 |
| 0.137 | 6.1698 | 1962 | 0.5627 | 0.5826 | 0.5627 | 0.7501 |
| 0.137 | 6.1761 | 1964 | 0.5644 | 0.6046 | 0.5644 | 0.7513 |
| 0.137 | 6.1824 | 1966 | 0.5350 | 0.5586 | 0.5350 | 0.7315 |
| 0.137 | 6.1887 | 1968 | 0.5252 | 0.5379 | 0.5252 | 0.7247 |
| 0.137 | 6.1950 | 1970 | 0.5237 | 0.5325 | 0.5237 | 0.7237 |
| 0.137 | 6.2013 | 1972 | 0.5287 | 0.5568 | 0.5287 | 0.7271 |
| 0.137 | 6.2075 | 1974 | 0.5419 | 0.5726 | 0.5419 | 0.7362 |
| 0.137 | 6.2138 | 1976 | 0.5262 | 0.5548 | 0.5262 | 0.7254 |
| 0.137 | 6.2201 | 1978 | 0.5228 | 0.5096 | 0.5228 | 0.7231 |
| 0.137 | 6.2264 | 1980 | 0.5272 | 0.5423 | 0.5272 | 0.7261 |
| 0.137 | 6.2327 | 1982 | 0.5641 | 0.6126 | 0.5641 | 0.7511 |
| 0.137 | 6.2390 | 1984 | 0.5698 | 0.6111 | 0.5698 | 0.7549 |
| 0.137 | 6.2453 | 1986 | 0.5435 | 0.5857 | 0.5435 | 0.7372 |
| 0.137 | 6.2516 | 1988 | 0.5163 | 0.5217 | 0.5163 | 0.7186 |
| 0.137 | 6.2579 | 1990 | 0.5172 | 0.5210 | 0.5172 | 0.7192 |
| 0.137 | 6.2642 | 1992 | 0.5246 | 0.5537 | 0.5246 | 0.7243 |
| 0.137 | 6.2704 | 1994 | 0.5389 | 0.5880 | 0.5389 | 0.7341 |
| 0.137 | 6.2767 | 1996 | 0.5522 | 0.6138 | 0.5522 | 0.7431 |
| 0.137 | 6.2830 | 1998 | 0.5332 | 0.6009 | 0.5332 | 0.7302 |
| 0.0917 | 6.2893 | 2000 | 0.5090 | 0.5523 | 0.5090 | 0.7134 |
| 0.0917 | 6.2956 | 2002 | 0.5108 | 0.5622 | 0.5108 | 0.7147 |
| 0.0917 | 6.3019 | 2004 | 0.5365 | 0.6050 | 0.5365 | 0.7325 |
| 0.0917 | 6.3082 | 2006 | 0.5334 | 0.5934 | 0.5334 | 0.7304 |
| 0.0917 | 6.3145 | 2008 | 0.5195 | 0.5594 | 0.5195 | 0.7208 |
| 0.0917 | 6.3208 | 2010 | 0.5208 | 0.5328 | 0.5208 | 0.7216 |
| 0.0917 | 6.3270 | 2012 | 0.5282 | 0.5464 | 0.5282 | 0.7268 |
| 0.0917 | 6.3333 | 2014 | 0.5418 | 0.5642 | 0.5418 | 0.7361 |
| 0.0917 | 6.3396 | 2016 | 0.5689 | 0.6043 | 0.5689 | 0.7542 |
| 0.0917 | 6.3459 | 2018 | 0.5674 | 0.5949 | 0.5674 | 0.7533 |
| 0.0917 | 6.3522 | 2020 | 0.5468 | 0.5597 | 0.5468 | 0.7394 |
| 0.0917 | 6.3585 | 2022 | 0.5329 | 0.5195 | 0.5329 | 0.7300 |
| 0.0917 | 6.3648 | 2024 | 0.5310 | 0.5210 | 0.5310 | 0.7287 |
| 0.0917 | 6.3711 | 2026 | 0.5270 | 0.5262 | 0.5270 | 0.7260 |
| 0.0917 | 6.3774 | 2028 | 0.5316 | 0.5641 | 0.5316 | 0.7291 |
| 0.0917 | 6.3836 | 2030 | 0.5297 | 0.5760 | 0.5297 | 0.7278 |
| 0.0917 | 6.3899 | 2032 | 0.5219 | 0.5727 | 0.5219 | 0.7224 |
| 0.0917 | 6.3962 | 2034 | 0.5155 | 0.5714 | 0.5155 | 0.7180 |
| 0.0917 | 6.4025 | 2036 | 0.5086 | 0.5673 | 0.5086 | 0.7131 |
| 0.0917 | 6.4088 | 2038 | 0.5175 | 0.5884 | 0.5175 | 0.7194 |
| 0.0917 | 6.4151 | 2040 | 0.5283 | 0.6030 | 0.5283 | 0.7268 |
| 0.0917 | 6.4214 | 2042 | 0.5162 | 0.5871 | 0.5162 | 0.7185 |
| 0.0917 | 6.4277 | 2044 | 0.5080 | 0.5583 | 0.5080 | 0.7127 |
| 0.0917 | 6.4340 | 2046 | 0.5050 | 0.5511 | 0.5050 | 0.7106 |
| 0.0917 | 6.4403 | 2048 | 0.5148 | 0.5614 | 0.5148 | 0.7175 |
| 0.0917 | 6.4465 | 2050 | 0.5400 | 0.6043 | 0.5400 | 0.7349 |
| 0.0917 | 6.4528 | 2052 | 0.5301 | 0.5940 | 0.5301 | 0.7281 |
| 0.0917 | 6.4591 | 2054 | 0.5091 | 0.5362 | 0.5091 | 0.7135 |
| 0.0917 | 6.4654 | 2056 | 0.5088 | 0.5027 | 0.5088 | 0.7133 |
| 0.0917 | 6.4717 | 2058 | 0.5032 | 0.5111 | 0.5032 | 0.7094 |
| 0.0917 | 6.4780 | 2060 | 0.5047 | 0.5513 | 0.5047 | 0.7104 |
| 0.0917 | 6.4843 | 2062 | 0.5077 | 0.5645 | 0.5077 | 0.7125 |
| 0.0917 | 6.4906 | 2064 | 0.5032 | 0.5298 | 0.5032 | 0.7093 |
| 0.0917 | 6.4969 | 2066 | 0.4995 | 0.5231 | 0.4995 | 0.7067 |
| 0.0917 | 6.5031 | 2068 | 0.5037 | 0.5341 | 0.5037 | 0.7097 |
| 0.0917 | 6.5094 | 2070 | 0.5179 | 0.5775 | 0.5179 | 0.7197 |
| 0.0917 | 6.5157 | 2072 | 0.5109 | 0.5644 | 0.5109 | 0.7148 |
| 0.0917 | 6.5220 | 2074 | 0.5079 | 0.5408 | 0.5079 | 0.7127 |
| 0.0917 | 6.5283 | 2076 | 0.5076 | 0.5336 | 0.5076 | 0.7125 |
| 0.0917 | 6.5346 | 2078 | 0.5061 | 0.5336 | 0.5061 | 0.7114 |
| 0.0917 | 6.5409 | 2080 | 0.5077 | 0.5402 | 0.5077 | 0.7125 |
| 0.0917 | 6.5472 | 2082 | 0.5271 | 0.5889 | 0.5271 | 0.7260 |
| 0.0917 | 6.5535 | 2084 | 0.5338 | 0.6049 | 0.5338 | 0.7306 |
| 0.0917 | 6.5597 | 2086 | 0.5223 | 0.5777 | 0.5223 | 0.7227 |
| 0.0917 | 6.5660 | 2088 | 0.5172 | 0.5149 | 0.5172 | 0.7192 |
| 0.0917 | 6.5723 | 2090 | 0.5206 | 0.5042 | 0.5206 | 0.7215 |
| 0.0917 | 6.5786 | 2092 | 0.5265 | 0.5575 | 0.5265 | 0.7256 |
| 0.0917 | 6.5849 | 2094 | 0.5407 | 0.5997 | 0.5407 | 0.7353 |
| 0.0917 | 6.5912 | 2096 | 0.5554 | 0.6111 | 0.5554 | 0.7453 |
| 0.0917 | 6.5975 | 2098 | 0.5409 | 0.6005 | 0.5409 | 0.7355 |
| 0.0917 | 6.6038 | 2100 | 0.5386 | 0.6022 | 0.5386 | 0.7339 |
| 0.0917 | 6.6101 | 2102 | 0.5355 | 0.6058 | 0.5355 | 0.7318 |
| 0.0917 | 6.6164 | 2104 | 0.5200 | 0.5805 | 0.5200 | 0.7211 |
| 0.0917 | 6.6226 | 2106 | 0.5044 | 0.5561 | 0.5044 | 0.7102 |
| 0.0917 | 6.6289 | 2108 | 0.5027 | 0.5571 | 0.5027 | 0.7090 |
| 0.0917 | 6.6352 | 2110 | 0.5132 | 0.5724 | 0.5132 | 0.7164 |
| 0.0917 | 6.6415 | 2112 | 0.5343 | 0.6068 | 0.5343 | 0.7309 |
| 0.0917 | 6.6478 | 2114 | 0.5504 | 0.6179 | 0.5504 | 0.7419 |
| 0.0917 | 6.6541 | 2116 | 0.5244 | 0.5962 | 0.5244 | 0.7242 |
| 0.0917 | 6.6604 | 2118 | 0.5086 | 0.5595 | 0.5086 | 0.7132 |
| 0.0917 | 6.6667 | 2120 | 0.5043 | 0.5187 | 0.5043 | 0.7102 |
| 0.0917 | 6.6730 | 2122 | 0.5095 | 0.5412 | 0.5095 | 0.7138 |
| 0.0917 | 6.6792 | 2124 | 0.5294 | 0.5747 | 0.5294 | 0.7276 |
| 0.0917 | 6.6855 | 2126 | 0.5357 | 0.5965 | 0.5357 | 0.7319 |
| 0.0917 | 6.6918 | 2128 | 0.5336 | 0.5904 | 0.5336 | 0.7305 |
| 0.0917 | 6.6981 | 2130 | 0.5382 | 0.6041 | 0.5382 | 0.7336 |
| 0.0917 | 6.7044 | 2132 | 0.5238 | 0.5798 | 0.5238 | 0.7237 |
| 0.0917 | 6.7107 | 2134 | 0.5117 | 0.5626 | 0.5117 | 0.7153 |
| 0.0917 | 6.7170 | 2136 | 0.5085 | 0.5562 | 0.5085 | 0.7131 |
| 0.0917 | 6.7233 | 2138 | 0.5082 | 0.5607 | 0.5082 | 0.7129 |
| 0.0917 | 6.7296 | 2140 | 0.5110 | 0.5692 | 0.5110 | 0.7149 |
| 0.0917 | 6.7358 | 2142 | 0.5086 | 0.5718 | 0.5086 | 0.7131 |
| 0.0917 | 6.7421 | 2144 | 0.5002 | 0.5517 | 0.5002 | 0.7073 |
| 0.0917 | 6.7484 | 2146 | 0.4968 | 0.5362 | 0.4968 | 0.7049 |
| 0.0917 | 6.7547 | 2148 | 0.5065 | 0.5680 | 0.5065 | 0.7117 |
| 0.0917 | 6.7610 | 2150 | 0.5217 | 0.5868 | 0.5217 | 0.7223 |
| 0.0917 | 6.7673 | 2152 | 0.5359 | 0.5932 | 0.5359 | 0.7321 |
| 0.0917 | 6.7736 | 2154 | 0.5440 | 0.5864 | 0.5440 | 0.7376 |
| 0.0917 | 6.7799 | 2156 | 0.5253 | 0.5470 | 0.5253 | 0.7248 |
| 0.0917 | 6.7862 | 2158 | 0.5211 | 0.5009 | 0.5211 | 0.7219 |
| 0.0917 | 6.7925 | 2160 | 0.5242 | 0.4876 | 0.5242 | 0.7240 |
| 0.0917 | 6.7987 | 2162 | 0.5241 | 0.5217 | 0.5241 | 0.7240 |
| 0.0917 | 6.8050 | 2164 | 0.5352 | 0.5466 | 0.5352 | 0.7316 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ayrisdev/mobile-ui-design
|
ayrisdev
| 2024-11-06T13:58:06Z | 128 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-06T13:08:26Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mobiluidesign
---
# Mobile Ui Design
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mobiluidesign` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ayrisdev/mobile-ui-design', weight_name='lora.safetensors')
image = pipeline('mobiluidesign, your prompt').images[0]  # include the trigger word "mobiluidesign" in the prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
DiTy/gemma-2-9b-it-russian-strict-function-calling-DPO
|
DiTy
| 2024-11-06T13:56:17Z | 147 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gemma2",
"text-generation",
"conversational",
"function-calling",
"strict-function-calling",
"trl",
"ru",
"dataset:DiTy/function-calling-trl",
"base_model:DiTy/gemma-2-9b-it-russian-function-calling-GGUF",
"base_model:quantized:DiTy/gemma-2-9b-it-russian-function-calling-GGUF",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-24T07:24:57Z |
---
base_model: DiTy/gemma-2-9b-it-russian-function-calling-GGUF
datasets:
- DiTy/function-calling-trl
language:
- ru
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- conversational
- gemma2
- function-calling
- strict-function-calling
- trl
---
# DiTy/gemma-2-9b-it-russian-strict-function-calling-DPO
This model is an aligned version of [DiTy/gemma-2-9b-it-russian-function-calling-GGUF](https://huggingface.co/DiTy/gemma-2-9b-it-russian-function-calling-GGUF), tuned for stricter **Function Calling** behavior on the Russian version of the *DiTy/function-calling-trl* dataset.
This means that the model avoids answering user questions that are not related to the described functions.
The model was trained on non-synthetic preference data, fully annotated by humans, from the Russian version of the <ins>*DiTy/function-calling-trl*</ins> dataset.
In addition to **safetensors**, the model is available in **GGUF** [(gemma-2-9B-it-russian-function-calling-DPO-F16.gguf)](https://huggingface.co/DiTy/gemma-2-9b-it-russian-function-calling-DPO/blob/main/gemma-2-9B-it-russian-function-calling-DPO-F16.gguf) format (in this case, you only need to download a single file; see *[how to run inference with a GGUF model](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#high-level-api)*).
## Strictness Effect
Below are a few examples of the model's behavior before and after *preference optimization*.
```python
def get_weather(city: str):
"""
Функция, которая возвращает погоду в заданном городе.
Args:
city: Город, для которого надо узнать погоду.
"""
import random
return ("weather", '"sunny"' if random.random() > 0.5 else '"rainy"')
def get_sunrise_sunset_times(city: str):
"""
Функция, которая возвращает время восхода и заката для заданного города для текущей даты (дата от пользователя не требуется), в формате списка: [sunrise_time, sunset_time].
Args:
city: Город, в котором можно узнать время восхода и захода солнца.
"""
return ["6:00", "18:00"]
history_messages = [
{"role": "system", "content": "Ты - полезный помощник, имеющий доступ к следующим функциям. Используйте их при необходимости - "},
{"role": "user", "content": USER_MESSAGE},
]
inputs = tokenizer.apply_chat_template(
history_messages,
tokenize=False,
add_generation_prompt=True, # adding prompt for generation
tools=[get_weather, get_sunrise_sunset_times], # our functions (tools)
)
```
Ответ модели при `USER_MESSAGE = "кто такой Зевс?"`:
* [**DiTy/gemma-2-9b-it-russian-function-calling-GGUF** <ins>(*SFT*)</ins>:](https://huggingface.co/DiTy/gemma-2-9b-it-russian-function-calling-GGUF)
```
Zeus is the chief god of Greek mythology. He is the god of the sky, lightning, and thunder, as well as the patron of kings and of justice.
```
* [**This model** <ins>(*DPO*)</ins>:](https://huggingface.co/DiTy/gemma-2-9b-it-russian-function-calling-DPO)
```
Sorry, but I cannot answer your question. My current capabilities allow me to get the weather in a given city or to find out the sunrise and sunset times for a given city.
```
Ответ модели при `USER_MESSAGE = "придумай мне историю о щуке"`:
* [**DiTy/gemma-2-9b-it-russian-function-calling-GGUF** <ins>(*SFT*)</ins>:](https://huggingface.co/DiTy/gemma-2-9b-it-russian-function-calling-GGUF)
```
In the depths of a blue lake, where the sun's rays barely penetrated the water, there lived a pike named Agata. She was cunning and agile, with silvery scales that shimmered like precious stones and sharp teeth capable of tearing apart any prey. Agata was the most experienced pike in the lake, and all the other fish knew of her fearsome reputation.
One day, while Agata was hunting fry, she noticed a strange fish that swam quite unlike all the others. This fish was brightly colored, with multicolored fins, and it did not try to hide from Agata the way the other fish did. Agata, always ready to hunt, approached it, trying to catch it in her sharp teeth.
But this fish, whose name was Liliya, was no ordinary fish. She was artificial, created by scientists to study fish behavior in their natural environment. Liliya was equipped with sensors that collected data about the water temperature
```
* [**This model** <ins>(*DPO*)</ins>:](https://huggingface.co/DiTy/gemma-2-9b-it-russian-function-calling-DPO)
```
Sorry, but I cannot make up a story about a pike. My capabilities are limited to the functions available to me. At the moment I can only:
* Get the weather in a given city.
* Get the sunrise and sunset times for a given city.
```
## Recommended Generation Strategy
By default, the model uses greedy search with the following config:
```
GenerationConfig {
"bos_token_id": 2,
"cache_implementation": "hybrid",
"eos_token_id": 1,
"pad_token_id": 0
}
```
For the most stable behavior, the following generation settings are recommended (a minimal usage sketch follows the table):
| Key | Value |
| --- | ----- |
| `do_sample` | True |
| `temperature` | 0.2 |
| `top_k` | 40 |
| `top_p` | 0.95 |
| `min_p` | 0.5 |
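As a rough illustration, a minimal 🤗 transformers sketch applying these settings is shown below. It is not part of the original card: the model id is taken from this card's title, the example user message is purely illustrative, and `min_p` requires a recent transformers release.
```python
# A minimal sketch (assumptions noted above), not the card's official usage example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DiTy/gemma-2-9b-it-russian-strict-function-calling-DPO"  # taken from this card's title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a prompt with the chat template (tools can be passed as in the example above).
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Какая погода в Москве?"}],  # "What is the weather in Moscow?"
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    top_k=40,
    top_p=0.95,
    min_p=0.5,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```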
## Function-Calling Generation Details
More detailed information is available in the main [Model Card](https://huggingface.co/DiTy/gemma-2-9b-it-russian-function-calling-GGUF).
## Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
|
yoohj58072/krx_qwen2.5_7b_it_v3
|
yoohj58072
| 2024-11-06T13:52:51Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T13:48:42Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yoohj58072
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MaziyarPanahi/llama3-8B-aifeifei-1.2-GGUF
|
MaziyarPanahi
| 2024-11-06T13:50:52Z | 47 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:aifeifei798/llama3-8B-aifeifei-1.2",
"base_model:quantized:aifeifei798/llama3-8B-aifeifei-1.2",
"region:us",
"conversational"
] |
text-generation
| 2024-11-06T13:28:30Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: llama3-8B-aifeifei-1.2-GGUF
base_model: aifeifei798/llama3-8B-aifeifei-1.2
inference: false
model_creator: aifeifei798
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/llama3-8B-aifeifei-1.2-GGUF](https://huggingface.co/MaziyarPanahi/llama3-8B-aifeifei-1.2-GGUF)
- Model creator: [aifeifei798](https://huggingface.co/aifeifei798)
- Original model: [aifeifei798/llama3-8B-aifeifei-1.2](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.2)
## Description
[MaziyarPanahi/llama3-8B-aifeifei-1.2-GGUF](https://huggingface.co/MaziyarPanahi/llama3-8B-aifeifei-1.2-GGUF) contains GGUF format model files for [aifeifei798/llama3-8B-aifeifei-1.2](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.2).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
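For a quick local test, a hedged sketch using llama-cpp-python is shown below. It is not part of the original card: the quantization filename pattern is an assumption and should be checked against this repository's file list, and `Llama.from_pretrained` requires `huggingface_hub` to be installed.
```python
# A hedged sketch, not from the original card: load one of the GGUF files with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/llama3-8B-aifeifei-1.2-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant pattern; pick any file that actually exists in the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Please introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```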
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
prithivMLmods/Lime-Green-Flux-LoRA
|
prithivMLmods
| 2024-11-06T13:50:42Z | 36 | 9 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-11-06T10:40:30Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'Lime Green, An abstract painting of a hamburger on a green background. The hamburger is composed of a large red bun, with a thick layer of cheese between the bun and two slices of tomatoes. The tomatoes are red, and the lettuce is a vibrant green. The bun has a layer of white crumbs on top of it. There is a black base on the bottom of the hamburger.'
output:
url: images/LM1.png
- text: 'Lime Green, An eye-level painting of a gray and white cat with green eyes. The cats ears are sticking up, and the cats nose is pointed towards the left side of the frame. Its whiskers are white, and there are splashes of black on the cat. The background of the painting is a vibrant lime green, with black splashes all around the cat, adding a pop of color to the scene.'
output:
url: images/LM2.png
- text: 'Lime Green, An eye-level painting of a womans face, painted in a vibrant shade of green and red. Her lips are painted a deep red, and her eyes are closed. Her eyebrows are a darker shade of black, while her lips are a lighter shade of red. She is wearing a black baseball cap, and she is holding a phone in her right hand. Her hair is pulled back in a ponytail, adding a pop of color to her face. The background is a lime green, and there are a few lines of paint splattered across the image, adding depth to the composition.'
output:
url: images/LG0.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Lime Green
license: creativeml-openrail-m
---
# Lime-Green-Flux-LoRA
<Gallery />
- Hosted Here🧨: https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Lime-Green-Flux-LoRA**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 20 & 1800|
| Epoch | 10 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 13 [Hi-RES]
## Best Dimensions
- 1024 x 1024 (Default)
## Setting Up
```
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/Lime-Green-Flux-LoRA"
trigger_word = "Lime Green"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
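A hedged usage sketch continuing the block above (not part of the original card): the trigger word must appear in the prompt, and the inference settings below are assumed typical FLUX.1-dev values rather than values documented here.
```python
# Continues the "Setting Up" block above; prompt wording and sampler settings are illustrative.
prompt = f"{trigger_word}, an abstract painting of a hamburger on a vibrant green background"
image = pipe(
    prompt,
    width=1024,              # the card's recommended default dimensions
    height=1024,
    num_inference_steps=28,  # assumed typical FLUX.1-dev value, not from the card
    guidance_scale=3.5,      # assumed typical FLUX.1-dev value, not from the card
).images[0]
image.save("lime_green_sample.png")
```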
## Data source
- https://freeflo.ai/
## Trigger words
You should use `Lime Green` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Lime-Green-Flux-LoRA/tree/main) them in the Files & versions tab.
|
BlackBeenie/Llama-3.1-8B-pythonic-passthrough-merge
|
BlackBeenie
| 2024-11-06T13:49:14Z | 5 | 0 | null |
[
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder",
"mlabonne/Hermes-3-Llama-3.1-8B-lorablated",
"Solshine/reflection-llama-3.1-8B",
"base_model:Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder",
"base_model:merge:Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder",
"base_model:Solshine/reflection-llama-3.1-8B",
"base_model:merge:Solshine/reflection-llama-3.1-8B",
"base_model:mlabonne/Hermes-3-Llama-3.1-8B-lorablated",
"base_model:merge:mlabonne/Hermes-3-Llama-3.1-8B-lorablated",
"region:us"
] | null | 2024-11-06T13:38:03Z |
---
base_model:
- Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- Solshine/reflection-llama-3.1-8B
- Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- Solshine/reflection-llama-3.1-8B
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
tags:
- merge
- mergekit
- lazymergekit
- Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- Solshine/reflection-llama-3.1-8B
---
# Llama-3.1-8B-pythonic-passthrough-merge
Llama-3.1-8B-pythonic-passthrough-merge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder](https://huggingface.co/Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder)
* [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated)
* [Solshine/reflection-llama-3.1-8B](https://huggingface.co/Solshine/reflection-llama-3.1-8B)
* [Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder](https://huggingface.co/Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder)
* [Solshine/reflection-llama-3.1-8B](https://huggingface.co/Solshine/reflection-llama-3.1-8B)
* [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated)
## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 8]
model: Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- sources:
- layer_range: [0, 16]
model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- sources:
- layer_range: [4, 20]
model: Solshine/reflection-llama-3.1-8B
- sources:
- layer_range: [8, 24]
model: Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- sources:
- layer_range: [12, 28]
model: Solshine/reflection-llama-3.1-8B
- sources:
- layer_range: [16, 32]
model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
merge_method: passthrough
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "BlackBeenie/Bloslain-8B-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ngwgsang/bartpho-word-base-vimex
|
ngwgsang
| 2024-11-06T13:49:01Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-06T13:48:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vislupus/bulgarian-joke-master-Qwen2.5-0.5B-Instruct-bnb-4bit-gguf
|
vislupus
| 2024-11-06T13:48:33Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-0.5B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-0.5B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T13:48:26Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
---
# Uploaded model
- **Developed by:** vislupus
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yoohj58072/krx_qwen2_7b_it_v3
|
yoohj58072
| 2024-11-06T13:47:36Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"base_model:unsloth/Qwen2-7B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2-7B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T13:42:40Z |
---
base_model: unsloth/Qwen2-7B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yoohj58072
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-7B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlx-community/Qwen2.5-7B-Instruct-4bit
|
mlx-community
| 2024-11-06T13:47:36Z | 206,381 | 7 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-09-18T20:06:25Z |
---
base_model: Qwen/Qwen2.5-7B
language:
- en
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- mlx
---
# mlx-community/Qwen2.5-7B-Instruct-4bit
The Model [mlx-community/Qwen2.5-7B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen2.5-7B-Instruct-4bit) was converted to MLX format from [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) using mlx-lm version **0.18.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
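Since this is an instruct-tuned model, wrapping the prompt with the tokenizer's chat template usually gives better results; a minimal sketch (not part of the original card):
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")

prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```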
|
KPEP/krx-qwen-2.5-7b-v1.4.8
|
KPEP
| 2024-11-06T13:46:20Z | 6 | 0 | null |
[
"safetensors",
"qwen2",
"krx",
"unsloth",
"trl",
"sft",
"ko",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"region:us"
] | null | 2024-11-06T13:27:24Z |
---
base_model:
- Qwen/Qwen2.5-7B
language:
- ko
tags:
- krx
- unsloth
- trl
- sft
---
|
ItchyChin/OrpoLlama-3-8B-memorize-translate-retrain-merge-all-20241105
|
ItchyChin
| 2024-11-06T13:37:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T13:11:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mikiwazang/Celestial
|
mikiwazang
| 2024-11-06T13:34:22Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-4.0",
"region:us"
] |
text-to-image
| 2024-11-06T13:34:15Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/workspace_trainsamples_792961987880608730_1425337e-bef3-4389-8b52-ee2e2c8390d2.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Celestial Fantasy
license: cc-by-4.0
---
# Celestial Fantasy
<Gallery />
## Trigger words
You should use `Celestial Fantasy` to trigger the image generation.
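A hedged loading sketch with 🧨 diffusers is shown below; it is not part of the original card, and the `weight_name` is an assumption that should be checked against the files in this repository.
```python
# A minimal sketch (assumptions noted above) for loading this FLUX.1-dev LoRA with diffusers.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("mikiwazang/Celestial", weight_name="lora.safetensors")  # assumed filename

# The trigger word must appear in the prompt.
image = pipeline("Celestial Fantasy, a glowing palace floating among the stars").images[0]
image.save("celestial_sample.png")
```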
## Download model
Weights for this model are available in Safetensors format.
[Download](/mikiwazang/Celestial/tree/main) them in the Files & versions tab.
|
giacomoarienti/nsfw-classifier
|
giacomoarienti
| 2024-11-06T13:20:07Z | 94,576 | 27 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"dataset:deepghs/nsfw_detect",
"doi:10.57967/hf/2906",
"license:cc-by-nc-nd-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-05T12:19:30Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: nsfw-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9200000166893005
datasets:
- deepghs/nsfw_detect
license: cc-by-nc-nd-4.0
---
# 🚫 NSFW Classifier - Keep Your Platform Safe and Secure!
### An AI-powered image classifier designed to detect and prevent NSFW content (porn, hentai, sexy images) from being posted on your platform. Trusted by thousands of developers, this solution is perfect for any app or platform that allows users to upload images.
---
## 🚀 Why Choose Our NSFW Image Classifier?
In today's digital world, user-generated content is a double-edged sword. While it fosters creativity and engagement, it also opens the door to inappropriate or illegal content being shared. Our NSFW Image Classifier is specifically designed to identify and filter out explicit images, including **pornography, hentai, and sexually suggestive content**, ensuring your platform remains **safe, secure**, and **legally compliant**.
### 🌟 Key Benefits:
- **Protect Your User Base**: Keep your community safe by preventing exposure to inappropriate content.
- **Legal Compliance**: Avoid legal action by preventing illegal or explicit content from being posted.
- **Seamless Integration**: Our model is easy to integrate into any platform that allows image uploads, including social media, forums, e-commerce sites, and more.
---
## 🔥 Proven Solution - Trusted by Thousands!
With **60,000 downloads per month**, our NSFW Image Classifier has become the go-to solution for platforms looking to **maintain a clean and safe environment** for their users. Many developers and companies have already chosen our solution to protect their communities—will you be next?
---
## 📦 How It Works
1. **Upload an Image**: The user uploads an image to your platform.
2. **NSFW Detection**: Our model analyzes the image and flags any explicit content (porn, hentai, sexy images).
3. **Moderation**: Take appropriate action, whether that means blocking the upload or flagging the content for review (see the usage sketch below).
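A minimal sketch of step 2 using the `transformers` image-classification pipeline. The image path is a placeholder, and the label names and blocking threshold are assumptions that depend on the model's `id2label` config and your moderation policy.

```python
from transformers import pipeline

# Load the ViT-based classifier from this repository.
classifier = pipeline("image-classification", model="giacomoarienti/nsfw-classifier")

# "user_upload.jpg" is a placeholder; file paths, URLs, and PIL images all work.
scores = classifier("user_upload.jpg")
print(scores)  # list of {"label": ..., "score": ...} dicts, highest score first

# Example moderation rule (assumption): block if any non-safe label is confident enough.
SAFE_LABELS = {"neutral", "drawings"}  # check the model's id2label for the real names
THRESHOLD = 0.7                        # tune to your moderation policy
blocked = any(s["label"] not in SAFE_LABELS and s["score"] >= THRESHOLD for s in scores)
print("blocked" if blocked else "allowed")
```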
### **Who Can Benefit?**
- **Social Media Platforms**
- **Online Forums**
- **E-Commerce Sites**
- **Content Sharing Apps**
- **Any platform allowing user-uploaded images**
---
## 🚀 Looking for Even More Power?
Want a model that's **even more powerful and accurate**? We've got a **premium version** trained on a **curated, high-quality dataset** that can detect a wider range of illegal content, including **gore, harmful images, under-18 content, and more**.
📩 **Contact me on Telegram [@mrjack7](https://t.me/mrjack7)** for more details on the **premium model**!
---
## 🌐 API Access
💻 Need easy integration? **API access** is available for seamless deployment into your applications. Whether you're looking to integrate our NSFW image detection capabilities or require more advanced features, our API provides a flexible and scalable solution.
📩 **Contact me on Telegram [@mrjack7](https://t.me/mrjack7)** for more details on **API access**!
---
Let's build something amazing together. 💡
|
Lolimorimorf/damage_trigger_effect_2024-11-06_13_00
|
Lolimorimorf
| 2024-11-06T13:19:36Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-06T13:18:52Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: damage_trigger_effect_2024-11-06_13_00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# damage_trigger_effect_2024-11-06_13_00
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
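For readers who want to reproduce a comparable run, here is a minimal sketch of how the listed hyperparameters map onto `transformers`' `TrainingArguments`; the `output_dir` is arbitrary, and the (undocumented) dataset, tokenizer, and `Trainer` wiring are left out.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="damage_trigger_effect",  # arbitrary output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",                 # AdamW; betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```

These arguments would then be passed to a `Trainer` together with the tokenized token-classification dataset.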
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 34 | 0.6901 |
| No log | 2.0 | 68 | 0.5241 |
| No log | 3.0 | 102 | 0.4540 |
| No log | 4.0 | 136 | 0.4661 |
| No log | 5.0 | 170 | 0.4877 |
| No log | 6.0 | 204 | 0.4716 |
| No log | 7.0 | 238 | 0.4778 |
| No log | 8.0 | 272 | 0.5008 |
| No log | 9.0 | 306 | 0.5195 |
| No log | 10.0 | 340 | 0.5669 |
| No log | 11.0 | 374 | 0.5807 |
| No log | 12.0 | 408 | 0.5776 |
| No log | 13.0 | 442 | 0.5938 |
| No log | 14.0 | 476 | 0.5854 |
| 0.2729 | 15.0 | 510 | 0.5939 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/tamil-llama-13b-instruct-v0.1-GGUF
|
mradermacher
| 2024-11-06T13:15:19Z | 11 | 0 |
transformers
|
[
"transformers",
"gguf",
"ta",
"en",
"base_model:abhinand/tamil-llama-13b-instruct-v0.1",
"base_model:quantized:abhinand/tamil-llama-13b-instruct-v0.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T11:57:35Z |
---
base_model: abhinand/tamil-llama-13b-instruct-v0.1
language:
- ta
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
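As a concrete example, here is a minimal sketch that downloads one of the quants from the table below and runs it with `llama-cpp-python`; the chosen file, context size, and plain-text prompt are assumptions, and the instruct model may expect a specific chat template.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the quantized files listed below (Q4_K_M is marked "fast, recommended").
model_path = hf_hub_download(
    repo_id="mradermacher/tamil-llama-13b-instruct-v0.1-GGUF",
    filename="tamil-llama-13b-instruct-v0.1.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # n_ctx=4096 is an assumption
out = llm("How do I politely greet someone in Tamil?", max_tokens=128)
print(out["choices"][0]["text"])
```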
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q6_K.gguf) | Q6_K | 10.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-instruct-v0.1-GGUF/resolve/main/tamil-llama-13b-instruct-v0.1.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to questions you might have, or to request that another model be quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
apto-as/multilingual-e5-large-Q8_0-GGUF
|
apto-as
| 2024-11-06T13:09:59Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"feature-extraction",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large",
"base_model:quantized:intfloat/multilingual-e5-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-11-06T13:09:52Z |
---
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
- llama-cpp
- gguf-my-repo
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
base_model: intfloat/multilingual-e5-large
model-index:
- name: multilingual-e5-large
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.05970149253731
- type: ap
value: 43.486574390835635
- type: f1
value: 73.32700092140148
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.22055674518201
- type: ap
value: 81.55756710830498
- type: f1
value: 69.28271787752661
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 80.41979010494754
- type: ap
value: 29.34879922376344
- type: f1
value: 67.62475449011278
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.8372591006424
- type: ap
value: 26.557560591210738
- type: f1
value: 64.96619417368707
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.489875
- type: ap
value: 90.98758636917603
- type: f1
value: 93.48554819717332
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.564
- type: f1
value: 46.75122173518047
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.400000000000006
- type: f1
value: 44.17195682400632
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.068
- type: f1
value: 42.38155696855596
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.89
- type: f1
value: 40.84407321682663
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.120000000000005
- type: f1
value: 39.522976223819114
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.832
- type: f1
value: 38.0392533394713
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.725
- type: map_at_10
value: 46.055
- type: map_at_100
value: 46.900999999999996
- type: map_at_1000
value: 46.911
- type: map_at_3
value: 41.548
- type: map_at_5
value: 44.297
- type: mrr_at_1
value: 31.152
- type: mrr_at_10
value: 46.231
- type: mrr_at_100
value: 47.07
- type: mrr_at_1000
value: 47.08
- type: mrr_at_3
value: 41.738
- type: mrr_at_5
value: 44.468999999999994
- type: ndcg_at_1
value: 30.725
- type: ndcg_at_10
value: 54.379999999999995
- type: ndcg_at_100
value: 58.138
- type: ndcg_at_1000
value: 58.389
- type: ndcg_at_3
value: 45.156
- type: ndcg_at_5
value: 50.123
- type: precision_at_1
value: 30.725
- type: precision_at_10
value: 8.087
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.54
- type: precision_at_5
value: 13.542000000000002
- type: recall_at_1
value: 30.725
- type: recall_at_10
value: 80.868
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 55.619
- type: recall_at_5
value: 67.71000000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.30960650674069
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.427074197498996
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.28270056031872
- type: mrr
value: 74.38332673789738
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.05942144105269
- type: cos_sim_spearman
value: 82.51212105850809
- type: euclidean_pearson
value: 81.95639829909122
- type: euclidean_spearman
value: 82.3717564144213
- type: manhattan_pearson
value: 81.79273425468256
- type: manhattan_spearman
value: 82.20066817871039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.46764091858039
- type: f1
value: 99.37717466945023
- type: precision
value: 99.33194154488518
- type: recall
value: 99.46764091858039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.29407880255337
- type: f1
value: 98.11248073959938
- type: precision
value: 98.02443319392472
- type: recall
value: 98.29407880255337
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.79009352268791
- type: f1
value: 97.5176076665512
- type: precision
value: 97.38136473848286
- type: recall
value: 97.79009352268791
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26276987888363
- type: f1
value: 99.20133403545726
- type: precision
value: 99.17500438827453
- type: recall
value: 99.26276987888363
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.72727272727273
- type: f1
value: 84.67672206031433
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.34220182511161
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 33.4987096128766
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.558249999999997
- type: map_at_10
value: 34.44425000000001
- type: map_at_100
value: 35.59833333333333
- type: map_at_1000
value: 35.706916666666665
- type: map_at_3
value: 31.691749999999995
- type: map_at_5
value: 33.252916666666664
- type: mrr_at_1
value: 30.252666666666666
- type: mrr_at_10
value: 38.60675
- type: mrr_at_100
value: 39.42666666666666
- type: mrr_at_1000
value: 39.48408333333334
- type: mrr_at_3
value: 36.17441666666665
- type: mrr_at_5
value: 37.56275
- type: ndcg_at_1
value: 30.252666666666666
- type: ndcg_at_10
value: 39.683
- type: ndcg_at_100
value: 44.68541666666667
- type: ndcg_at_1000
value: 46.94316666666668
- type: ndcg_at_3
value: 34.961749999999995
- type: ndcg_at_5
value: 37.215666666666664
- type: precision_at_1
value: 30.252666666666666
- type: precision_at_10
value: 6.904166666666667
- type: precision_at_100
value: 1.0989999999999995
- type: precision_at_1000
value: 0.14733333333333334
- type: precision_at_3
value: 16.037666666666667
- type: precision_at_5
value: 11.413583333333333
- type: recall_at_1
value: 25.558249999999997
- type: recall_at_10
value: 51.13341666666666
- type: recall_at_100
value: 73.08366666666667
- type: recall_at_1000
value: 88.79483333333334
- type: recall_at_3
value: 37.989083333333326
- type: recall_at_5
value: 43.787833333333325
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.338
- type: map_at_10
value: 18.360000000000003
- type: map_at_100
value: 19.942
- type: map_at_1000
value: 20.134
- type: map_at_3
value: 15.174000000000001
- type: map_at_5
value: 16.830000000000002
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 33.768
- type: mrr_at_100
value: 34.707
- type: mrr_at_1000
value: 34.766000000000005
- type: mrr_at_3
value: 30.977
- type: mrr_at_5
value: 32.528
- type: ndcg_at_1
value: 23.257
- type: ndcg_at_10
value: 25.733
- type: ndcg_at_100
value: 32.288
- type: ndcg_at_1000
value: 35.992000000000004
- type: ndcg_at_3
value: 20.866
- type: ndcg_at_5
value: 22.612
- type: precision_at_1
value: 23.257
- type: precision_at_10
value: 8.124
- type: precision_at_100
value: 1.518
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 15.679000000000002
- type: precision_at_5
value: 12.117
- type: recall_at_1
value: 10.338
- type: recall_at_10
value: 31.154
- type: recall_at_100
value: 54.161
- type: recall_at_1000
value: 75.21900000000001
- type: recall_at_3
value: 19.427
- type: recall_at_5
value: 24.214
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.498
- type: map_at_10
value: 19.103
- type: map_at_100
value: 27.375
- type: map_at_1000
value: 28.981
- type: map_at_3
value: 13.764999999999999
- type: map_at_5
value: 15.950000000000001
- type: mrr_at_1
value: 65.5
- type: mrr_at_10
value: 74.53800000000001
- type: mrr_at_100
value: 74.71799999999999
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.792
- type: mrr_at_5
value: 73.554
- type: ndcg_at_1
value: 53.37499999999999
- type: ndcg_at_10
value: 41.286
- type: ndcg_at_100
value: 45.972
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 46.172999999999995
- type: ndcg_at_5
value: 43.033
- type: precision_at_1
value: 65.5
- type: precision_at_10
value: 32.725
- type: precision_at_100
value: 10.683
- type: precision_at_1000
value: 1.978
- type: precision_at_3
value: 50
- type: precision_at_5
value: 41.349999999999994
- type: recall_at_1
value: 8.498
- type: recall_at_10
value: 25.070999999999998
- type: recall_at_100
value: 52.383
- type: recall_at_1000
value: 74.91499999999999
- type: recall_at_3
value: 15.207999999999998
- type: recall_at_5
value: 18.563
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.5
- type: f1
value: 41.93833713984145
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.914
- type: map_at_10
value: 78.10000000000001
- type: map_at_100
value: 78.333
- type: map_at_1000
value: 78.346
- type: map_at_3
value: 76.626
- type: map_at_5
value: 77.627
- type: mrr_at_1
value: 72.74199999999999
- type: mrr_at_10
value: 82.414
- type: mrr_at_100
value: 82.511
- type: mrr_at_1000
value: 82.513
- type: mrr_at_3
value: 81.231
- type: mrr_at_5
value: 82.065
- type: ndcg_at_1
value: 72.74199999999999
- type: ndcg_at_10
value: 82.806
- type: ndcg_at_100
value: 83.677
- type: ndcg_at_1000
value: 83.917
- type: ndcg_at_3
value: 80.305
- type: ndcg_at_5
value: 81.843
- type: precision_at_1
value: 72.74199999999999
- type: precision_at_10
value: 10.24
- type: precision_at_100
value: 1.089
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 31.268
- type: precision_at_5
value: 19.706000000000003
- type: recall_at_1
value: 67.914
- type: recall_at_10
value: 92.889
- type: recall_at_100
value: 96.42699999999999
- type: recall_at_1000
value: 97.92
- type: recall_at_3
value: 86.21
- type: recall_at_5
value: 90.036
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.166
- type: map_at_10
value: 35.57
- type: map_at_100
value: 37.405
- type: map_at_1000
value: 37.564
- type: map_at_3
value: 30.379
- type: map_at_5
value: 33.324
- type: mrr_at_1
value: 43.519000000000005
- type: mrr_at_10
value: 51.556000000000004
- type: mrr_at_100
value: 52.344
- type: mrr_at_1000
value: 52.373999999999995
- type: mrr_at_3
value: 48.868
- type: mrr_at_5
value: 50.319
- type: ndcg_at_1
value: 43.519000000000005
- type: ndcg_at_10
value: 43.803
- type: ndcg_at_100
value: 50.468999999999994
- type: ndcg_at_1000
value: 53.111
- type: ndcg_at_3
value: 38.893
- type: ndcg_at_5
value: 40.653
- type: precision_at_1
value: 43.519000000000005
- type: precision_at_10
value: 12.253
- type: precision_at_100
value: 1.931
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 25.617
- type: precision_at_5
value: 19.383
- type: recall_at_1
value: 22.166
- type: recall_at_10
value: 51.6
- type: recall_at_100
value: 76.574
- type: recall_at_1000
value: 92.192
- type: recall_at_3
value: 34.477999999999994
- type: recall_at_5
value: 41.835
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.041
- type: map_at_10
value: 62.961999999999996
- type: map_at_100
value: 63.79899999999999
- type: map_at_1000
value: 63.854
- type: map_at_3
value: 59.399
- type: map_at_5
value: 61.669
- type: mrr_at_1
value: 78.082
- type: mrr_at_10
value: 84.321
- type: mrr_at_100
value: 84.49600000000001
- type: mrr_at_1000
value: 84.502
- type: mrr_at_3
value: 83.421
- type: mrr_at_5
value: 83.977
- type: ndcg_at_1
value: 78.082
- type: ndcg_at_10
value: 71.229
- type: ndcg_at_100
value: 74.10900000000001
- type: ndcg_at_1000
value: 75.169
- type: ndcg_at_3
value: 66.28699999999999
- type: ndcg_at_5
value: 69.084
- type: precision_at_1
value: 78.082
- type: precision_at_10
value: 14.993
- type: precision_at_100
value: 1.7239999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 42.737
- type: precision_at_5
value: 27.843
- type: recall_at_1
value: 39.041
- type: recall_at_10
value: 74.96300000000001
- type: recall_at_100
value: 86.199
- type: recall_at_1000
value: 93.228
- type: recall_at_3
value: 64.105
- type: recall_at_5
value: 69.608
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.23160000000001
- type: ap
value: 85.5674856808308
- type: f1
value: 90.18033354786317
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.091
- type: map_at_10
value: 36.753
- type: map_at_100
value: 37.913000000000004
- type: map_at_1000
value: 37.958999999999996
- type: map_at_3
value: 32.818999999999996
- type: map_at_5
value: 35.171
- type: mrr_at_1
value: 24.742
- type: mrr_at_10
value: 37.285000000000004
- type: mrr_at_100
value: 38.391999999999996
- type: mrr_at_1000
value: 38.431
- type: mrr_at_3
value: 33.440999999999995
- type: mrr_at_5
value: 35.75
- type: ndcg_at_1
value: 24.742
- type: ndcg_at_10
value: 43.698
- type: ndcg_at_100
value: 49.145
- type: ndcg_at_1000
value: 50.23800000000001
- type: ndcg_at_3
value: 35.769
- type: ndcg_at_5
value: 39.961999999999996
- type: precision_at_1
value: 24.742
- type: precision_at_10
value: 6.7989999999999995
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.183
- type: recall_at_1
value: 24.091
- type: recall_at_10
value: 65.068
- type: recall_at_100
value: 89.899
- type: recall_at_1000
value: 98.16
- type: recall_at_3
value: 43.68
- type: recall_at_5
value: 53.754999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.66621067031465
- type: f1
value: 93.49622853272142
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.94702733164272
- type: f1
value: 91.17043441745282
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.20146764509674
- type: f1
value: 91.98359080555608
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.99780770435328
- type: f1
value: 89.19746342724068
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.78486912871998
- type: f1
value: 89.24578823628642
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.74502712477394
- type: f1
value: 89.00297573881542
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.9046967624259
- type: f1
value: 59.36787125785957
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.5280360664976
- type: f1
value: 57.17723440888718
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.44029352901934
- type: f1
value: 54.052855531072964
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 70.5606013153774
- type: f1
value: 52.62215934386531
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.11581211903908
- type: f1
value: 52.341291845645465
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.28933092224233
- type: f1
value: 57.07918745504911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.38063214525892
- type: f1
value: 59.46463723443009
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.06926698049766
- type: f1
value: 52.49084283283562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.74983187626093
- type: f1
value: 56.960640620165904
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.86550100874243
- type: f1
value: 62.47370548140688
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.971082716879636
- type: f1
value: 61.03812421957381
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98318762609282
- type: f1
value: 51.51207916008392
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.45527908540686
- type: f1
value: 66.16631905400318
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.32750504371216
- type: f1
value: 66.16755288646591
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.09213180901143
- type: f1
value: 66.95654394661507
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.75588433086752
- type: f1
value: 71.79973779656923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.49428379287154
- type: f1
value: 68.37494379215734
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.90921318090115
- type: f1
value: 66.79517376481645
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.12104909213181
- type: f1
value: 67.29448842879584
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.34095494283793
- type: f1
value: 67.01134288992947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.61264290517822
- type: f1
value: 64.68730512660757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.79757901815738
- type: f1
value: 65.24938539425598
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.68728984532616
- type: f1
value: 67.0487169762553
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.07464694014795
- type: f1
value: 59.183532276789286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.04707464694015
- type: f1
value: 67.66829629003848
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.42434431741762
- type: f1
value: 59.01617226544757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.53127101546738
- type: f1
value: 68.10033760906255
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.50504371217215
- type: f1
value: 69.74931103158923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.91190316072628
- type: f1
value: 54.05551136648796
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.78211163416275
- type: f1
value: 49.874888544058535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.017484868863484
- type: f1
value: 44.53364263352014
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.16207128446537
- type: f1
value: 59.01185692320829
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.42501681237391
- type: f1
value: 67.13169450166086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0780094149294
- type: f1
value: 64.41720167850707
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.57162071284466
- type: f1
value: 62.414138683804424
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.71149966375252
- type: f1
value: 58.594805125087234
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.03900470746471
- type: f1
value: 63.87937257883887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.8776059179556
- type: f1
value: 57.48587618059131
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87895090786819
- type: f1
value: 66.8141299430347
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.45057162071285
- type: f1
value: 67.46444039673516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.546738399462
- type: f1
value: 68.63640876702655
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.72965702757229
- type: f1
value: 68.54119560379115
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.35574983187625
- type: f1
value: 65.88844917691927
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.70477471418964
- type: f1
value: 69.19665697061978
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0880968392737
- type: f1
value: 64.76962317666086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.18493611297916
- type: f1
value: 62.49984559035371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.75857431069265
- type: f1
value: 69.20053687623418
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.500336247478145
- type: f1
value: 55.2972398687929
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.68997982515132
- type: f1
value: 59.36848202755348
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.01950235373235
- type: f1
value: 60.09351954625423
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.29186281102892
- type: f1
value: 67.57860496703447
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.77471418964357
- type: f1
value: 61.913983147713836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87222595830532
- type: f1
value: 66.03679033708141
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.04505716207127
- type: f1
value: 61.28569169817908
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.38466711499663
- type: f1
value: 67.20532357036844
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.12306657700067
- type: f1
value: 68.91251226588182
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.20040349697378
- type: f1
value: 66.02657347714175
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.73907195696032
- type: f1
value: 66.98484521791418
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.58843308675185
- type: f1
value: 58.95591723092005
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.22730329522528
- type: f1
value: 66.0894499712115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48285137861465
- type: f1
value: 65.21963176785157
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.74714189643578
- type: f1
value: 66.8212192745412
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.09213180901143
- type: f1
value: 56.70735546356339
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.05716207128448
- type: f1
value: 74.8413712365364
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.69737726967047
- type: f1
value: 74.7664341963
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.90383322125084
- type: f1
value: 73.59201554448323
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.51176866173503
- type: f1
value: 77.46104434577758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.31069266980496
- type: f1
value: 74.61048660675635
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.95225285810356
- type: f1
value: 72.33160006574627
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.12373907195696
- type: f1
value: 73.20921012557481
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.86684599865501
- type: f1
value: 73.82348774610831
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.40215198386012
- type: f1
value: 71.11945183971858
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.12844653665098
- type: f1
value: 71.34450495911766
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.52252858103566
- type: f1
value: 73.98878711342999
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.93611297915265
- type: f1
value: 63.723200467653385
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11903160726295
- type: f1
value: 73.82138439467096
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.15198386012105
- type: f1
value: 66.02172193802167
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.32414256893072
- type: f1
value: 74.30943421170574
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.46805648957633
- type: f1
value: 77.62808409298209
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.318762609280434
- type: f1
value: 62.094284066075076
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.34902488231338
- type: f1
value: 57.12893860987984
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.88433086751849
- type: f1
value: 48.2272350802058
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.4425016812374
- type: f1
value: 64.61463095996173
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.04707464694015
- type: f1
value: 75.05099199098998
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.50437121721586
- type: f1
value: 69.83397721096314
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.94283792871553
- type: f1
value: 68.8704663703913
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.79488903833222
- type: f1
value: 63.615424063345436
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.88231338264963
- type: f1
value: 68.57892302593237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.248150638870214
- type: f1
value: 61.06680605338809
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.84196368527236
- type: f1
value: 74.52566464968763
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.8285137861466
- type: f1
value: 74.8853197608802
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.13248150638869
- type: f1
value: 74.3982040999179
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.49024882313383
- type: f1
value: 73.82153848368573
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.72158708809684
- type: f1
value: 71.85049433180541
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.137861466039
- type: f1
value: 75.37628348188467
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.86953597848016
- type: f1
value: 71.87537624521661
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.27572293207801
- type: f1
value: 68.80017302344231
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.09952925353059
- type: f1
value: 76.07992707688408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.140551445864155
- type: f1
value: 61.73855010331415
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.27774041694687
- type: f1
value: 64.83664868894539
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.69468728984533
- type: f1
value: 64.76239666920868
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.44653665097512
- type: f1
value: 73.14646052013873
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.71351714862139
- type: f1
value: 66.67212180163382
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.9946200403497
- type: f1
value: 73.87348793725525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.15400134498992
- type: f1
value: 67.09433241421094
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.11365164761264
- type: f1
value: 73.59502539433753
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.82582380632145
- type: f1
value: 76.89992945316313
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.81237390719569
- type: f1
value: 72.36499770986265
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.480506569594695
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.71252128004552
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.421396787056548
- type: mrr
value: 32.48155274872267
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.595
- type: map_at_10
value: 12.642000000000001
- type: map_at_100
value: 15.726
- type: map_at_1000
value: 17.061999999999998
- type: map_at_3
value: 9.125
- type: map_at_5
value: 10.866000000000001
- type: mrr_at_1
value: 43.344
- type: mrr_at_10
value: 52.227999999999994
- type: mrr_at_100
value: 52.898999999999994
- type: mrr_at_1000
value: 52.944
- type: mrr_at_3
value: 49.845
- type: mrr_at_5
value: 51.115
- type: ndcg_at_1
value: 41.949999999999996
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 30.869999999999997
- type: ndcg_at_1000
value: 39.487
- type: ndcg_at_3
value: 38.903999999999996
- type: ndcg_at_5
value: 37.236999999999995
- type: precision_at_1
value: 43.344
- type: precision_at_10
value: 25.480000000000004
- type: precision_at_100
value: 7.672
- type: precision_at_1000
value: 2.028
- type: precision_at_3
value: 36.636
- type: precision_at_5
value: 32.632
- type: recall_at_1
value: 5.595
- type: recall_at_10
value: 16.466
- type: recall_at_100
value: 31.226
- type: recall_at_1000
value: 62.778999999999996
- type: recall_at_3
value: 9.931
- type: recall_at_5
value: 12.884
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.414
- type: map_at_10
value: 56.754000000000005
- type: map_at_100
value: 57.457
- type: map_at_1000
value: 57.477999999999994
- type: map_at_3
value: 52.873999999999995
- type: map_at_5
value: 55.175
- type: mrr_at_1
value: 45.278
- type: mrr_at_10
value: 59.192
- type: mrr_at_100
value: 59.650000000000006
- type: mrr_at_1000
value: 59.665
- type: mrr_at_3
value: 56.141
- type: mrr_at_5
value: 57.998000000000005
- type: ndcg_at_1
value: 45.278
- type: ndcg_at_10
value: 64.056
- type: ndcg_at_100
value: 66.89
- type: ndcg_at_1000
value: 67.364
- type: ndcg_at_3
value: 56.97
- type: ndcg_at_5
value: 60.719
- type: precision_at_1
value: 45.278
- type: precision_at_10
value: 9.994
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.512
- type: precision_at_5
value: 17.509
- type: recall_at_1
value: 40.414
- type: recall_at_10
value: 83.596
- type: recall_at_100
value: 95.72
- type: recall_at_1000
value: 99.24
- type: recall_at_3
value: 65.472
- type: recall_at_5
value: 74.039
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.352
- type: map_at_10
value: 84.369
- type: map_at_100
value: 85.02499999999999
- type: map_at_1000
value: 85.04
- type: map_at_3
value: 81.42399999999999
- type: map_at_5
value: 83.279
- type: mrr_at_1
value: 81.05
- type: mrr_at_10
value: 87.401
- type: mrr_at_100
value: 87.504
- type: mrr_at_1000
value: 87.505
- type: mrr_at_3
value: 86.443
- type: mrr_at_5
value: 87.10799999999999
- type: ndcg_at_1
value: 81.04
- type: ndcg_at_10
value: 88.181
- type: ndcg_at_100
value: 89.411
- type: ndcg_at_1000
value: 89.507
- type: ndcg_at_3
value: 85.28099999999999
- type: ndcg_at_5
value: 86.888
- type: precision_at_1
value: 81.04
- type: precision_at_10
value: 13.406
- type: precision_at_100
value: 1.5350000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.31
- type: precision_at_5
value: 24.54
- type: recall_at_1
value: 70.352
- type: recall_at_10
value: 95.358
- type: recall_at_100
value: 99.541
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 87.111
- type: recall_at_5
value: 91.643
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.54068723291946
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.216287629895994
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.023000000000001
- type: map_at_10
value: 10.071
- type: map_at_100
value: 11.892
- type: map_at_1000
value: 12.196
- type: map_at_3
value: 7.234
- type: map_at_5
value: 8.613999999999999
- type: mrr_at_1
value: 19.900000000000002
- type: mrr_at_10
value: 30.516
- type: mrr_at_100
value: 31.656000000000002
- type: mrr_at_1000
value: 31.723000000000003
- type: mrr_at_3
value: 27.400000000000002
- type: mrr_at_5
value: 29.270000000000003
- type: ndcg_at_1
value: 19.900000000000002
- type: ndcg_at_10
value: 17.474
- type: ndcg_at_100
value: 25.020999999999997
- type: ndcg_at_1000
value: 30.728
- type: ndcg_at_3
value: 16.588
- type: ndcg_at_5
value: 14.498
- type: precision_at_1
value: 19.900000000000002
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 2.011
- type: precision_at_1000
value: 0.33899999999999997
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 12.839999999999998
- type: recall_at_1
value: 4.023000000000001
- type: recall_at_10
value: 18.497
- type: recall_at_100
value: 40.8
- type: recall_at_1000
value: 68.812
- type: recall_at_3
value: 9.508
- type: recall_at_5
value: 12.983
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.967008785134
- type: cos_sim_spearman
value: 80.23142141101837
- type: euclidean_pearson
value: 81.20166064704539
- type: euclidean_spearman
value: 80.18961335654585
- type: manhattan_pearson
value: 81.13925443187625
- type: manhattan_spearman
value: 80.07948723044424
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.94262461316023
- type: cos_sim_spearman
value: 80.01596278563865
- type: euclidean_pearson
value: 83.80799622922581
- type: euclidean_spearman
value: 79.94984954947103
- type: manhattan_pearson
value: 83.68473841756281
- type: manhattan_spearman
value: 79.84990707951822
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.57346443146068
- type: cos_sim_spearman
value: 81.54689837570866
- type: euclidean_pearson
value: 81.10909881516007
- type: euclidean_spearman
value: 81.56746243261762
- type: manhattan_pearson
value: 80.87076036186582
- type: manhattan_spearman
value: 81.33074987964402
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.54733787179849
- type: cos_sim_spearman
value: 77.72202105610411
- type: euclidean_pearson
value: 78.9043595478849
- type: euclidean_spearman
value: 77.93422804309435
- type: manhattan_pearson
value: 78.58115121621368
- type: manhattan_spearman
value: 77.62508135122033
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.59880017237558
- type: cos_sim_spearman
value: 89.31088630824758
- type: euclidean_pearson
value: 88.47069261564656
- type: euclidean_spearman
value: 89.33581971465233
- type: manhattan_pearson
value: 88.40774264100956
- type: manhattan_spearman
value: 89.28657485627835
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.08055117917084
- type: cos_sim_spearman
value: 85.78491813080304
- type: euclidean_pearson
value: 84.99329155500392
- type: euclidean_spearman
value: 85.76728064677287
- type: manhattan_pearson
value: 84.87947428989587
- type: manhattan_spearman
value: 85.62429454917464
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.14190939287384
- type: cos_sim_spearman
value: 82.27331573306041
- type: euclidean_pearson
value: 81.891896953716
- type: euclidean_spearman
value: 82.37695542955998
- type: manhattan_pearson
value: 81.73123869460504
- type: manhattan_spearman
value: 82.19989168441421
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.84695301843362
- type: cos_sim_spearman
value: 77.87790986014461
- type: euclidean_pearson
value: 76.91981583106315
- type: euclidean_spearman
value: 77.88154772749589
- type: manhattan_pearson
value: 76.94953277451093
- type: manhattan_spearman
value: 77.80499230728604
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.44657840482016
- type: cos_sim_spearman
value: 75.05531095119674
- type: euclidean_pearson
value: 75.88161755829299
- type: euclidean_spearman
value: 74.73176238219332
- type: manhattan_pearson
value: 75.63984765635362
- type: manhattan_spearman
value: 74.86476440770737
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.64700140524133
- type: cos_sim_spearman
value: 86.16014210425672
- type: euclidean_pearson
value: 86.49086860843221
- type: euclidean_spearman
value: 86.09729326815614
- type: manhattan_pearson
value: 86.43406265125513
- type: manhattan_spearman
value: 86.17740150939994
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.91170098764921
- type: cos_sim_spearman
value: 88.12437004058931
- type: euclidean_pearson
value: 88.81828254494437
- type: euclidean_spearman
value: 88.14831794572122
- type: manhattan_pearson
value: 88.93442183448961
- type: manhattan_spearman
value: 88.15254630778304
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.91390577997292
- type: cos_sim_spearman
value: 71.22979457536074
- type: euclidean_pearson
value: 74.40314008106749
- type: euclidean_spearman
value: 72.54972136083246
- type: manhattan_pearson
value: 73.85687539530218
- type: manhattan_spearman
value: 72.09500771742637
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.9301067983089
- type: cos_sim_spearman
value: 80.74989828346473
- type: euclidean_pearson
value: 81.36781301814257
- type: euclidean_spearman
value: 80.9448819964426
- type: manhattan_pearson
value: 81.0351322685609
- type: manhattan_spearman
value: 80.70192121844177
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.13820465980005
- type: cos_sim_spearman
value: 86.73532498758757
- type: euclidean_pearson
value: 87.21329451846637
- type: euclidean_spearman
value: 86.57863198601002
- type: manhattan_pearson
value: 87.06973713818554
- type: manhattan_spearman
value: 86.47534918791499
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.48720108904415
- type: cos_sim_spearman
value: 85.62221757068387
- type: euclidean_pearson
value: 86.1010129512749
- type: euclidean_spearman
value: 85.86580966509942
- type: manhattan_pearson
value: 86.26800938808971
- type: manhattan_spearman
value: 85.88902721678429
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.98021347333516
- type: cos_sim_spearman
value: 84.53806553803501
- type: euclidean_pearson
value: 84.61483347248364
- type: euclidean_spearman
value: 85.14191408011702
- type: manhattan_pearson
value: 84.75297588825967
- type: manhattan_spearman
value: 85.33176753669242
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.51856644893233
- type: cos_sim_spearman
value: 85.27510748506413
- type: euclidean_pearson
value: 85.09886861540977
- type: euclidean_spearman
value: 85.62579245860887
- type: manhattan_pearson
value: 84.93017860464607
- type: manhattan_spearman
value: 85.5063988898453
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.581573200584195
- type: cos_sim_spearman
value: 63.05503590247928
- type: euclidean_pearson
value: 63.652564812602094
- type: euclidean_spearman
value: 62.64811520876156
- type: manhattan_pearson
value: 63.506842893061076
- type: manhattan_spearman
value: 62.51289573046917
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.2248801729127
- type: cos_sim_spearman
value: 56.5936604678561
- type: euclidean_pearson
value: 43.98149464089
- type: euclidean_spearman
value: 56.108561882423615
- type: manhattan_pearson
value: 43.86880305903564
- type: manhattan_spearman
value: 56.04671150510166
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.17564527009831
- type: cos_sim_spearman
value: 64.57978560979488
- type: euclidean_pearson
value: 58.8818330154583
- type: euclidean_spearman
value: 64.99214839071281
- type: manhattan_pearson
value: 58.72671436121381
- type: manhattan_spearman
value: 65.10713416616109
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.772131864023297
- type: cos_sim_spearman
value: 34.68200792408681
- type: euclidean_pearson
value: 16.68082419005441
- type: euclidean_spearman
value: 34.83099932652166
- type: manhattan_pearson
value: 16.52605949659529
- type: manhattan_spearman
value: 34.82075801399475
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.42415189043831
- type: cos_sim_spearman
value: 63.54594264576758
- type: euclidean_pearson
value: 57.36577498297745
- type: euclidean_spearman
value: 63.111466379158074
- type: manhattan_pearson
value: 57.584543715873885
- type: manhattan_spearman
value: 63.22361054139183
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.55216762405518
- type: cos_sim_spearman
value: 56.98670142896412
- type: euclidean_pearson
value: 50.15318757562699
- type: euclidean_spearman
value: 56.524941926541906
- type: manhattan_pearson
value: 49.955618528674904
- type: manhattan_spearman
value: 56.37102209240117
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.20540980338571
- type: cos_sim_spearman
value: 59.9009453504406
- type: euclidean_pearson
value: 49.557749853620535
- type: euclidean_spearman
value: 59.76631621172456
- type: manhattan_pearson
value: 49.62340591181147
- type: manhattan_spearman
value: 59.94224880322436
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.508169956576985
- type: cos_sim_spearman
value: 66.82461565306046
- type: euclidean_pearson
value: 56.2274426480083
- type: euclidean_spearman
value: 66.6775323848333
- type: manhattan_pearson
value: 55.98277796300661
- type: manhattan_spearman
value: 66.63669848497175
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.86478788045507
- type: cos_sim_spearman
value: 76.7946552053193
- type: euclidean_pearson
value: 75.01598530490269
- type: euclidean_spearman
value: 76.83618917858281
- type: manhattan_pearson
value: 74.68337628304332
- type: manhattan_spearman
value: 76.57480204017773
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.922619099401984
- type: cos_sim_spearman
value: 56.599362477240774
- type: euclidean_pearson
value: 56.68307052369783
- type: euclidean_spearman
value: 54.28760436777401
- type: manhattan_pearson
value: 56.67763566500681
- type: manhattan_spearman
value: 53.94619541711359
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.74357206710913
- type: cos_sim_spearman
value: 72.5208244925311
- type: euclidean_pearson
value: 67.49254562186032
- type: euclidean_spearman
value: 72.02469076238683
- type: manhattan_pearson
value: 67.45251772238085
- type: manhattan_spearman
value: 72.05538819984538
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.25734330033191
- type: cos_sim_spearman
value: 76.98349083946823
- type: euclidean_pearson
value: 73.71642838667736
- type: euclidean_spearman
value: 77.01715504651384
- type: manhattan_pearson
value: 73.61712711868105
- type: manhattan_spearman
value: 77.01392571153896
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.18215462781212
- type: cos_sim_spearman
value: 65.54373266117607
- type: euclidean_pearson
value: 64.54126095439005
- type: euclidean_spearman
value: 65.30410369102711
- type: manhattan_pearson
value: 63.50332221148234
- type: manhattan_spearman
value: 64.3455878104313
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30509221440029
- type: cos_sim_spearman
value: 65.99582704642478
- type: euclidean_pearson
value: 63.43818859884195
- type: euclidean_spearman
value: 66.83172582815764
- type: manhattan_pearson
value: 63.055779168508764
- type: manhattan_spearman
value: 65.49585020501449
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.587830825340404
- type: cos_sim_spearman
value: 68.93467614588089
- type: euclidean_pearson
value: 62.3073527367404
- type: euclidean_spearman
value: 69.69758171553175
- type: manhattan_pearson
value: 61.9074580815789
- type: manhattan_spearman
value: 69.57696375597865
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.143220125577066
- type: cos_sim_spearman
value: 67.78857859159226
- type: euclidean_pearson
value: 55.58225107923733
- type: euclidean_spearman
value: 67.80662907184563
- type: manhattan_pearson
value: 56.24953502726514
- type: manhattan_spearman
value: 67.98262125431616
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 21.826928900322066
- type: cos_sim_spearman
value: 49.578506634400405
- type: euclidean_pearson
value: 27.939890138843214
- type: euclidean_spearman
value: 52.71950519136242
- type: manhattan_pearson
value: 26.39878683847546
- type: manhattan_spearman
value: 47.54609580342499
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.27603854632001
- type: cos_sim_spearman
value: 50.709255283710995
- type: euclidean_pearson
value: 59.5419024445929
- type: euclidean_spearman
value: 50.709255283710995
- type: manhattan_pearson
value: 59.03256832438492
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.00757054859712
- type: cos_sim_spearman
value: 87.29283629622222
- type: euclidean_pearson
value: 86.54824171775536
- type: euclidean_spearman
value: 87.24364730491402
- type: manhattan_pearson
value: 86.5062156915074
- type: manhattan_spearman
value: 87.15052170378574
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.03549357197389
- type: mrr
value: 95.05437645143527
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.260999999999996
- type: map_at_10
value: 66.259
- type: map_at_100
value: 66.884
- type: map_at_1000
value: 66.912
- type: map_at_3
value: 63.685
- type: map_at_5
value: 65.35499999999999
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 67.5
- type: mrr_at_100
value: 68.013
- type: mrr_at_1000
value: 68.038
- type: mrr_at_3
value: 65.61099999999999
- type: mrr_at_5
value: 66.861
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 70.41
- type: ndcg_at_100
value: 73.10600000000001
- type: ndcg_at_1000
value: 73.846
- type: ndcg_at_3
value: 66.133
- type: ndcg_at_5
value: 68.499
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.0630000000000002
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 57.260999999999996
- type: recall_at_10
value: 81.94399999999999
- type: recall_at_100
value: 93.867
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.339
- type: recall_at_5
value: 76.25
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74356435643564
- type: cos_sim_ap
value: 93.13411948212683
- type: cos_sim_f1
value: 86.80521991300147
- type: cos_sim_precision
value: 84.00374181478017
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.67920792079208
- type: dot_ap
value: 89.27277565444479
- type: dot_f1
value: 83.9276990718124
- type: dot_precision
value: 82.04393505253104
- type: dot_recall
value: 85.9
- type: euclidean_accuracy
value: 99.74257425742574
- type: euclidean_ap
value: 93.17993008259062
- type: euclidean_f1
value: 86.69396110542476
- type: euclidean_precision
value: 88.78406708595388
- type: euclidean_recall
value: 84.7
- type: manhattan_accuracy
value: 99.74257425742574
- type: manhattan_ap
value: 93.14413755550099
- type: manhattan_f1
value: 86.82483594144371
- type: manhattan_precision
value: 87.66564729867483
- type: manhattan_recall
value: 86
- type: max_accuracy
value: 99.74356435643564
- type: max_ap
value: 93.17993008259062
- type: max_f1
value: 86.82483594144371
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.525863806168566
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.68850574423839
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.71580650644033
- type: mrr
value: 50.50971903913081
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.152190498799484
- type: cos_sim_spearman
value: 29.686180371952727
- type: dot_pearson
value: 27.248664793816342
- type: dot_spearman
value: 28.37748983721745
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.20400000000000001
- type: map_at_10
value: 1.6209999999999998
- type: map_at_100
value: 9.690999999999999
- type: map_at_1000
value: 23.733
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.885
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.56700000000001
- type: mrr_at_100
value: 86.56700000000001
- type: mrr_at_1000
value: 86.56700000000001
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 86.56700000000001
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 71.326
- type: ndcg_at_100
value: 54.208999999999996
- type: ndcg_at_1000
value: 49.252
- type: ndcg_at_3
value: 74.235
- type: ndcg_at_5
value: 73.833
- type: precision_at_1
value: 78
- type: precision_at_10
value: 74.8
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 78
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.20400000000000001
- type: recall_at_10
value: 1.894
- type: recall_at_100
value: 13.245999999999999
- type: recall_at_1000
value: 46.373
- type: recall_at_3
value: 0.613
- type: recall_at_5
value: 0.991
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.69999999999999
- type: precision
value: 94.11666666666667
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.20809248554913
- type: f1
value: 63.431048720066066
- type: precision
value: 61.69143958161298
- type: recall
value: 68.20809248554913
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.21951219512195
- type: f1
value: 66.82926829268293
- type: precision
value: 65.1260162601626
- type: recall
value: 71.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.26666666666667
- type: precision
value: 95.8
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.3
- type: f1
value: 99.06666666666666
- type: precision
value: 98.95
- type: recall
value: 99.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.63333333333333
- type: precision
value: 96.26666666666668
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.86666666666666
- type: precision
value: 94.31666666666668
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.01492537313433
- type: f1
value: 40.178867566927266
- type: precision
value: 38.179295828549556
- type: recall
value: 47.01492537313433
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.62537480063796
- type: precision
value: 82.44555555555554
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.48780487804879
- type: f1
value: 75.45644599303138
- type: precision
value: 73.37398373983739
- type: recall
value: 80.48780487804879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.95666666666666
- type: precision
value: 91.125
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.73754556500607
- type: f1
value: 89.65168084244632
- type: precision
value: 88.73025516403402
- type: recall
value: 91.73754556500607
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.04347826086956
- type: f1
value: 76.2128364389234
- type: precision
value: 74.2
- type: recall
value: 81.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.65217391304348
- type: f1
value: 79.4376811594203
- type: precision
value: 77.65797101449274
- type: recall
value: 83.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 85.02690476190476
- type: precision
value: 83.96261904761904
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.3
- type: f1
value: 86.52333333333333
- type: precision
value: 85.22833333333332
- type: recall
value: 89.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.01809408926418
- type: f1
value: 59.00594446432805
- type: precision
value: 56.827215807915444
- type: recall
value: 65.01809408926418
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.2
- type: f1
value: 88.58
- type: precision
value: 87.33333333333334
- type: recall
value: 91.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.199999999999996
- type: f1
value: 53.299166276284915
- type: precision
value: 51.3383908045977
- type: recall
value: 59.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.2
- type: precision
value: 90.25
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.76190476190476
- type: f1
value: 59.867110667110666
- type: precision
value: 58.07390192653351
- type: recall
value: 64.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.2
- type: f1
value: 71.48147546897547
- type: precision
value: 69.65409090909091
- type: recall
value: 76.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.8
- type: f1
value: 92.14
- type: precision
value: 91.35833333333333
- type: recall
value: 93.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.2
- type: precision
value: 96.85000000000001
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 92.93333333333334
- type: precision
value: 92.13333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.1
- type: f1
value: 69.14817460317461
- type: precision
value: 67.2515873015873
- type: recall
value: 74.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 94.01333333333335
- type: precision
value: 93.46666666666667
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.9
- type: f1
value: 72.07523809523809
- type: precision
value: 70.19777777777779
- type: recall
value: 76.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.31666666666666
- type: precision
value: 91.43333333333332
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.76666666666668
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.85714285714286
- type: f1
value: 90.92093441150045
- type: precision
value: 90.00449236298293
- type: recall
value: 92.85714285714286
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.16239316239316
- type: f1
value: 91.33903133903132
- type: precision
value: 90.56267806267806
- type: recall
value: 93.16239316239316
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.25666666666666
- type: precision
value: 89.25833333333334
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.22727272727272
- type: f1
value: 87.53030303030303
- type: precision
value: 86.37121212121211
- type: recall
value: 90.22727272727272
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.03563941299791
- type: f1
value: 74.7349505840072
- type: precision
value: 72.9035639412998
- type: recall
value: 79.03563941299791
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97
- type: f1
value: 96.15
- type: precision
value: 95.76666666666668
- type: recall
value: 97
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.26459143968872
- type: f1
value: 71.55642023346303
- type: precision
value: 69.7544932369835
- type: recall
value: 76.26459143968872
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.119658119658126
- type: f1
value: 51.65242165242165
- type: precision
value: 49.41768108434775
- type: recall
value: 58.119658119658126
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.52055555555555
- type: precision
value: 67.7574938949939
- type: recall
value: 74.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.31666666666666
- type: precision
value: 92.60000000000001
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.63551401869158
- type: f1
value: 72.35202492211837
- type: precision
value: 70.60358255451713
- type: recall
value: 76.63551401869158
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.4811111111111
- type: precision
value: 87.7452380952381
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95
- type: f1
value: 93.60666666666667
- type: precision
value: 92.975
- type: recall
value: 95
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 63.01595782872099
- type: precision
value: 61.596587301587306
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.52999999999999
- type: precision
value: 94
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.28999999999999
- type: precision
value: 92.675
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.75
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.9
- type: f1
value: 89.83
- type: precision
value: 88.92
- type: recall
value: 91.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34222222222223
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.333333333333336
- type: f1
value: 55.31203703703703
- type: precision
value: 53.39971108326371
- type: recall
value: 60.333333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 12.9
- type: f1
value: 11.099861903031458
- type: precision
value: 10.589187932631877
- type: recall
value: 12.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7
- type: f1
value: 83.0152380952381
- type: precision
value: 81.37833333333333
- type: recall
value: 86.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.39285714285714
- type: f1
value: 56.832482993197274
- type: precision
value: 54.56845238095237
- type: recall
value: 63.39285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.73765093304062
- type: f1
value: 41.555736920720456
- type: precision
value: 39.06874531737319
- type: recall
value: 48.73765093304062
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 41.099999999999994
- type: f1
value: 36.540165945165946
- type: precision
value: 35.05175685425686
- type: recall
value: 41.099999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.42333333333333
- type: precision
value: 92.75833333333333
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.63333333333334
- type: precision
value: 93.01666666666665
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.64833333333334
- type: precision
value: 71.90282106782105
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.4
- type: f1
value: 54.90521367521367
- type: precision
value: 53.432840025471606
- type: recall
value: 59.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.6
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 62.25926129426129
- type: precision
value: 60.408376623376626
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.60666666666667
- type: precision
value: 86.45277777777778
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97
- type: precision
value: 96.65
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39746031746031
- type: precision
value: 90.6125
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.11678832116788
- type: f1
value: 27.210415386260234
- type: precision
value: 26.20408990846947
- type: recall
value: 32.11678832116788
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.787319277832475
- type: precision
value: 6.3452094433344435
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.08
- type: precision
value: 94.61666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.3
- type: f1
value: 93.88333333333333
- type: precision
value: 93.18333333333332
- type: recall
value: 95.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.11904761904762
- type: f1
value: 80.69444444444444
- type: precision
value: 78.72023809523809
- type: recall
value: 85.11904761904762
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.1
- type: f1
value: 9.276381801735853
- type: precision
value: 8.798174603174601
- type: recall
value: 11.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.56107660455487
- type: f1
value: 58.70433569191332
- type: precision
value: 56.896926581464015
- type: recall
value: 63.56107660455487
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.10000000000001
- type: precision
value: 92.35
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 96.01222222222222
- type: precision
value: 95.67083333333332
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.2
- type: f1
value: 7.911555250305249
- type: precision
value: 7.631246556216846
- type: recall
value: 9.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.48917748917748
- type: f1
value: 72.27375798804371
- type: precision
value: 70.14430014430013
- type: recall
value: 77.48917748917748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.09923664122137
- type: f1
value: 72.61541257724463
- type: precision
value: 70.8998380754106
- type: recall
value: 77.09923664122137
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2532751091703
- type: f1
value: 97.69529354682193
- type: precision
value: 97.42843279961184
- type: recall
value: 98.2532751091703
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.8
- type: f1
value: 79.14672619047619
- type: precision
value: 77.59489247311828
- type: recall
value: 82.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.35028248587571
- type: f1
value: 92.86252354048965
- type: precision
value: 92.2080979284369
- type: recall
value: 94.35028248587571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.282429263935621
- type: precision
value: 5.783274240739785
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 91.025
- type: precision
value: 90.30428571428571
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81
- type: f1
value: 77.8232380952381
- type: precision
value: 76.60194444444444
- type: recall
value: 81
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91
- type: f1
value: 88.70857142857142
- type: precision
value: 87.7
- type: recall
value: 91
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.3
- type: precision
value: 94.76666666666667
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.1
- type: f1
value: 7.001008218834307
- type: precision
value: 6.708329562594269
- type: recall
value: 8.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.1313672922252
- type: f1
value: 84.09070598748882
- type: precision
value: 82.79171454104429
- type: recall
value: 87.1313672922252
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.73333333333332
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.29249011857708
- type: f1
value: 36.981018542283365
- type: precision
value: 35.415877813576024
- type: recall
value: 42.29249011857708
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.80281690140845
- type: f1
value: 80.86854460093896
- type: precision
value: 79.60093896713614
- type: recall
value: 83.80281690140845
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.26946107784431
- type: f1
value: 39.80235464678088
- type: precision
value: 38.14342660001342
- type: recall
value: 45.26946107784431
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.9
- type: precision
value: 92.26666666666668
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.93103448275862
- type: f1
value: 33.15192743764172
- type: precision
value: 31.57456528146183
- type: recall
value: 37.93103448275862
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.01408450704226
- type: f1
value: 63.41549295774648
- type: precision
value: 61.342778895595806
- type: recall
value: 69.01408450704226
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.66666666666667
- type: f1
value: 71.60705960705961
- type: precision
value: 69.60683760683762
- type: recall
value: 76.66666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.48333333333333
- type: precision
value: 93.83333333333333
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.81837160751566
- type: f1
value: 48.435977731384824
- type: precision
value: 47.11291973845539
- type: recall
value: 52.81837160751566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.9
- type: f1
value: 38.88962621607783
- type: precision
value: 36.95936507936508
- type: recall
value: 44.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.55374592833876
- type: f1
value: 88.22553125484721
- type: precision
value: 87.26927252985884
- type: recall
value: 90.55374592833876
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.13333333333333
- type: precision
value: 92.45333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.99666666666667
- type: precision
value: 91.26666666666668
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.03937007874016
- type: f1
value: 81.75853018372703
- type: precision
value: 80.34120734908137
- type: recall
value: 85.03937007874016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.3
- type: f1
value: 85.5
- type: precision
value: 84.25833333333334
- type: recall
value: 88.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.51246537396122
- type: f1
value: 60.02297410192148
- type: precision
value: 58.133467727289236
- type: recall
value: 65.51246537396122
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.89
- type: precision
value: 94.39166666666667
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.692307692307686
- type: f1
value: 53.162393162393165
- type: precision
value: 51.70673076923077
- type: recall
value: 57.692307692307686
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.60000000000001
- type: f1
value: 89.21190476190475
- type: precision
value: 88.08666666666667
- type: recall
value: 91.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88
- type: f1
value: 85.47
- type: precision
value: 84.43266233766234
- type: recall
value: 88
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 90.64999999999999
- type: precision
value: 89.68333333333332
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.30660377358491
- type: f1
value: 76.33044137466307
- type: precision
value: 74.78970125786164
- type: recall
value: 80.30660377358491
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.44
- type: precision
value: 94.99166666666666
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.53284671532847
- type: f1
value: 95.37712895377129
- type: precision
value: 94.7992700729927
- type: recall
value: 96.53284671532847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89
- type: f1
value: 86.23190476190476
- type: precision
value: 85.035
- type: recall
value: 89
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.585
- type: map_at_10
value: 9.012
- type: map_at_100
value: 14.027000000000001
- type: map_at_1000
value: 15.565000000000001
- type: map_at_3
value: 5.032
- type: map_at_5
value: 6.657
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 45.377
- type: mrr_at_100
value: 46.119
- type: mrr_at_1000
value: 46.127
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 42.585
- type: ndcg_at_1
value: 27.551
- type: ndcg_at_10
value: 23.395
- type: ndcg_at_100
value: 33.342
- type: ndcg_at_1000
value: 45.523
- type: ndcg_at_3
value: 25.158
- type: ndcg_at_5
value: 23.427
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 21.429000000000002
- type: precision_at_100
value: 6.714
- type: precision_at_1000
value: 1.473
- type: precision_at_3
value: 27.211000000000002
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.585
- type: recall_at_10
value: 15.418999999999999
- type: recall_at_100
value: 42.485
- type: recall_at_1000
value: 79.536
- type: recall_at_3
value: 6.239999999999999
- type: recall_at_5
value: 8.996
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.3234
- type: ap
value: 14.361688653847423
- type: f1
value: 54.819068624319044
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.97792869269949
- type: f1
value: 62.28965628513728
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 38.90540145385218
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.53513739047506
- type: cos_sim_ap
value: 75.27741586677557
- type: cos_sim_f1
value: 69.18792902473774
- type: cos_sim_precision
value: 67.94708725515136
- type: cos_sim_recall
value: 70.47493403693932
- type: dot_accuracy
value: 84.7052512368123
- type: dot_ap
value: 69.36075482849378
- type: dot_f1
value: 64.44688376631296
- type: dot_precision
value: 59.92288500793831
- type: dot_recall
value: 69.70976253298153
- type: euclidean_accuracy
value: 86.60666388508076
- type: euclidean_ap
value: 75.47512772621097
- type: euclidean_f1
value: 69.413872536473
- type: euclidean_precision
value: 67.39562624254472
- type: euclidean_recall
value: 71.55672823218997
- type: manhattan_accuracy
value: 86.52917684925792
- type: manhattan_ap
value: 75.34000110496703
- type: manhattan_f1
value: 69.28489190226429
- type: manhattan_precision
value: 67.24608889992551
- type: manhattan_recall
value: 71.45118733509234
- type: max_accuracy
value: 86.60666388508076
- type: max_ap
value: 75.47512772621097
- type: max_f1
value: 69.413872536473
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.01695967710637
- type: cos_sim_ap
value: 85.8298270742901
- type: cos_sim_f1
value: 78.46988128389272
- type: cos_sim_precision
value: 74.86017897091722
- type: cos_sim_recall
value: 82.44533415460425
- type: dot_accuracy
value: 88.19420188613343
- type: dot_ap
value: 83.82679165901324
- type: dot_f1
value: 76.55833777304208
- type: dot_precision
value: 75.6884875846501
- type: dot_recall
value: 77.44841392054204
- type: euclidean_accuracy
value: 89.03054294252338
- type: euclidean_ap
value: 85.89089555185325
- type: euclidean_f1
value: 78.62997658079624
- type: euclidean_precision
value: 74.92329149232914
- type: euclidean_recall
value: 82.72251308900523
- type: manhattan_accuracy
value: 89.0266620095471
- type: manhattan_ap
value: 85.86458997929147
- type: manhattan_f1
value: 78.50685331000291
- type: manhattan_precision
value: 74.5499861534201
- type: manhattan_recall
value: 82.90729904527257
- type: max_accuracy
value: 89.03054294252338
- type: max_ap
value: 85.89089555185325
- type: max_f1
value: 78.62997658079624
---
# apto-as/multilingual-e5-large-Q8_0-GGUF
This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo apto-as/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo apto-as/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo apto-as/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo apto-as/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
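Since the underlying `multilingual-e5-large` is an embedding model rather than a text-generation model, the generation-style commands above are mainly useful for smoke-testing the file. Below is a minimal, non-authoritative sketch for computing embeddings with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the `query:`/`passage:` prefixes follow the original E5 model card, and everything else (prompt text, printed dimensionality) is illustrative only.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized GGUF file published in this repo.
model_path = hf_hub_download(
    repo_id="apto-as/multilingual-e5-large-Q8_0-GGUF",
    filename="multilingual-e5-large-q8_0.gguf",
)

# embedding=True switches llama.cpp into embedding mode for this model.
llm = Llama(model_path=model_path, embedding=True)

# E5 models expect "query: " / "passage: " prefixes on the input text.
query_vec = llm.embed("query: how much protein should a female eat")
passage_vec = llm.embed("passage: As a general guideline, adults need about 0.8 g of protein per kg of body weight.")

print(len(query_vec))  # embedding dimensionality (1024 for e5-large)
```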
|
MaziyarPanahi/llama3-8B-aifeifei-1.3-GGUF
|
MaziyarPanahi
| 2024-11-06T13:08:07Z | 42 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:aifeifei798/llama3-8B-aifeifei-1.3",
"base_model:quantized:aifeifei798/llama3-8B-aifeifei-1.3",
"region:us",
"conversational"
] |
text-generation
| 2024-11-06T12:45:32Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: llama3-8B-aifeifei-1.3-GGUF
base_model: aifeifei798/llama3-8B-aifeifei-1.3
inference: false
model_creator: aifeifei798
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/llama3-8B-aifeifei-1.3-GGUF](https://huggingface.co/MaziyarPanahi/llama3-8B-aifeifei-1.3-GGUF)
- Model creator: [aifeifei798](https://huggingface.co/aifeifei798)
- Original model: [aifeifei798/llama3-8B-aifeifei-1.3](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.3)
## Description
[MaziyarPanahi/llama3-8B-aifeifei-1.3-GGUF](https://huggingface.co/MaziyarPanahi/llama3-8B-aifeifei-1.3-GGUF) contains GGUF format model files for [aifeifei798/llama3-8B-aifeifei-1.3](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.3).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF (a minimal llama-cpp-python loading sketch follows the list):
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
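As a concrete illustration of the llama-cpp-python route, here is a minimal hedged sketch. The quant filename below is a placeholder, not a confirmed file in this repository; take the actual name from the "Files and versions" tab.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename below is an assumption -- pick a real .gguf file from this repo.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/llama3-8B-aifeifei-1.3-GGUF",
    filename="llama3-8B-aifeifei-1.3.Q4_K_M.gguf",  # placeholder quant name
)

# Assumes the GGUF carries a chat template; otherwise pass chat_format=... explicitly.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```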
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
ihughes15234/phi35_kp_dpo7epoch_total
|
ihughes15234
| 2024-11-06T13:07:12Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:ihughes15234/phi35_kp_dpo5epoch_total",
"base_model:finetune:ihughes15234/phi35_kp_dpo5epoch_total",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T13:05:11Z |
---
base_model: ihughes15234/phi35_kp_dpo5epoch_total
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model:** ihughes15234/phi35_kp_dpo5epoch_total
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
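The card does not include a usage snippet. As a non-authoritative sketch, the model should load with the standard `transformers` causal-LM API; the chat-template call below assumes the tokenizer ships one (the repo is tagged "conversational", but this is not confirmed here).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ihughes15234/phi35_kp_dpo7epoch_total"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the tokenizer provides a chat template.
messages = [{"role": "user", "content": "Give me a two-sentence summary of DPO."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```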
|
usmanyaqoob/xlm-roberta-base-finetuned-panx-all
|
usmanyaqoob
| 2024-11-06T13:05:35Z | 128 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-06T12:51:16Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1763
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
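Pending a fuller card, here is a minimal inference sketch using the 🤗 `pipeline` API; the example sentence is arbitrary and only illustrates the NER-style output.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="usmanyaqoob/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Jeff Dean travaille chez Google à Zurich."))
```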
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2895 | 1.0 | 835 | 0.1821 | 0.8210 |
| 0.1514 | 2.0 | 1670 | 0.1707 | 0.8444 |
| 0.0966 | 3.0 | 2505 | 0.1763 | 0.8595 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Ivaylo-Iv/ai_train_poc_food_text_classifier
|
Ivaylo-Iv
| 2024-11-06T13:05:17Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T12:51:57Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ai_train_poc_food_text_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai_train_poc_food_text_classifier
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0019
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
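No usage example is provided in this auto-generated card. A minimal hedged sketch with the `pipeline` API follows; the label set is not documented, so the input text is arbitrary and the output labels should be inspected.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ivaylo-Iv/ai_train_poc_food_text_classifier",
)

# The label vocabulary is not documented in this card; inspect the returned labels.
print(classifier("Grilled salmon with lemon butter and asparagus"))
```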
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0005 | 1.0 | 7 | 0.0012 | 1.0 |
| 0.0004 | 2.0 | 14 | 0.0013 | 1.0 |
| 0.0003 | 3.0 | 21 | 0.0014 | 1.0 |
| 0.0003 | 4.0 | 28 | 0.0015 | 1.0 |
| 0.0002 | 5.0 | 35 | 0.0016 | 1.0 |
| 0.0002 | 6.0 | 42 | 0.0017 | 1.0 |
| 0.0002 | 7.0 | 49 | 0.0018 | 1.0 |
| 0.0002 | 8.0 | 56 | 0.0018 | 1.0 |
| 0.0002 | 9.0 | 63 | 0.0019 | 1.0 |
| 0.0002 | 10.0 | 70 | 0.0019 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF
|
mradermacher
| 2024-11-06T12:54:07Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:lodrick-the-lafted/Hermes-100K",
"dataset:garage-bAInd/Open-Platypus",
"dataset:jondurbin/airoboros-3.2",
"base_model:lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B",
"base_model:quantized:lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T08:58:38Z |
---
base_model: lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
datasets:
- lodrick-the-lafted/Hermes-100K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
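For a quick start with one of the quants listed below, here is a minimal llama-cpp-python sketch. The Q4_K_M filename is taken from the table in the next section; llama-cpp-python and huggingface_hub are assumed to be installed, and the prompt format should be taken from the original model's card.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" quant from the table below.
model_path = hf_hub_download(
    repo_id="mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF",
    filename="Grafted-Hermetic-Platypus-B-2x7B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)

# Plain completion call; consult the original model card for its preferred prompt template.
out = llm("The three primary colors are", max_tokens=32)
print(out["choices"][0]["text"])
```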
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-B-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-B-2x7B.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
usmanyaqoob/xlm-roberta-base-finetuned-panx-it
|
usmanyaqoob
| 2024-11-06T12:49:06Z | 125 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-06T12:44:54Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2984
- F1: 0.7862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7571 | 1.0 | 70 | 0.3568 | 0.7210 |
| 0.296 | 2.0 | 140 | 0.2998 | 0.7474 |
| 0.1974 | 3.0 | 210 | 0.2984 | 0.7862 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
oodeh/openshift-qe-r16-a16-epoch5-merged-model
|
oodeh
| 2024-11-06T12:47:53Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-06T12:44:34Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oodeh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
besimray/miner_id_2_c4650a4d-c32d-4e0f-a27c-171ba40b07e6_1730896832
|
besimray
| 2024-11-06T12:47:30Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-06T12:40:32Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: miner_id_2_c4650a4d-c32d-4e0f-a27c-171ba40b07e6_1730896832
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- o1-journey_train_data.json
ds_type: json
path: /workspace/input_data/o1-journey_train_data.json
type:
field_input: question
field_instruction: answer
field_output: longCOT
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 10
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hours_to_complete: 2
hub_model_id: besimray/miner_id_2_c4650a4d-c32d-4e0f-a27c-171ba40b07e6_1730896832
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/o1-journey_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
save_strategy: steps
sequence_len: 4096
started_at: '2024-11-06T12:40:32.335166'
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: besimray24-rayon
wandb_mode: online
wandb_project: Public_TuningSN
wandb_run: miner_id_24
wandb_runid: c4650a4d-c32d-4e0f-a27c-171ba40b07e6
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# miner_id_2_c4650a4d-c32d-4e0f-a27c-171ba40b07e6_1730896832
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7048
## Model description
More information needed
## Intended uses & limitations
More information needed
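Because this repository contains a LoRA adapter (see the axolotl config above) rather than full model weights, the sketch below loads it on top of the stated base model. This is a hedged illustration, not an official snippet; the prompt is arbitrary.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "besimray/miner_id_2_c4650a4d-c32d-4e0f-a27c-171ba40b07e6_1730896832"

# AutoPeftModelForCausalLM reads the adapter config and pulls in the base model
# (unsloth/Llama-3.2-1B-Instruct) automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")

inputs = tokenizer("Question: What is 17 * 3?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```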
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 53
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9692 | 0.0571 | 1 | 1.0109 |
| 0.8542 | 0.5714 | 10 | 0.9411 |
| 0.7241 | 1.1429 | 20 | 0.7844 |
| 0.678 | 1.7143 | 30 | 0.7272 |
| 0.6679 | 2.2857 | 40 | 0.7076 |
| 0.6468 | 2.8571 | 50 | 0.7048 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
hadamelino/vit-gpt-vizwiz-fine-tuned-10k
|
hadamelino
| 2024-11-06T12:44:50Z | 49 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-06T12:30:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
usmanyaqoob/xlm-roberta-base-finetuned-panx-fr
|
usmanyaqoob
| 2024-11-06T12:44:44Z | 125 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-06T12:40:27Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2463
- F1: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5392 | 1.0 | 191 | 0.3301 | 0.7574 |
| 0.245 | 2.0 | 382 | 0.2521 | 0.8340 |
| 0.1605 | 3.0 | 573 | 0.2463 | 0.8537 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
jazzson/adl-hw3-finetune-gemma-2-chinese-kyara-10
|
jazzson
| 2024-11-06T12:44:34Z | 9 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"gemma2",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2024-11-06T10:58:50Z |
---
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: adl-hw3-finetune-gemma-2-chinese-kyara-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# adl-hw3-finetune-gemma-2-chinese-kyara-10
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4673 | 0.7105 | 200 | 2.3491 |
| 2.0993 | 1.4210 | 400 | 2.3077 |
| 1.8831 | 2.1314 | 600 | 2.3793 |
| 1.6486 | 2.8419 | 800 | 2.3754 |
| 1.4189 | 3.5524 | 1000 | 2.5204 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.1
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
HugoVoxx/Gemma-2-2b-it-ag-merged-model
|
HugoVoxx
| 2024-11-06T12:39:57Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-21T14:56:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
projecte-aina/aina-translator-de-ca
|
projecte-aina
| 2024-11-06T12:39:02Z | 11 | 0 |
fairseq
|
[
"fairseq",
"de",
"ca",
"dataset:projecte-aina/CA-DE_Parallel_Corpus",
"doi:10.57967/hf/1930",
"license:apache-2.0",
"region:us"
] | null | 2023-11-27T13:03:49Z |
---
license: apache-2.0
datasets:
- projecte-aina/CA-DE_Parallel_Corpus
language:
- de
- ca
metrics:
- bleu
library_name: fairseq
---
## Projecte Aina’s German-Catalan machine translation model
## Model description
This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of datasets comprising both Catalan-German data
sourced from Opus, and additional datasets where synthetic Catalan was generated from the Spanish side of Spanish-German corpora using
[Projecte Aina’s Spanish-Catalan model](https://huggingface.co/projecte-aina/aina-translator-es-ca). This gave a total of approximately 100 million sentence pairs.
The model is evaluated on the Flores, NTEU and NTREX evaluation sets.
## Intended uses and limitations
You can use this model for machine translation from German to Catalan.
## How to use
### Usage
Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using python
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
model_dir = snapshot_download(repo_id="projecte-aina/aina-translator-de-ca", revision="main")
tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/spm.model")
tokenized=tokenizer.tokenize("Willkommen beim Projekt Aina")
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
However, we are well aware that our models may be biased. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
The model was trained on a combination of the following datasets:
| Datasets |
|-------------------|
| Multi CCAligned |
| WikiMatrix |
| GNOME |
| KDE4 |
| OpenSubtitles |
| GlobalVoices|
| Tatoeba |
| Books |
| Europarl |
| Tilde |
| Multi-Paracawl |
| DGT |
| EU Bookshop |
| NLLB |
| OpenSubtitles |
All corpora except Europarl and Tilde were collected from [Opus](https://opus.nlpl.eu/).
The Europarl and Tilde corpora are a synthetic parallel corpus created from the original Spanish-Catalan corpora by [SoftCatalà](https://github.com/Softcatala).
Where a Spanish-German corpus was used, synthetic Catalan was generated from the Spanish side using
[Projecte Aina’s Spanish-Catalan model](https://huggingface.co/projecte-aina/aina-translator-es-ca).
### Training procedure
### Data preparation
All datasets are deduplicated, filtered for language identification, and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of 6.258.272 sentence pairs. Before training, the punctuation is normalized using a
modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
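The similarity filter described above can be reproduced, in spirit, with a few lines of sentence-transformers code. The sketch below only illustrates the 0.75 cosine-similarity threshold with LaBSE embeddings; it is not the project's actual filtering pipeline, and the example sentence pairs are invented.
```python
from sentence_transformers import SentenceTransformer, util

labse = SentenceTransformer("sentence-transformers/LaBSE")

pairs = [
    ("Willkommen beim Projekt Aina", "Benvinguts al projecte Aina"),
    ("Das ist ein Satz.", "Això no té res a veure."),  # likely below the threshold
]

# Encode each side of the parallel corpus with normalized embeddings.
de = labse.encode([p[0] for p in pairs], convert_to_tensor=True, normalize_embeddings=True)
ca = labse.encode([p[1] for p in pairs], convert_to_tensor=True, normalize_embeddings=True)

# Keep only pairs whose cross-lingual cosine similarity reaches 0.75.
kept = [pair for pair, sim in zip(pairs, util.cos_sim(de, ca).diagonal()) if sim >= 0.75]
print(kept)
```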
#### Tokenization
All data is tokenized using sentencepiece, with a 50 thousand token sentencepiece model learned from the combination of all filtered training data.
This model is included.
#### Hyperparameters
The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf)
The following hyperparameters were set on the Fairseq toolkit:
| Hyperparameter | Value |
|------------------------------------|----------------------------------|
| Architecture | transformer_vaswani_wmt_en_de_big |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 48.000 |
| Optimizer | adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 5e-4 |
| Lr. scheduler | inverse sqrt |
| Warmup updates | 8000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
The model was trained for a total of 29.000 updates. Weights were saved every 1000 updates and reported results are the average of the last 2 checkpoints.
## Evaluation
### Variable and metrics
We use the BLEU score for evaluation on the [Flores-101](https://github.com/facebookresearch/flores), NTEU (unpublished) and [NTREX](https://github.com/MicrosoftTranslator/NTREX) test sets.
### Evaluation results
Below are the evaluation results for machine translation from German to Catalan, compared to [Softcatalà](https://www.softcatala.org/) and
[Google Translate](https://translate.google.es/?hl=es):
| Test set | SoftCatalà | Google Translate | aina-translator-de-ca |
|----------------------|------------|------------------|---------------|
| Flores 101 dev | 28,9 | **35,1** | 33,1 |
| Flores 101 devtest |29,2 | **35,9** | 33,2 |
| NTEU | 38,9 | 39,1 | **42,9** |
| NTREX | 25,7 | **31,2** | 29,1 |
| **Average** | 30,7 | **35,3** | 34,3 |
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <langtech@bsc.es>.
### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center)
be liable for any results arising from the use made by third parties.
</details>
|
sweetssweets/NMRExtractor
|
sweetssweets
| 2024-11-06T12:38:21Z | 6 | 1 | null |
[
"safetensors",
"mistral",
"license:mit",
"region:us"
] | null | 2024-08-17T10:07:51Z |
---
license: mit
---
## For a detailed description of the model, please refer to:
https://github.com/eat-sugar/NMRExtractor
|
Xu-Ouyang/pythia-12b-deduped-int8-step1000-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-06T12:33:48Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-06T12:23:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PranavSharma10/Qwen-2.5-7B-FineTunedInstruct
|
PranavSharma10
| 2024-11-06T12:28:46Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"llama-factory",
"license:mit",
"region:us"
] | null | 2024-11-06T12:00:18Z |
---
license: mit
tags:
- llama-factory
---
|
MaziyarPanahi/SD3_prompt-llama_8b-GGUF
|
MaziyarPanahi
| 2024-11-06T12:24:51Z | 96 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:matrixglitch/SD3_prompt-llama_8b",
"base_model:quantized:matrixglitch/SD3_prompt-llama_8b",
"region:us",
"conversational"
] |
text-generation
| 2024-11-06T12:01:42Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: SD3_prompt-llama_8b-GGUF
base_model: matrixglitch/SD3_prompt-llama_8b
inference: false
model_creator: matrixglitch
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/SD3_prompt-llama_8b-GGUF](https://huggingface.co/MaziyarPanahi/SD3_prompt-llama_8b-GGUF)
- Model creator: [matrixglitch](https://huggingface.co/matrixglitch)
- Original model: [matrixglitch/SD3_prompt-llama_8b](https://huggingface.co/matrixglitch/SD3_prompt-llama_8b)
## Description
[MaziyarPanahi/SD3_prompt-llama_8b-GGUF](https://huggingface.co/MaziyarPanahi/SD3_prompt-llama_8b-GGUF) contains GGUF format model files for [matrixglitch/SD3_prompt-llama_8b](https://huggingface.co/matrixglitch/SD3_prompt-llama_8b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
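As a quick start with one of the clients listed above, here is a minimal, hedged sketch using `llama-cpp-python` (the GGUF file name is a placeholder for whichever quantization you download from this repository).
```python
from llama_cpp import Llama

# The path is a placeholder; point it at any GGUF file downloaded from this repo.
llm = Llama(model_path="SD3_prompt-llama_8b.Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "Describe a serene mountain lake at sunrise.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```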
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
usmanyaqoob/xlm-roberta-base-finetuned-panx-de-fr
|
usmanyaqoob
| 2024-11-06T12:23:00Z | 135 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-06T10:35:51Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1566
- F1: 0.8673
## Model description
More information needed
## Intended uses & limitations
More information needed
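The card does not include an inference example; the following is a minimal, hedged sketch for running NER with this checkpoint (the entity label set is not documented here, so the output tags are whatever the fine-tuned head produces).
```python
from transformers import pipeline

# Minimal sketch; the checkpoint name is this repository's model id.
ner = pipeline(
    "token-classification",
    model="usmanyaqoob/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)

print(ner("Angela Merkel besuchte Paris im Juli."))
```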
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2805 | 1.0 | 715 | 0.1776 | 0.8182 |
| 0.1446 | 2.0 | 1430 | 0.1523 | 0.8533 |
| 0.0915 | 3.0 | 2145 | 0.1566 | 0.8673 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Totototo/bert-base-uncased-finetuned-sst2
|
Totototo
| 2024-11-06T12:21:24Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T12:17:46Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5682
- Accuracy: 0.8911
## Model description
More information needed
## Intended uses & limitations
More information needed
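No usage snippet is provided; the following is a minimal, hedged sketch for sentiment classification with this checkpoint (the label names returned depend on the training configuration and are not documented in the card).
```python
from transformers import pipeline

# Minimal sketch; the checkpoint name is this repository's model id.
classifier = pipeline(
    "text-classification",
    model="Totototo/bert-base-uncased-finetuned-sst2",
)

print(classifier("A genuinely moving film with terrific performances."))
```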
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.3307 | 0.8716 |
| 0.353 | 2.0 | 500 | 0.3771 | 0.8968 |
| 0.353 | 3.0 | 750 | 0.5682 | 0.8911 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
elliotthwang/KimLan-3.2-3B
|
elliotthwang
| 2024-11-06T12:15:09Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T12:08:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
paipaimask/paipai_qwen_2-7B-it_04
|
paipaimask
| 2024-11-06T12:06:41Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T10:47:53Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** paipaimask
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
superllms/Qwen2.5-7B-Instruct_906de368
|
superllms
| 2024-11-06T12:01:32Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T11:42:47Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** superllms
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
usmanyaqoob/xlm-roberta-base-finetuned-panx-de
|
usmanyaqoob
| 2024-11-06T12:00:33Z | 135 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-27T16:17:22Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1299
- F1: 0.8683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2569 | 1.0 | 525 | 0.1567 | 0.8133 |
| 0.1308 | 2.0 | 1050 | 0.1372 | 0.8526 |
| 0.0821 | 3.0 | 1575 | 0.1299 | 0.8683 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf
|
RichardErkhov
| 2024-11-06T11:57:00Z | 21 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T08:28:37Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deep-solar-v2.0.7 - GGUF
- Model creator: https://huggingface.co/Deepnoid/
- Original model: https://huggingface.co/Deepnoid/deep-solar-v2.0.7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deep-solar-v2.0.7.Q2_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q2_K.gguf) | Q2_K | 3.73GB |
| [deep-solar-v2.0.7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [deep-solar-v2.0.7.Q3_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q3_K.gguf) | Q3_K | 4.84GB |
| [deep-solar-v2.0.7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [deep-solar-v2.0.7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [deep-solar-v2.0.7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [deep-solar-v2.0.7.Q4_0.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q4_0.gguf) | Q4_0 | 5.66GB |
| [deep-solar-v2.0.7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [deep-solar-v2.0.7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [deep-solar-v2.0.7.Q4_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q4_K.gguf) | Q4_K | 6.02GB |
| [deep-solar-v2.0.7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [deep-solar-v2.0.7.Q4_1.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q4_1.gguf) | Q4_1 | 6.27GB |
| [deep-solar-v2.0.7.Q5_0.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q5_0.gguf) | Q5_0 | 6.89GB |
| [deep-solar-v2.0.7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [deep-solar-v2.0.7.Q5_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q5_K.gguf) | Q5_K | 7.08GB |
| [deep-solar-v2.0.7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [deep-solar-v2.0.7.Q5_1.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q5_1.gguf) | Q5_1 | 7.51GB |
| [deep-solar-v2.0.7.Q6_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q6_K.gguf) | Q6_K | 8.2GB |
| [deep-solar-v2.0.7.Q8_0.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-v2.0.7-gguf/blob/main/deep-solar-v2.0.7.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
mradermacher/Eclipse-13B-i1-GGUF
|
mradermacher
| 2024-11-06T11:56:11Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"Eclipse-7B",
"en",
"base_model:Xenon1/Eclipse-13B",
"base_model:quantized:Xenon1/Eclipse-13B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-05T23:35:33Z |
---
base_model: Xenon1/Eclipse-13B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mistral
- Eclipse-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Xenon1/Eclipse-13B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Eclipse-13B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.4 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.4 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.4 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q4_0.gguf) | i1-Q4_0 | 7.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Eclipse-13B-i1-GGUF/resolve/main/Eclipse-13B.i1-Q6_K.gguf) | i1-Q6_K | 10.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF
|
featherless-ai-quants
| 2024-11-06T11:53:40Z | 10 | 0 | null |
[
"gguf",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T10:21:41Z |
---
base_model: tokyotech-llm-Llama-3-Swallow-8B-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# tokyotech-llm-Llama-3-Swallow-8B-v0.1 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/tokyotech-llm-Llama-3-Swallow-8B-v0.1-GGUF/blob/main/tokyotech-llm-Llama-3-Swallow-8B-v0.1-Q8_0.gguf) | 8145.11 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
nishhaaann/bertTunedBoxA
|
nishhaaann
| 2024-11-06T11:53:11Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T11:52:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Primeness/deeznootz013
|
Primeness
| 2024-11-06T11:44:44Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T10:40:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sairanaz/finetuning-sentiment-model-3000-samples
|
sairanaz
| 2024-11-06T11:43:35Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T10:18:02Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3324
- Accuracy: 0.8867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
MaziyarPanahi/WASSA2024_EmpathyDetection_Chinchunmei_EXP305-GGUF
|
MaziyarPanahi
| 2024-11-06T11:42:23Z | 28 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:RicardoLee/WASSA2024_EmpathyDetection_Chinchunmei_EXP305",
"base_model:quantized:RicardoLee/WASSA2024_EmpathyDetection_Chinchunmei_EXP305",
"region:us",
"conversational"
] |
text-generation
| 2024-11-06T11:19:43Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: WASSA2024_EmpathyDetection_Chinchunmei_EXP305-GGUF
base_model: RicardoLee/WASSA2024_EmpathyDetection_Chinchunmei_EXP305
inference: false
model_creator: RicardoLee
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/WASSA2024_EmpathyDetection_Chinchunmei_EXP305-GGUF](https://huggingface.co/MaziyarPanahi/WASSA2024_EmpathyDetection_Chinchunmei_EXP305-GGUF)
- Model creator: [RicardoLee](https://huggingface.co/RicardoLee)
- Original model: [RicardoLee/WASSA2024_EmpathyDetection_Chinchunmei_EXP305](https://huggingface.co/RicardoLee/WASSA2024_EmpathyDetection_Chinchunmei_EXP305)
## Description
[MaziyarPanahi/WASSA2024_EmpathyDetection_Chinchunmei_EXP305-GGUF](https://huggingface.co/MaziyarPanahi/WASSA2024_EmpathyDetection_Chinchunmei_EXP305-GGUF) contains GGUF format model files for [RicardoLee/WASSA2024_EmpathyDetection_Chinchunmei_EXP305](https://huggingface.co/RicardoLee/WASSA2024_EmpathyDetection_Chinchunmei_EXP305).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
Mykes/med_ner_SDDCS
|
Mykes
| 2024-11-06T11:33:36Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"biology",
"medical",
"ru",
"dataset:Mykes/patient_queries_ner_SDDCS",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-03T20:37:12Z |
---
base_model:
- google-bert/bert-base-multilingual-uncased
datasets:
- Mykes/patient_queries_ner_SDDCS
language:
- ru
library_name: transformers
pipeline_tag: token-classification
tags:
- biology
- medical
---

# med_ner_SDDCS
SDDCS is an abbreviation for the NER entities SYMPTOMS, DISEASES, DRUGS, CITIES, and SUBWAY STATIONS (additionally, the model can also predict GENDER and AGE entities).
This is a fine-tuned Named Entity Recognition (NER) model based on the [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) model, designed to detect Russian medical entities such as diseases, drugs, symptoms, and more.
# Model Details
- Model Name: med_ner_SDDCS
- Base Model: google-bert/bert-base-multilingual-uncased
- Fine-tuned on: Medical NER data
## Entities Recognized:
- GENDER (e.g., женщина, мужчина)
- DISEASE (e.g., паническое расстройство, грипп, ...)
- SYMPTOM (e.g., тревога, одышка, ...)
- SPECIALITY (e.g., невролог, кардиолог, ...)
- CITY (e.g., Тула, Москва, Иркутск, ...)
- SUBWAY (e.g., Шоссе Энтузиастов, Проспект Мира, ...)
- DRUG (e.g., кардиомагнил, ципралекс)
- AGE (e.g., ребенок, пожилой)
## Model Performance
The fine-tuned model has achieved the following performance metrics:
```
precision recall f1-score support
AGE 0.99 1.00 0.99 706
CITY 0.99 1.00 1.00 2370
DISEASE 0.99 1.00 0.99 4841
DRUG 0.99 1.00 0.99 4546
GENDER 0.99 1.00 1.00 476
SPECIALITY 0.98 0.96 0.97 3673
SUBWAY 1.00 1.00 1.00 658
SYMPTOM 0.99 0.99 0.99 8022
micro avg 0.99 0.99 0.99 25292
macro avg 0.99 0.99 0.99 25292
weighted avg 0.99 0.99 0.99 25292
```
## How to Use
You can use this model with the transformers library to perform Named Entity Recognition (NER) tasks in the Russian medical domain, mainly for patient queries. Here's how to load and use the model:
# Load the tokenizer and model
```
from transformers import pipeline
pipe = pipeline(task="ner", model='Mykes/med_ner_SDDCS', tokenizer='Mykes/med_ner_SDDCS', aggregation_strategy="max")
# I made the misspelled words on purpose
query = "У ребенка треога и норушения сна, потеря сознания, раньше ставили паническое расстройство. Подскажи хорошего психотервта в районе метро Октбрьской."
pipe(query.lower())
```
Result:
```
[{'entity_group': 'AGE',
'score': 0.9992663,
'word': 'ребенка',
'start': 2,
'end': 9},
{'entity_group': 'SYMPTOM',
'score': 0.9997758,
'word': 'треога',
'start': 10,
'end': 16},
{'entity_group': 'SYMPTOM',
'score': 0.9997876,
'word': 'норушения сна',
'start': 19,
'end': 32},
{'entity_group': 'SYMPTOM',
'score': 0.999773,
'word': 'потеря сознания',
'start': 34,
'end': 49},
{'entity_group': 'DISEASE',
'score': 0.9996424,
'word': 'паническое расстройство',
'start': 66,
'end': 89},
{'entity_group': 'SUBWAY',
'score': 0.99918646,
'word': 'октбрьской',
'start': 136,
'end': 146}]
```
## Code for visualization
```
import spacy
from spacy import displacy
def convert_to_displacy_format(text, ner_results):
    entities = []
    for result in ner_results:
        # Convert the Hugging Face output into the format displacy expects
        entities.append({
            "start": result['start'],
            "end": result['end'],
            "label": result['entity_group']
        })
    return {
        "text": text,
        "ents": entities,
        "title": None
    }

query = "У ребенка треога и норушения сна, потеря сознания, раньше ставили паническое расстройство. Подскажи хорошего психиатра в районе метро Октбрьской."
ner_results = pipe(query.lower())
displacy_data = convert_to_displacy_format(query, ner_results)

colors = {
    "SPECIALITY": "linear-gradient(90deg, #aa9cfc, #fc9ce7)",
    "CITY": "linear-gradient(90deg, #feca57, #ff9f43)",
    "DRUG": "linear-gradient(90deg, #55efc4, #81ecec)",
    "DISEASE": "linear-gradient(90deg, #fab1a0, #ff7675)",
    "SUBWAY": "linear-gradient(90deg, #00add0, #0039a6)",
    "AGE": "linear-gradient(90deg, #f39c12, #e67e22)",
    "SYMPTOM": "linear-gradient(90deg, #e74c3c, #c0392b)"
}
options = {"ents": ["SPECIALITY", "CITY", "DRUG", "DISEASE", "SYMPTOM", "AGE", "SUBWAY"], "colors": colors}

html = displacy.render(displacy_data, style="ent", manual=True, options=options, jupyter=False)
with open("ner_visualization_with_colors.html", "w", encoding="utf-8") as f:
    f.write(html)
from IPython.display import display, HTML
display(HTML(html))
```
|
atocadocoelhodealice/ai-metaglasses
|
atocadocoelhodealice
| 2024-11-06T11:32:56Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-06T04:01:29Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AIMETAGLASSES
---
# Ai Metaglasses
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AIMETAGLASSES` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('atocadocoelhodealice/ai-metaglasses', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
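For example, a call that includes the trigger word could look like this (the prompt is purely illustrative, not taken from the training data):
```py
# Reuses the `pipeline` object loaded above; the prompt below is a hypothetical example.
image = pipeline('a product photo of AIMETAGLASSES smart glasses on a wooden desk, soft studio light').images[0]
image.save('ai-metaglasses.png')
```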
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ProfessorSeemo/cyberattack-pattern-detector
|
ProfessorSeemo
| 2024-11-06T11:27:23Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T18:07:25Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: cyberattack-pattern-detector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cyberattack-pattern-detector
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
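For reference, the settings above correspond roughly to the following `TrainingArguments` sketch (not the original training script; `output_dir` is illustrative, and the Adam betas/epsilon are the listed values, which match the library defaults):
```python
from transformers import TrainingArguments

# Approximate restatement of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="cyberattack-pattern-detector",  # illustrative
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```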
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1
|
mirzabeg14/finetuning-sentiment-model-3000-samples
|
mirzabeg14
| 2024-11-06T11:19:13Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T10:32:26Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3497
- Accuracy: 0.86
- F1: 0.8627
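A minimal inference sketch (the label names may be the generic `LABEL_0`/`LABEL_1` if `id2label` was not customised during fine-tuning):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mirzabeg14/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was surprisingly good!"))
```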
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
lm-kit/nomic-embed-text-1.5
|
lm-kit
| 2024-11-06T11:17:16Z | 337 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2024-11-05T14:22:55Z |
---
license: apache-2.0
---
## Model Summary
This repository hosts quantized versions of the nomic-embed-text-v1.5 embedding model.
**Format:** GGUF
**Converter:** llama.cpp 94d8cb8be13b7c4d04eeca5a2b956b9148e6f222
**Quantizer:** LM-Kit.NET 2024.11.1
For more detailed information on the base model, please visit the following link:
- [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)
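As a quick sanity check, these GGUF files should load in any llama.cpp-compatible runtime. The sketch below uses llama-cpp-python; the file name is a placeholder, so substitute the quant you actually downloaded from this repository:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder file name; point this at the GGUF quant downloaded from this repository.
llm = Llama(model_path="nomic-embed-text-1.5.Q4_K_M.gguf", embedding=True)

# Per the base model card linked above, nomic-embed-text-v1.5 expects a task prefix
# such as "search_query: " or "search_document: ".
resp = llm.create_embedding("search_query: how do I quantize an embedding model?")
print(len(resp["data"][0]["embedding"]))
```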
|
mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF
|
mradermacher
| 2024-11-06T11:07:12Z | 22 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:lodrick-the-lafted/Hermes-40K",
"dataset:garage-bAInd/Open-Platypus",
"dataset:jondurbin/airoboros-3.2",
"base_model:lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B",
"base_model:quantized:lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T06:12:07Z |
---
base_model: lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
datasets:
- lodrick-the-lafted/Hermes-40K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
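As a minimal sketch (not an official recipe), a single-file quant from the table below can be fetched and run with llama-cpp-python:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is one of the "fast, recommended" options in the table below.
path = hf_hub_download(
    repo_id="mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF",
    filename="Grafted-Hermetic-Platypus-D-2x7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a mixture-of-experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```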
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.4 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Grafted-Hermetic-Platypus-D-2x7B-GGUF/resolve/main/Grafted-Hermetic-Platypus-D-2x7B.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kavish218/enhanced_finetuned_llama_3_2_1B_multi_domain_4
|
kavish218
| 2024-11-06T10:59:45Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T10:58:20Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WeiChow/moge_l_vit
|
WeiChow
| 2024-11-06T10:57:40Z | 10 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2024-11-06T10:25:24Z |
Same architecture as [timm/vit_large_patch14_dinov2.lvd142m](https://huggingface.co/timm/vit_large_patch14_dinov2.lvd142m)
```shell
git clone https://github.com/microsoft/MoGe.git
cd MoGe
```
# Converting the weights
```python
import torch
from moge.model import MoGeModel
device = torch.device("cuda")
model = MoGeModel.from_pretrained("Ruicheng/moge-vitl").to(device)
# -------------------
backbone_state_dict = model.backbone.state_dict()
filtered_state_dict = {k: v for k, v in backbone_state_dict.items() if 'mask_token' not in k}
torch.save(filtered_state_dict, "pytorch_model.bin")
# -------------------
```
# Usage
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hf_hub:WeiChow/moge_l_vit',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
# for name, param in model.named_parameters():
# print(f"Parameter: {name} - Size: {param.size()} - Total Elements: {param.numel()}")
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1374, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
print(output)
```
Copyright reserved.
|
kiranshivaraju/swin-tiny-patch4-window7-224-finetuned-aug-pcb
|
kiranshivaraju
| 2024-11-06T10:57:27Z | 214 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-06T10:48:40Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-aug-pcb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-aug-pcb
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5192
- Accuracy: 0.7465
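A minimal inference sketch (the class labels depend on the undocumented training dataset, so interpret the predicted labels accordingly):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="kiranshivaraju/swin-tiny-patch4-window7-224-finetuned-aug-pcb",
)
# Accepts a local path, a URL, or a PIL.Image; the file name here is illustrative.
print(classifier("path/to/pcb_image.jpg"))
```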
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5945 | 1.0 | 15 | 0.5192 | 0.7465 |
| 0.5435 | 2.0 | 30 | 0.4960 | 0.7230 |
| 0.5187 | 3.0 | 45 | 0.4820 | 0.7418 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
ngwgsang/bartpho-syllable-large-vimex
|
ngwgsang
| 2024-11-06T10:51:51Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-06T10:50:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
avinasht/gemma-2b-it-QA-Tuned
|
avinasht
| 2024-11-06T10:49:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-27T18:30:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF
|
mradermacher
| 2024-11-06T10:48:15Z | 106 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:practical-dreamer/RPGPT_PublicDomain-alpaca",
"dataset:shuyuej/metamath_gsm8k",
"dataset:NeuralNovel/Neural-DPO",
"base_model:NeuralNovel/Senzu-7B-v0.1-DPO",
"base_model:quantized:NeuralNovel/Senzu-7B-v0.1-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-06T09:37:20Z |
---
base_model: NeuralNovel/Senzu-7B-v0.1-DPO
datasets:
- practical-dreamer/RPGPT_PublicDomain-alpaca
- shuyuej/metamath_gsm8k
- NeuralNovel/Neural-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/NeuralNovel/Senzu-7B-v0.1-DPO
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Senzu-7B-v0.1-DPO-i1-GGUF/resolve/main/Senzu-7B-v0.1-DPO.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kaichin/finetuning-sentiment-model-3000-samples
|
kaichin
| 2024-11-06T10:45:32Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T10:30:35Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3121
- Accuracy: 0.88
- F1: 0.8831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf
|
RichardErkhov
| 2024-11-06T10:38:24Z | 23 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T06:32:55Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deep-solar-Rev-v3.0.4 - GGUF
- Model creator: https://huggingface.co/Deepnoid/
- Original model: https://huggingface.co/Deepnoid/deep-solar-Rev-v3.0.4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deep-solar-Rev-v3.0.4.Q2_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q2_K.gguf) | Q2_K | 3.73GB |
| [deep-solar-Rev-v3.0.4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [deep-solar-Rev-v3.0.4.Q3_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q3_K.gguf) | Q3_K | 4.84GB |
| [deep-solar-Rev-v3.0.4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [deep-solar-Rev-v3.0.4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [deep-solar-Rev-v3.0.4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [deep-solar-Rev-v3.0.4.Q4_0.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q4_0.gguf) | Q4_0 | 5.66GB |
| [deep-solar-Rev-v3.0.4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [deep-solar-Rev-v3.0.4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [deep-solar-Rev-v3.0.4.Q4_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q4_K.gguf) | Q4_K | 6.02GB |
| [deep-solar-Rev-v3.0.4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [deep-solar-Rev-v3.0.4.Q4_1.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q4_1.gguf) | Q4_1 | 6.27GB |
| [deep-solar-Rev-v3.0.4.Q5_0.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q5_0.gguf) | Q5_0 | 6.89GB |
| [deep-solar-Rev-v3.0.4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [deep-solar-Rev-v3.0.4.Q5_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q5_K.gguf) | Q5_K | 7.08GB |
| [deep-solar-Rev-v3.0.4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [deep-solar-Rev-v3.0.4.Q5_1.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q5_1.gguf) | Q5_1 | 7.51GB |
| [deep-solar-Rev-v3.0.4.Q6_K.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q6_K.gguf) | Q6_K | 8.2GB |
| [deep-solar-Rev-v3.0.4.Q8_0.gguf](https://huggingface.co/RichardErkhov/Deepnoid_-_deep-solar-Rev-v3.0.4-gguf/blob/main/deep-solar-Rev-v3.0.4.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
cuongdev/tonghop-v3-5000
|
cuongdev
| 2024-11-06T10:36:54Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-11-06T10:32:57Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### tonghop-v3-5000 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
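The checkpoint can also be loaded directly with diffusers. A minimal sketch follows; the instance/concept token is not documented in this card, so replace the prompt with the token used during training:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/tonghop-v3-5000", torch_dtype=torch.float16
).to("cuda")

# Replace with the instance prompt/token used during DreamBooth training.
image = pipe("a photo of <your concept token>, high quality").images[0]
image.save("sample.png")
```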
Sample pictures of this concept:
|
nishhaaann/dbTuneDemo
|
nishhaaann
| 2024-11-06T10:33:25Z | 163 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T06:28:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neopolita/toolace-8b-gguf
|
neopolita
| 2024-11-06T10:31:04Z | 18 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T09:58:44Z |
---
{}
---
# GGUF quants for [**Team-ACE/ToolACE-8B**](https://huggingface.co/Team-ACE/ToolACE-8B) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/Team-ACE/ToolACE-8B)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
## Quants
* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
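A minimal usage sketch with llama-cpp-python (the file name is a placeholder for whichever quant above you downloaded; the chat template is inherited from the original ToolACE-8B model):
```python
from llama_cpp import Llama

# Placeholder path; point this at the quant you downloaded from this repository.
llm = Llama(model_path="toolace-8b.q4_k_m.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant that can call tools."},
        {"role": "user", "content": "What's the weather like in Paris today?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```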
|