| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string (5-139 chars) | string (2-42 chars) | timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-12 12:31:00) | int64 (0 to 223M) | int64 (0 to 11.7k) | string (555 classes) | list (1 to 4.05k tags) | string (55 classes) | timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-12 12:28:53) | string (11 to 1.01M chars) |
SHENMU007/neunit_BASE_V12.5
|
SHENMU007
| 2023-07-12T23:39:19Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-12T20:39:15Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
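For readers reproducing this setup, here is a minimal sketch of how the settings above map onto `transformers` `Seq2SeqTrainingArguments`; the `output_dir` name and any argument not listed above are assumptions, not taken from this card.
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_voxpopuli_zh",  # assumed name, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 * 4 = total train batch size of 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
)
```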
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ayanban011/vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
|
ayanban011
| 2023-07-12T23:33:16Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T18:58:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0411
- Accuracy: 0.8333
- Brier Loss: 0.3084
- Nll: 1.3568
- F1 Micro: 0.8333
- F1 Macro: 0.8183
- Ece: 0.1563
- Aurc: 0.0847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.99 | 43 | 0.7544 | 0.7960 | 0.3088 | 1.3391 | 0.7960 | 0.7715 | 0.1991 | 0.0817 |
| No log | 2.0 | 87 | 0.7158 | 0.8218 | 0.2920 | 1.1888 | 0.8218 | 0.7941 | 0.1863 | 0.0741 |
| No log | 2.98 | 130 | 0.7144 | 0.7989 | 0.2932 | 1.2958 | 0.7989 | 0.7701 | 0.1628 | 0.0749 |
| No log | 3.99 | 174 | 0.6762 | 0.8305 | 0.2749 | 1.1916 | 0.8305 | 0.8076 | 0.1844 | 0.0678 |
| No log | 4.98 | 217 | 0.6710 | 0.8362 | 0.2745 | 1.0739 | 0.8362 | 0.8076 | 0.1696 | 0.0664 |
| No log | 5.99 | 261 | 0.6532 | 0.8362 | 0.2675 | 1.0011 | 0.8362 | 0.8115 | 0.1750 | 0.0602 |
| No log | 6.98 | 304 | 0.6404 | 0.8362 | 0.2635 | 1.0072 | 0.8362 | 0.8106 | 0.1714 | 0.0633 |
| No log | 7.99 | 348 | 0.6635 | 0.8218 | 0.2707 | 1.0903 | 0.8218 | 0.8030 | 0.1513 | 0.0770 |
| No log | 9.0 | 392 | 0.6167 | 0.8420 | 0.2534 | 1.0176 | 0.8420 | 0.8259 | 0.1613 | 0.0796 |
| No log | 9.99 | 435 | 0.6496 | 0.8276 | 0.2703 | 0.9646 | 0.8276 | 0.8085 | 0.1643 | 0.0588 |
| No log | 11.0 | 479 | 0.6091 | 0.8506 | 0.2467 | 1.1036 | 0.8506 | 0.8308 | 0.1483 | 0.0650 |
| 0.4309 | 11.98 | 522 | 0.6075 | 0.8420 | 0.2483 | 0.9144 | 0.8420 | 0.8246 | 0.1391 | 0.0519 |
| 0.4309 | 12.99 | 566 | 0.6164 | 0.8276 | 0.2576 | 0.9703 | 0.8276 | 0.8092 | 0.1467 | 0.0645 |
| 0.4309 | 13.98 | 609 | 0.5893 | 0.8592 | 0.2347 | 1.1493 | 0.8592 | 0.8483 | 0.1347 | 0.0715 |
| 0.4309 | 14.99 | 653 | 0.6123 | 0.8477 | 0.2485 | 1.1889 | 0.8477 | 0.8232 | 0.1587 | 0.0764 |
| 0.4309 | 16.0 | 697 | 0.6352 | 0.8420 | 0.2615 | 1.1999 | 0.8420 | 0.8403 | 0.1368 | 0.0668 |
| 0.4309 | 16.99 | 740 | 0.6329 | 0.8333 | 0.2625 | 1.1748 | 0.8333 | 0.8249 | 0.1267 | 0.0744 |
| 0.4309 | 18.0 | 784 | 0.6350 | 0.8448 | 0.2590 | 1.2154 | 0.8448 | 0.8386 | 0.1423 | 0.0688 |
| 0.4309 | 18.98 | 827 | 0.5892 | 0.8592 | 0.2383 | 1.1001 | 0.8592 | 0.8515 | 0.1293 | 0.0630 |
| 0.4309 | 19.99 | 871 | 0.5981 | 0.8477 | 0.2476 | 1.0104 | 0.8477 | 0.8375 | 0.1345 | 0.0630 |
| 0.4309 | 20.98 | 914 | 0.6484 | 0.8420 | 0.2642 | 1.3553 | 0.8420 | 0.8292 | 0.1490 | 0.0770 |
| 0.4309 | 21.99 | 958 | 0.6298 | 0.8305 | 0.2657 | 1.1220 | 0.8305 | 0.8208 | 0.1292 | 0.0670 |
| 0.1285 | 22.98 | 1001 | 0.6325 | 0.8391 | 0.2633 | 1.2549 | 0.8391 | 0.8362 | 0.1328 | 0.0708 |
| 0.1285 | 23.99 | 1045 | 0.6032 | 0.8534 | 0.2486 | 1.1258 | 0.8534 | 0.8444 | 0.1229 | 0.0706 |
| 0.1285 | 25.0 | 1089 | 0.6080 | 0.8534 | 0.2460 | 1.2033 | 0.8534 | 0.8414 | 0.1257 | 0.0755 |
| 0.1285 | 25.99 | 1132 | 0.6321 | 0.8391 | 0.2667 | 1.2242 | 0.8391 | 0.8355 | 0.1349 | 0.0697 |
| 0.1285 | 27.0 | 1176 | 0.6325 | 0.8592 | 0.2522 | 1.2029 | 0.8592 | 0.8493 | 0.1278 | 0.0778 |
| 0.1285 | 27.98 | 1219 | 0.6585 | 0.8534 | 0.2546 | 1.3669 | 0.8534 | 0.8378 | 0.1368 | 0.0890 |
| 0.1285 | 28.99 | 1263 | 0.6302 | 0.8563 | 0.2517 | 1.2419 | 0.8563 | 0.8508 | 0.1294 | 0.0751 |
| 0.1285 | 29.98 | 1306 | 0.6663 | 0.8477 | 0.2637 | 1.4132 | 0.8477 | 0.8339 | 0.1399 | 0.0828 |
| 0.1285 | 30.99 | 1350 | 0.7063 | 0.8362 | 0.2799 | 1.4323 | 0.8362 | 0.8330 | 0.1441 | 0.0863 |
| 0.1285 | 32.0 | 1394 | 0.6564 | 0.8506 | 0.2570 | 1.1583 | 0.8506 | 0.8417 | 0.1358 | 0.0847 |
| 0.1285 | 32.99 | 1437 | 0.6738 | 0.8477 | 0.2647 | 1.3855 | 0.8477 | 0.8398 | 0.1305 | 0.0775 |
| 0.1285 | 34.0 | 1481 | 0.6528 | 0.8563 | 0.2559 | 1.2601 | 0.8563 | 0.8462 | 0.1310 | 0.0789 |
| 0.0385 | 34.98 | 1524 | 0.6534 | 0.8563 | 0.2537 | 1.2931 | 0.8563 | 0.8461 | 0.1241 | 0.0773 |
| 0.0385 | 35.99 | 1568 | 0.6541 | 0.8534 | 0.2525 | 1.2589 | 0.8534 | 0.8449 | 0.1315 | 0.0833 |
| 0.0385 | 36.98 | 1611 | 0.6769 | 0.8592 | 0.2545 | 1.4351 | 0.8592 | 0.8492 | 0.1242 | 0.0792 |
| 0.0385 | 37.99 | 1655 | 0.6824 | 0.8592 | 0.2576 | 1.2241 | 0.8592 | 0.8472 | 0.1327 | 0.0810 |
| 0.0385 | 38.98 | 1698 | 0.6843 | 0.8563 | 0.2589 | 1.3394 | 0.8563 | 0.8450 | 0.1311 | 0.0802 |
| 0.0385 | 39.99 | 1742 | 0.6964 | 0.8506 | 0.2630 | 1.2625 | 0.8506 | 0.8405 | 0.1310 | 0.0789 |
| 0.0385 | 41.0 | 1786 | 0.7051 | 0.8534 | 0.2671 | 1.3296 | 0.8534 | 0.8434 | 0.1353 | 0.0794 |
| 0.0385 | 41.99 | 1829 | 0.7006 | 0.8506 | 0.2645 | 1.2965 | 0.8506 | 0.8400 | 0.1373 | 0.0796 |
| 0.0385 | 43.0 | 1873 | 0.7054 | 0.8563 | 0.2646 | 1.2973 | 0.8563 | 0.8450 | 0.1313 | 0.0790 |
| 0.0385 | 43.98 | 1916 | 0.7143 | 0.8506 | 0.2673 | 1.2640 | 0.8506 | 0.8399 | 0.1359 | 0.0803 |
| 0.0385 | 44.99 | 1960 | 0.7168 | 0.8534 | 0.2665 | 1.3058 | 0.8534 | 0.8429 | 0.1389 | 0.0820 |
| 0.0206 | 45.98 | 2003 | 0.7204 | 0.8506 | 0.2669 | 1.3009 | 0.8506 | 0.8384 | 0.1336 | 0.0805 |
| 0.0206 | 46.99 | 2047 | 0.7265 | 0.8534 | 0.2683 | 1.2633 | 0.8534 | 0.8415 | 0.1319 | 0.0806 |
| 0.0206 | 48.0 | 2091 | 0.7311 | 0.8506 | 0.2695 | 1.2725 | 0.8506 | 0.8396 | 0.1372 | 0.0811 |
| 0.0206 | 48.99 | 2134 | 0.7384 | 0.8477 | 0.2729 | 1.3385 | 0.8477 | 0.8364 | 0.1387 | 0.0807 |
| 0.0206 | 50.0 | 2178 | 0.7383 | 0.8534 | 0.2695 | 1.1951 | 0.8534 | 0.8406 | 0.1344 | 0.0827 |
| 0.0206 | 50.98 | 2221 | 0.7440 | 0.8506 | 0.2740 | 1.3360 | 0.8506 | 0.8394 | 0.1418 | 0.0812 |
| 0.0206 | 51.99 | 2265 | 0.7455 | 0.8506 | 0.2727 | 1.2704 | 0.8506 | 0.8388 | 0.1351 | 0.0816 |
| 0.0206 | 52.98 | 2308 | 0.7474 | 0.8506 | 0.2708 | 1.2622 | 0.8506 | 0.8384 | 0.1334 | 0.0823 |
| 0.0206 | 53.99 | 2352 | 0.7581 | 0.8477 | 0.2750 | 1.3446 | 0.8477 | 0.8374 | 0.1406 | 0.0826 |
| 0.0206 | 54.98 | 2395 | 0.7571 | 0.8477 | 0.2751 | 1.3703 | 0.8477 | 0.8363 | 0.1378 | 0.0814 |
| 0.0206 | 55.99 | 2439 | 0.7618 | 0.8477 | 0.2752 | 1.3702 | 0.8477 | 0.8363 | 0.1363 | 0.0827 |
| 0.0206 | 57.0 | 2483 | 0.7638 | 0.8477 | 0.2749 | 1.3774 | 0.8477 | 0.8363 | 0.1394 | 0.0819 |
| 0.0135 | 57.99 | 2526 | 0.7693 | 0.8477 | 0.2760 | 1.3370 | 0.8477 | 0.8363 | 0.1378 | 0.0824 |
| 0.0135 | 59.0 | 2570 | 0.7724 | 0.8448 | 0.2779 | 1.3710 | 0.8448 | 0.8344 | 0.1431 | 0.0823 |
| 0.0135 | 59.98 | 2613 | 0.7780 | 0.8477 | 0.2784 | 1.3328 | 0.8477 | 0.8363 | 0.1463 | 0.0828 |
| 0.0135 | 60.99 | 2657 | 0.7818 | 0.8477 | 0.2795 | 1.3289 | 0.8477 | 0.8363 | 0.1466 | 0.0828 |
| 0.0135 | 61.98 | 2700 | 0.7847 | 0.8420 | 0.2805 | 1.3308 | 0.8420 | 0.8308 | 0.1418 | 0.0830 |
| 0.0135 | 62.99 | 2744 | 0.7851 | 0.8448 | 0.2782 | 1.3650 | 0.8448 | 0.8344 | 0.1411 | 0.0834 |
| 0.0135 | 64.0 | 2788 | 0.7925 | 0.8420 | 0.2829 | 1.4383 | 0.8420 | 0.8319 | 0.1425 | 0.0821 |
| 0.0135 | 64.99 | 2831 | 0.7959 | 0.8448 | 0.2826 | 1.4130 | 0.8448 | 0.8353 | 0.1431 | 0.0826 |
| 0.0135 | 66.0 | 2875 | 0.7989 | 0.8420 | 0.2821 | 1.4040 | 0.8420 | 0.8285 | 0.1446 | 0.0833 |
| 0.0135 | 66.98 | 2918 | 0.7996 | 0.8477 | 0.2807 | 1.3296 | 0.8477 | 0.8363 | 0.1464 | 0.0837 |
| 0.0135 | 67.99 | 2962 | 0.8042 | 0.8448 | 0.2824 | 1.3637 | 0.8448 | 0.8344 | 0.1434 | 0.0837 |
| 0.0097 | 68.98 | 3005 | 0.8095 | 0.8391 | 0.2845 | 1.3635 | 0.8391 | 0.8275 | 0.1468 | 0.0835 |
| 0.0097 | 69.99 | 3049 | 0.8073 | 0.8448 | 0.2824 | 1.3640 | 0.8448 | 0.8344 | 0.1413 | 0.0833 |
| 0.0097 | 70.98 | 3092 | 0.8140 | 0.8477 | 0.2834 | 1.3617 | 0.8477 | 0.8363 | 0.1444 | 0.0837 |
| 0.0097 | 71.99 | 3136 | 0.8152 | 0.8420 | 0.2842 | 1.4009 | 0.8420 | 0.8277 | 0.1439 | 0.0840 |
| 0.0097 | 73.0 | 3180 | 0.8163 | 0.8391 | 0.2858 | 1.4029 | 0.8391 | 0.8246 | 0.1482 | 0.0836 |
| 0.0097 | 73.99 | 3223 | 0.8192 | 0.8391 | 0.2844 | 1.3644 | 0.8391 | 0.8240 | 0.1475 | 0.0843 |
| 0.0097 | 75.0 | 3267 | 0.8225 | 0.8448 | 0.2836 | 1.3593 | 0.8448 | 0.8344 | 0.1473 | 0.0847 |
| 0.0097 | 75.98 | 3310 | 0.8267 | 0.8362 | 0.2859 | 1.3642 | 0.8362 | 0.8207 | 0.1473 | 0.0840 |
| 0.0097 | 76.99 | 3354 | 0.8275 | 0.8391 | 0.2847 | 1.3618 | 0.8391 | 0.8240 | 0.1450 | 0.0849 |
| 0.0097 | 77.98 | 3397 | 0.8325 | 0.8362 | 0.2879 | 1.3686 | 0.8362 | 0.8207 | 0.1491 | 0.0843 |
| 0.0097 | 78.99 | 3441 | 0.8389 | 0.8448 | 0.2885 | 1.3629 | 0.8448 | 0.8329 | 0.1504 | 0.0833 |
| 0.0097 | 80.0 | 3485 | 0.8420 | 0.8420 | 0.2887 | 1.3610 | 0.8420 | 0.8261 | 0.1458 | 0.0837 |
| 0.0073 | 80.99 | 3528 | 0.8452 | 0.8362 | 0.2900 | 1.4064 | 0.8362 | 0.8221 | 0.1488 | 0.0833 |
| 0.0073 | 82.0 | 3572 | 0.8492 | 0.8362 | 0.2898 | 1.4076 | 0.8362 | 0.8221 | 0.1500 | 0.0837 |
| 0.0073 | 82.98 | 3615 | 0.8478 | 0.8362 | 0.2895 | 1.3609 | 0.8362 | 0.8207 | 0.1485 | 0.0847 |
| 0.0073 | 83.99 | 3659 | 0.8483 | 0.8391 | 0.2880 | 1.3622 | 0.8391 | 0.8243 | 0.1480 | 0.0842 |
| 0.0073 | 84.98 | 3702 | 0.8534 | 0.8420 | 0.2892 | 1.3609 | 0.8420 | 0.8261 | 0.1468 | 0.0843 |
| 0.0073 | 85.99 | 3746 | 0.8547 | 0.8333 | 0.2898 | 1.4028 | 0.8333 | 0.8186 | 0.1513 | 0.0846 |
| 0.0073 | 86.98 | 3789 | 0.8618 | 0.8391 | 0.2906 | 1.3597 | 0.8391 | 0.8243 | 0.1445 | 0.0846 |
| 0.0073 | 87.99 | 3833 | 0.8594 | 0.8420 | 0.2885 | 1.3265 | 0.8420 | 0.8311 | 0.1462 | 0.0848 |
| 0.0073 | 89.0 | 3877 | 0.8669 | 0.8391 | 0.2911 | 1.3592 | 0.8391 | 0.8243 | 0.1471 | 0.0843 |
| 0.0073 | 89.99 | 3920 | 0.8664 | 0.8391 | 0.2901 | 1.3597 | 0.8391 | 0.8243 | 0.1468 | 0.0852 |
| 0.0073 | 91.0 | 3964 | 0.8678 | 0.8420 | 0.2905 | 1.3253 | 0.8420 | 0.8296 | 0.1462 | 0.0854 |
| 0.0057 | 91.98 | 4007 | 0.8719 | 0.8391 | 0.2909 | 1.3585 | 0.8391 | 0.8243 | 0.1475 | 0.0853 |
| 0.0057 | 92.99 | 4051 | 0.8768 | 0.8391 | 0.2930 | 1.3595 | 0.8391 | 0.8243 | 0.1493 | 0.0852 |
| 0.0057 | 93.98 | 4094 | 0.8785 | 0.8333 | 0.2928 | 1.4034 | 0.8333 | 0.8203 | 0.1529 | 0.0849 |
| 0.0057 | 94.99 | 4138 | 0.8859 | 0.8333 | 0.2942 | 1.3684 | 0.8333 | 0.8183 | 0.1543 | 0.0844 |
| 0.0057 | 96.0 | 4182 | 0.8839 | 0.8362 | 0.2937 | 1.3597 | 0.8362 | 0.8221 | 0.1497 | 0.0852 |
| 0.0057 | 96.99 | 4225 | 0.8864 | 0.8333 | 0.2940 | 1.4012 | 0.8333 | 0.8203 | 0.1532 | 0.0850 |
| 0.0057 | 98.0 | 4269 | 0.8879 | 0.8362 | 0.2941 | 1.3607 | 0.8362 | 0.8221 | 0.1504 | 0.0849 |
| 0.0057 | 98.98 | 4312 | 0.8921 | 0.8333 | 0.2954 | 1.3609 | 0.8333 | 0.8183 | 0.1521 | 0.0851 |
| 0.0057 | 99.99 | 4356 | 0.8949 | 0.8391 | 0.2945 | 1.3575 | 0.8391 | 0.8243 | 0.1491 | 0.0854 |
| 0.0057 | 100.98 | 4399 | 0.8945 | 0.8362 | 0.2945 | 1.3591 | 0.8362 | 0.8221 | 0.1500 | 0.0856 |
| 0.0057 | 101.99 | 4443 | 0.8985 | 0.8333 | 0.2944 | 1.3599 | 0.8333 | 0.8183 | 0.1530 | 0.0854 |
| 0.0057 | 102.98 | 4486 | 0.8987 | 0.8391 | 0.2951 | 1.3586 | 0.8391 | 0.8246 | 0.1499 | 0.0850 |
| 0.0045 | 103.99 | 4530 | 0.9025 | 0.8362 | 0.2957 | 1.3592 | 0.8362 | 0.8221 | 0.1510 | 0.0857 |
| 0.0045 | 105.0 | 4574 | 0.9082 | 0.8305 | 0.2972 | 1.3625 | 0.8305 | 0.8165 | 0.1568 | 0.0852 |
| 0.0045 | 105.99 | 4617 | 0.9087 | 0.8362 | 0.2958 | 1.3579 | 0.8362 | 0.8221 | 0.1505 | 0.0858 |
| 0.0045 | 107.0 | 4661 | 0.9105 | 0.8305 | 0.2977 | 1.3619 | 0.8305 | 0.8165 | 0.1561 | 0.0844 |
| 0.0045 | 107.98 | 4704 | 0.9136 | 0.8305 | 0.2978 | 1.3994 | 0.8305 | 0.8165 | 0.1559 | 0.0851 |
| 0.0045 | 108.99 | 4748 | 0.9148 | 0.8391 | 0.2968 | 1.3573 | 0.8391 | 0.8243 | 0.1504 | 0.0856 |
| 0.0045 | 109.98 | 4791 | 0.9188 | 0.8333 | 0.2974 | 1.3569 | 0.8333 | 0.8183 | 0.1532 | 0.0850 |
| 0.0045 | 110.99 | 4835 | 0.9164 | 0.8362 | 0.2959 | 1.3595 | 0.8362 | 0.8221 | 0.1507 | 0.0857 |
| 0.0045 | 112.0 | 4879 | 0.9221 | 0.8333 | 0.2977 | 1.3573 | 0.8333 | 0.8183 | 0.1550 | 0.0857 |
| 0.0045 | 112.99 | 4922 | 0.9256 | 0.8305 | 0.2990 | 1.3599 | 0.8305 | 0.8165 | 0.1574 | 0.0852 |
| 0.0045 | 114.0 | 4966 | 0.9284 | 0.8305 | 0.2994 | 1.3610 | 0.8305 | 0.8165 | 0.1572 | 0.0848 |
| 0.0037 | 114.98 | 5009 | 0.9312 | 0.8333 | 0.2998 | 1.3565 | 0.8333 | 0.8183 | 0.1537 | 0.0857 |
| 0.0037 | 115.99 | 5053 | 0.9322 | 0.8333 | 0.2995 | 1.3583 | 0.8333 | 0.8183 | 0.1543 | 0.0852 |
| 0.0037 | 116.98 | 5096 | 0.9385 | 0.8305 | 0.3007 | 1.3593 | 0.8305 | 0.8165 | 0.1577 | 0.0852 |
| 0.0037 | 117.99 | 5140 | 0.9386 | 0.8305 | 0.3009 | 1.4329 | 0.8305 | 0.8165 | 0.1582 | 0.0851 |
| 0.0037 | 118.98 | 5183 | 0.9386 | 0.8333 | 0.2996 | 1.3570 | 0.8333 | 0.8183 | 0.1542 | 0.0855 |
| 0.0037 | 119.99 | 5227 | 0.9406 | 0.8333 | 0.2995 | 1.3554 | 0.8333 | 0.8183 | 0.1540 | 0.0848 |
| 0.0037 | 121.0 | 5271 | 0.9442 | 0.8305 | 0.3006 | 1.3589 | 0.8305 | 0.8165 | 0.1570 | 0.0849 |
| 0.0037 | 121.99 | 5314 | 0.9435 | 0.8333 | 0.3000 | 1.3551 | 0.8333 | 0.8183 | 0.1546 | 0.0855 |
| 0.0037 | 123.0 | 5358 | 0.9456 | 0.8333 | 0.2996 | 1.3550 | 0.8333 | 0.8183 | 0.1544 | 0.0848 |
| 0.0037 | 123.98 | 5401 | 0.9490 | 0.8333 | 0.3008 | 1.3561 | 0.8333 | 0.8183 | 0.1547 | 0.0850 |
| 0.0037 | 124.99 | 5445 | 0.9500 | 0.8333 | 0.3011 | 1.3592 | 0.8333 | 0.8183 | 0.1551 | 0.0846 |
| 0.0037 | 125.98 | 5488 | 0.9513 | 0.8333 | 0.3003 | 1.3549 | 0.8333 | 0.8183 | 0.1544 | 0.0845 |
| 0.0031 | 126.99 | 5532 | 0.9575 | 0.8305 | 0.3024 | 1.3580 | 0.8305 | 0.8165 | 0.1581 | 0.0849 |
| 0.0031 | 128.0 | 5576 | 0.9593 | 0.8305 | 0.3025 | 1.4028 | 0.8305 | 0.8165 | 0.1591 | 0.0851 |
| 0.0031 | 128.99 | 5619 | 0.9594 | 0.8305 | 0.3021 | 1.3619 | 0.8305 | 0.8165 | 0.1579 | 0.0849 |
| 0.0031 | 130.0 | 5663 | 0.9628 | 0.8305 | 0.3025 | 1.3589 | 0.8305 | 0.8165 | 0.1587 | 0.0847 |
| 0.0031 | 130.98 | 5706 | 0.9652 | 0.8305 | 0.3031 | 1.3599 | 0.8305 | 0.8165 | 0.1593 | 0.0844 |
| 0.0031 | 131.99 | 5750 | 0.9646 | 0.8362 | 0.3005 | 1.3353 | 0.8362 | 0.8205 | 0.1520 | 0.0851 |
| 0.0031 | 132.98 | 5793 | 0.9658 | 0.8333 | 0.3021 | 1.3562 | 0.8333 | 0.8183 | 0.1555 | 0.0849 |
| 0.0031 | 133.99 | 5837 | 0.9698 | 0.8333 | 0.3023 | 1.3545 | 0.8333 | 0.8183 | 0.1554 | 0.0845 |
| 0.0031 | 134.98 | 5880 | 0.9716 | 0.8333 | 0.3032 | 1.3559 | 0.8333 | 0.8183 | 0.1555 | 0.0852 |
| 0.0031 | 135.99 | 5924 | 0.9736 | 0.8305 | 0.3037 | 1.3624 | 0.8305 | 0.8165 | 0.1584 | 0.0849 |
| 0.0031 | 137.0 | 5968 | 0.9760 | 0.8333 | 0.3039 | 1.3575 | 0.8333 | 0.8183 | 0.1551 | 0.0845 |
| 0.0026 | 137.99 | 6011 | 0.9789 | 0.8305 | 0.3041 | 1.3569 | 0.8305 | 0.8165 | 0.1592 | 0.0848 |
| 0.0026 | 139.0 | 6055 | 0.9801 | 0.8305 | 0.3040 | 1.3574 | 0.8305 | 0.8165 | 0.1598 | 0.0854 |
| 0.0026 | 139.98 | 6098 | 0.9806 | 0.8333 | 0.3035 | 1.3552 | 0.8333 | 0.8183 | 0.1557 | 0.0852 |
| 0.0026 | 140.99 | 6142 | 0.9835 | 0.8333 | 0.3041 | 1.3574 | 0.8333 | 0.8183 | 0.1564 | 0.0846 |
| 0.0026 | 141.98 | 6185 | 0.9838 | 0.8333 | 0.3037 | 1.3549 | 0.8333 | 0.8183 | 0.1557 | 0.0849 |
| 0.0026 | 142.99 | 6229 | 0.9872 | 0.8333 | 0.3044 | 1.3544 | 0.8333 | 0.8183 | 0.1557 | 0.0851 |
| 0.0026 | 144.0 | 6273 | 0.9900 | 0.8305 | 0.3056 | 1.3654 | 0.8305 | 0.8165 | 0.1597 | 0.0847 |
| 0.0026 | 144.99 | 6316 | 0.9907 | 0.8333 | 0.3049 | 1.3551 | 0.8333 | 0.8183 | 0.1565 | 0.0854 |
| 0.0026 | 146.0 | 6360 | 0.9896 | 0.8333 | 0.3044 | 1.3569 | 0.8333 | 0.8183 | 0.1563 | 0.0843 |
| 0.0026 | 146.98 | 6403 | 0.9938 | 0.8333 | 0.3053 | 1.3550 | 0.8333 | 0.8183 | 0.1562 | 0.0844 |
| 0.0026 | 147.99 | 6447 | 0.9962 | 0.8305 | 0.3056 | 1.3615 | 0.8305 | 0.8165 | 0.1594 | 0.0844 |
| 0.0026 | 148.98 | 6490 | 0.9954 | 0.8305 | 0.3051 | 1.3601 | 0.8305 | 0.8165 | 0.1590 | 0.0847 |
| 0.0022 | 149.99 | 6534 | 0.9961 | 0.8333 | 0.3043 | 1.3550 | 0.8333 | 0.8183 | 0.1554 | 0.0847 |
| 0.0022 | 150.98 | 6577 | 1.0026 | 0.8333 | 0.3059 | 1.3555 | 0.8333 | 0.8183 | 0.1563 | 0.0853 |
| 0.0022 | 151.99 | 6621 | 1.0004 | 0.8333 | 0.3049 | 1.3544 | 0.8333 | 0.8183 | 0.1566 | 0.0847 |
| 0.0022 | 153.0 | 6665 | 1.0024 | 0.8305 | 0.3058 | 1.3606 | 0.8305 | 0.8165 | 0.1595 | 0.0846 |
| 0.0022 | 153.99 | 6708 | 1.0054 | 0.8305 | 0.3064 | 1.3598 | 0.8305 | 0.8165 | 0.1591 | 0.0848 |
| 0.0022 | 155.0 | 6752 | 1.0053 | 0.8333 | 0.3054 | 1.3548 | 0.8333 | 0.8183 | 0.1562 | 0.0845 |
| 0.0022 | 155.98 | 6795 | 1.0068 | 0.8333 | 0.3053 | 1.3548 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.0022 | 156.99 | 6839 | 1.0076 | 0.8333 | 0.3055 | 1.3551 | 0.8333 | 0.8183 | 0.1561 | 0.0844 |
| 0.0022 | 157.98 | 6882 | 1.0105 | 0.8333 | 0.3059 | 1.3546 | 0.8333 | 0.8183 | 0.1563 | 0.0845 |
| 0.0022 | 158.99 | 6926 | 1.0114 | 0.8333 | 0.3061 | 1.3555 | 0.8333 | 0.8183 | 0.1559 | 0.0851 |
| 0.0022 | 160.0 | 6970 | 1.0108 | 0.8333 | 0.3061 | 1.3586 | 0.8333 | 0.8183 | 0.1561 | 0.0848 |
| 0.002 | 160.99 | 7013 | 1.0129 | 0.8333 | 0.3064 | 1.3577 | 0.8333 | 0.8183 | 0.1560 | 0.0845 |
| 0.002 | 162.0 | 7057 | 1.0141 | 0.8333 | 0.3060 | 1.3542 | 0.8333 | 0.8183 | 0.1562 | 0.0845 |
| 0.002 | 162.98 | 7100 | 1.0150 | 0.8333 | 0.3063 | 1.3555 | 0.8333 | 0.8183 | 0.1563 | 0.0847 |
| 0.002 | 163.99 | 7144 | 1.0181 | 0.8305 | 0.3071 | 1.3616 | 0.8305 | 0.8165 | 0.1587 | 0.0847 |
| 0.002 | 164.98 | 7187 | 1.0197 | 0.8305 | 0.3073 | 1.3610 | 0.8305 | 0.8165 | 0.1585 | 0.0847 |
| 0.002 | 165.99 | 7231 | 1.0203 | 0.8333 | 0.3071 | 1.3566 | 0.8333 | 0.8183 | 0.1565 | 0.0846 |
| 0.002 | 166.98 | 7274 | 1.0214 | 0.8333 | 0.3070 | 1.3561 | 0.8333 | 0.8183 | 0.1564 | 0.0845 |
| 0.002 | 167.99 | 7318 | 1.0211 | 0.8333 | 0.3067 | 1.3558 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.002 | 169.0 | 7362 | 1.0255 | 0.8305 | 0.3077 | 1.3564 | 0.8305 | 0.8165 | 0.1592 | 0.0846 |
| 0.002 | 169.99 | 7405 | 1.0238 | 0.8333 | 0.3066 | 1.3535 | 0.8333 | 0.8183 | 0.1567 | 0.0844 |
| 0.002 | 171.0 | 7449 | 1.0258 | 0.8333 | 0.3075 | 1.3580 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.002 | 171.98 | 7492 | 1.0260 | 0.8333 | 0.3073 | 1.3594 | 0.8333 | 0.8183 | 0.1559 | 0.0846 |
| 0.0018 | 172.99 | 7536 | 1.0281 | 0.8305 | 0.3077 | 1.3584 | 0.8305 | 0.8165 | 0.1586 | 0.0847 |
| 0.0018 | 173.98 | 7579 | 1.0274 | 0.8333 | 0.3073 | 1.3577 | 0.8333 | 0.8183 | 0.1560 | 0.0851 |
| 0.0018 | 174.99 | 7623 | 1.0323 | 0.8305 | 0.3082 | 1.3577 | 0.8305 | 0.8165 | 0.1596 | 0.0848 |
| 0.0018 | 176.0 | 7667 | 1.0303 | 0.8333 | 0.3076 | 1.3579 | 0.8333 | 0.8183 | 0.1561 | 0.0846 |
| 0.0018 | 176.99 | 7710 | 1.0325 | 0.8333 | 0.3081 | 1.3567 | 0.8333 | 0.8183 | 0.1565 | 0.0845 |
| 0.0018 | 178.0 | 7754 | 1.0319 | 0.8333 | 0.3077 | 1.3569 | 0.8333 | 0.8183 | 0.1560 | 0.0847 |
| 0.0018 | 178.98 | 7797 | 1.0340 | 0.8333 | 0.3081 | 1.3568 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0018 | 179.99 | 7841 | 1.0331 | 0.8333 | 0.3072 | 1.3550 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0018 | 180.98 | 7884 | 1.0346 | 0.8333 | 0.3079 | 1.3563 | 0.8333 | 0.8183 | 0.1561 | 0.0847 |
| 0.0018 | 181.99 | 7928 | 1.0344 | 0.8333 | 0.3079 | 1.3577 | 0.8333 | 0.8183 | 0.1565 | 0.0847 |
| 0.0018 | 182.98 | 7971 | 1.0363 | 0.8333 | 0.3080 | 1.3556 | 0.8333 | 0.8183 | 0.1566 | 0.0850 |
| 0.0016 | 183.99 | 8015 | 1.0368 | 0.8333 | 0.3080 | 1.3569 | 0.8333 | 0.8183 | 0.1561 | 0.0847 |
| 0.0016 | 185.0 | 8059 | 1.0369 | 0.8333 | 0.3080 | 1.3563 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0016 | 185.99 | 8102 | 1.0373 | 0.8333 | 0.3080 | 1.3565 | 0.8333 | 0.8183 | 0.1561 | 0.0850 |
| 0.0016 | 187.0 | 8146 | 1.0377 | 0.8333 | 0.3080 | 1.3568 | 0.8333 | 0.8183 | 0.1561 | 0.0846 |
| 0.0016 | 187.98 | 8189 | 1.0392 | 0.8333 | 0.3084 | 1.3577 | 0.8333 | 0.8183 | 0.1565 | 0.0846 |
| 0.0016 | 188.99 | 8233 | 1.0391 | 0.8333 | 0.3082 | 1.3564 | 0.8333 | 0.8183 | 0.1564 | 0.0848 |
| 0.0016 | 189.98 | 8276 | 1.0393 | 0.8333 | 0.3081 | 1.3561 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0016 | 190.99 | 8320 | 1.0398 | 0.8333 | 0.3084 | 1.3582 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.0016 | 192.0 | 8364 | 1.0405 | 0.8333 | 0.3083 | 1.3558 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 192.99 | 8407 | 1.0401 | 0.8333 | 0.3082 | 1.3558 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 194.0 | 8451 | 1.0407 | 0.8333 | 0.3083 | 1.3564 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 194.98 | 8494 | 1.0414 | 0.8333 | 0.3086 | 1.3573 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0015 | 195.99 | 8538 | 1.0410 | 0.8333 | 0.3084 | 1.3567 | 0.8333 | 0.8183 | 0.1564 | 0.0848 |
| 0.0015 | 196.98 | 8581 | 1.0411 | 0.8333 | 0.3084 | 1.3568 | 0.8333 | 0.8183 | 0.1563 | 0.0846 |
| 0.0015 | 197.42 | 8600 | 1.0411 | 0.8333 | 0.3084 | 1.3568 | 0.8333 | 0.8183 | 0.1563 | 0.0847 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Isotonic/informal_to_formal
|
Isotonic
| 2023-07-12T22:55:28Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"style-transfer",
"seq2seq",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-01T05:59:36Z |
---
language: "en"
tags:
- style-transfer
- text2text-generation
- seq2seq
inference: true
---
# Formality Style Transfer
## Model description
T5 Model for Formality Style Transfer. Trained on the GYAFC dataset.
## How to use
PyTorch model available.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("Isotonic/informal_to_formal")
model = AutoModelForSeq2SeqLM.from_pretrained("Isotonic/informal_to_formal").to(device)

sentence = "will you look into these two deals and let me know"
text = "Make the following sentence Formal: " + sentence + " </s>"

encoding = tokenizer(text, padding=True, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    do_sample=True,
    top_k=120,
    top_p=0.95,
    early_stopping=True,
    num_return_sequences=5,
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)

# Example output: "Would you look into the two deals in question, then let me know?"
```
|
komo-dono/rammatra_jp
|
komo-dono
| 2023-07-12T22:52:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-12T22:51:26Z |
---
license: openrail
language:
- ja
tags:
- music
---
rammatra jp, trained for 500 epochs.
|
KingShmeeky/KingshmeekyRVC
|
KingShmeeky
| 2023-07-12T22:43:21Z | 0 | 0 | null |
[
"music",
"en",
"license:openrail",
"region:us"
] | null | 2023-07-12T22:30:27Z |
---
license: openrail
language:
- en
tags:
- music
---
|
nolanaatama/nglshdbhtcvbrnnknckrbckrgnshnmpctrvcv2150pchmklgn
|
nolanaatama
| 2023-07-12T22:40:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-12T22:08:55Z |
---
license: creativeml-openrail-m
---
|
lovelyxs/Pyramids
|
lovelyxs
| 2023-07-12T22:37:03Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-12T22:36:58Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lovelyxs/Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
komo-dono/collei_jp
|
komo-dono
| 2023-07-12T22:28:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-12T22:27:51Z |
---
license: openrail
language:
- ja
tags:
- music
---
collei japanese, trained for 500 epochs.
|
cworthingtonfujitsu/falcon-7b-instruct-jukebox
|
cworthingtonfujitsu
| 2023-07-12T21:58:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T21:58:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
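A minimal sketch of the equivalent `transformers`/`bitsandbytes` setup at load time; the base model id is inferred from this repo's name and is an assumption:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# "tiiuae/falcon-7b-instruct" is an assumption based on the repo name.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "cworthingtonfujitsu/falcon-7b-instruct-jukebox")
```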
### Framework versions
- PEFT 0.4.0.dev0
|
lovelyxs/ppo-SnowballTarget
|
lovelyxs
| 2023-07-12T21:43:03Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-12T21:42:55Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lovelyxs/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
whywynn/q-FrozenLake-v1-4x4-noSlippery
|
whywynn
| 2023-07-12T21:24:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T21:24:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the pickle-download helper defined in the Deep RL Course
# notebooks (it fetches the file via huggingface_hub and unpickles it).
model = load_from_hub(repo_id="whywynn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
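To actually run the policy, a short greedy rollout works; this sketch assumes the pickled dict stores the Q-table under a `"qtable"` key, as the Deep RL Course notebooks do, which this card does not confirm:
```python
import numpy as np

state, info = env.reset(seed=42)
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key is an assumption
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
print("last reward:", reward)
```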
|
yanex0/cn-v1-1
|
yanex0
| 2023-07-12T21:20:20Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-12T21:14:00Z |
---
license: openrail
---
These are the model files for [ControlNet 1.1](https://github.com/lllyasviel/ControlNet-v1-1-nightly).
This model card will be filled out in more detail once 1.1 is officially merged into ControlNet.
|
SrPrieto/ppo-LunarLander-v2
|
SrPrieto
| 2023-07-12T21:14:49Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T21:14:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.18 +/- 13.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
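Until the author completes this section, a plausible minimal sketch is the standard `huggingface_sb3` loading pattern; the checkpoint filename follows the usual `<repo-name>.zip` convention and is an assumption:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the common "<repo-name>.zip" convention.
checkpoint = load_from_hub(repo_id="SrPrieto/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```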
|
lovelyxs/Reinforce-Pixelcopter-PLE-v0
|
lovelyxs
| 2023-07-12T21:06:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T21:06:13Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 45.30 +/- 39.51
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
carbon225/byt5-abbreviations-pl
|
carbon225
| 2023-07-12T21:00:28Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pl",
"dataset:carbon225/poleval-abbreviation-disambiguation-wiki",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-09T21:40:24Z |
---
datasets:
- carbon225/poleval-abbreviation-disambiguation-wiki
language:
- pl
widget:
- text: "Kolejne 0,12 <mask>pkt. proc.</mask> wynika ze spadku popytu na polski eksport, a 0,08 z zaburzeń na rynku wewnętrznym"
example_title: "Example 1"
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MarcoIPolo/distilbert-base-uncased-finetuned-emotion
|
MarcoIPolo
| 2023-07-12T20:50:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T16:05:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245630401134893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2193
- Accuracy: 0.9245
- F1: 0.9246
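A minimal inference sketch with the `transformers` pipeline API; the example sentence is illustrative:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MarcoIPolo/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this project turned out!"))
# -> [{"label": ..., "score": ...}] with labels from the emotion dataset
```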
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3295 | 0.899 | 0.8946 |
| No log | 2.0 | 500 | 0.2193 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grace-pro/afriberta-small-finetuned-hausa-2e-4
|
grace-pro
| 2023-07-12T20:50:37Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-12T20:15:26Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-small-finetuned-hausa-2e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-small-finetuned-hausa-2e-4
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2081
- Precision: 0.6383
- Recall: 0.4793
- F1: 0.5475
- Accuracy: 0.9589
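A minimal inference sketch via the `transformers` token-classification pipeline; the Hausa example sentence is illustrative, and the label set is whatever the fine-tuning data used:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/afriberta-small-finetuned-hausa-2e-4",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Shugaba Muhammadu Buhari ya ziyarci Kano a ranar Litinin."))
```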
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1575 | 1.0 | 1312 | 0.1439 | 0.6452 | 0.3971 | 0.4917 | 0.9569 |
| 0.1201 | 2.0 | 2624 | 0.1371 | 0.6344 | 0.4451 | 0.5231 | 0.9578 |
| 0.0831 | 3.0 | 3936 | 0.1544 | 0.6444 | 0.4727 | 0.5454 | 0.9591 |
| 0.0523 | 4.0 | 5248 | 0.1836 | 0.6500 | 0.4683 | 0.5444 | 0.9592 |
| 0.0318 | 5.0 | 6560 | 0.2081 | 0.6383 | 0.4793 | 0.5475 | 0.9589 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
foreverip/q-FrozenLake-v1-4x4-noSlippery
|
foreverip
| 2023-07-12T20:49:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T20:49:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the pickle-download helper defined in the Deep RL Course
# notebooks (it fetches the file via huggingface_hub and unpickles it).
model = load_from_hub(repo_id="foreverip/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
kimnguyenwork/Taxi-v3
|
kimnguyenwork
| 2023-07-12T20:45:36Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T20:45:29Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the pickle-download helper defined in the Deep RL Course
# notebooks (it fetches the file via huggingface_hub and unpickles it).
model = load_from_hub(repo_id="kimnguyenwork/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
1aurent/diy-ppo-LunarLander-v2
|
1aurent
| 2023-07-12T20:42:54Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T20:03:20Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -41.63 +/- 67.80
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'repo_id': '1aurent/diy-ppo-LunarLander-v2',
 'exp_name': 'hugggingface_colab',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.0003,
 'num_envs': 32,
 'num_steps': 64,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.999,
 'gae_lambda': 0.98,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'batch_size': 2048,
 'minibatch_size': 512}
```
|
kimnguyenwork/q-FrozenLake-v1
|
kimnguyenwork
| 2023-07-12T20:42:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T20:42:20Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the pickle-download helper defined in the Deep RL Course
# notebooks (it fetches the file via huggingface_hub and unpickles it).
model = load_from_hub(repo_id="kimnguyenwork/q-FrozenLake-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-concat-guten-rarity-no-cut
|
NasimB
| 2023-07-12T20:33:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T18:48:47Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6869 | 0.29 | 500 | 5.6385 |
| 5.3235 | 0.59 | 1000 | 5.2015 |
| 4.9865 | 0.88 | 1500 | 4.9498 |
| 4.7068 | 1.18 | 2000 | 4.8080 |
| 4.5674 | 1.47 | 2500 | 4.6941 |
| 4.4601 | 1.76 | 3000 | 4.5872 |
| 4.3293 | 2.06 | 3500 | 4.5155 |
| 4.1497 | 2.35 | 4000 | 4.4676 |
| 4.1182 | 2.64 | 4500 | 4.4072 |
| 4.0826 | 2.94 | 5000 | 4.3514 |
| 3.8664 | 3.23 | 5500 | 4.3488 |
| 3.8272 | 3.53 | 6000 | 4.3168 |
| 3.8034 | 3.82 | 6500 | 4.2843 |
| 3.6795 | 4.11 | 7000 | 4.2836 |
| 3.5333 | 4.41 | 7500 | 4.2764 |
| 3.534 | 4.7 | 8000 | 4.2603 |
| 3.5182 | 4.99 | 8500 | 4.2478 |
| 3.3437 | 5.29 | 9000 | 4.2620 |
| 3.3384 | 5.58 | 9500 | 4.2601 |
| 3.3385 | 5.88 | 10000 | 4.2595 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V12.4
|
SHENMU007
| 2023-07-12T20:26:54Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-12T17:42:58Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
VK246/IC_ver5b_coco_swin_gpt2_01pc_1e
|
VK246
| 2023-07-12T20:14:54Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-12T19:47:07Z |
---
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
- bleu
model-index:
- name: IC_ver5b_coco_swin_gpt2_01pc_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver5b_coco_swin_gpt2_01pc_1e
This model is a fine-tuned version of [VK246/IC_ver5a_coco_swin_gpt2_05pc_1e](https://huggingface.co/VK246/IC_ver5a_coco_swin_gpt2_05pc_1e) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1266
- Rouge1: 27.4772
- Rouge2: 5.9305
- Rougel: 25.1138
- Rougelsum: 25.1235
- Bleu: 2.437
- Gen Len: 11.1124
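A minimal captioning sketch using the `transformers` image-to-text pipeline; the image path is a placeholder:
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="VK246/IC_ver5b_coco_swin_gpt2_01pc_1e")
print(captioner("photo.jpg"))  # placeholder path; a URL also works
```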
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:------:|:-------:|
| 1.2093 | 0.42 | 25 | 1.1552 | 22.8898 | 3.6353 | 20.6781 | 20.6737 | 1.1554 | 11.1124 |
| 1.2149 | 0.85 | 50 | 1.1358 | 26.2857 | 5.2765 | 24.0266 | 24.0308 | 2.1954 | 11.1124 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jd06/TwoSentenceHorrorModel
|
jd06
| 2023-07-12T20:14:37Z | 211 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T20:51:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: TwoSentenceHorrorModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TwoSentenceHorrorModel
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.7786 |
| No log | 2.0 | 2 | 4.4930 |
| No log | 3.0 | 3 | 4.3563 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
odunola/transcriber-t5-v8
|
odunola
| 2023-07-12T20:08:13Z | 114 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-06T12:24:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: transcriber-t5-v8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transcriber-t5-v8
This model is a fine-tuned version of [odunola/transcriber-t5-v7](https://huggingface.co/odunola/transcriber-t5-v7) on the [odunola/transcriberv3](https://huggingface.co/datasets/odunola/transcriberv3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1924
## Model description
This model, a fine-tuned T5, is designed to pinpoint and extract the chapter and verse of specific Bible scriptures from a string of text. It is a major component of a broader project, but it is also versatile enough for other applications.
The model's training is driven by a unique dataset, painstakingly compiled from transcripts of sermons on YouTube and various online platforms. This hand-picked, curated data equips the model with a specialized understanding of religious discourse and biblical references.
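A minimal inference sketch with the `transformers` pipeline; the input phrasing is an illustrative assumption, since the card does not document the exact prompt format:
```python
from transformers import pipeline

extractor = pipeline("text2text-generation", model="odunola/transcriber-t5-v8")

text = "as we see in john chapter three verse sixteen, for God so loved the world"
print(extractor(text)[0]["generated_text"])
```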
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2215 | 0.23 | 500 | 0.2216 |
| 0.1814 | 0.47 | 1000 | 0.2197 |
| 0.2011 | 0.7 | 1500 | 0.2059 |
| 0.2595 | 0.94 | 2000 | 0.2009 |
| 0.2412 | 1.17 | 2500 | 0.2019 |
| 0.1785 | 1.41 | 3000 | 0.1970 |
| 0.1962 | 1.64 | 3500 | 0.1983 |
| 0.2009 | 1.88 | 4000 | 0.1963 |
| 0.2013 | 2.11 | 4500 | 0.1923 |
| 0.2715 | 2.35 | 5000 | 0.1929 |
| 0.2488 | 2.58 | 5500 | 0.1936 |
| 0.1185 | 2.81 | 6000 | 0.1924 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
newsrx/instructor-large
|
newsrx
| 2023-07-12T19:56:14Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"t5",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"prompt-retrieval",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"transformers",
"English",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2212.09741",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
sentence-similarity
| 2023-07-12T19:56:14Z |
---
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- prompt-retrieval
- text-reranking
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- t5
- English
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
language: en
inference: false
license: apache-2.0
model-index:
- name: INSTRUCTOR
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 88.13432835820896
- type: ap
value: 59.298209334395665
- type: f1
value: 83.31769058643586
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.526375
- type: ap
value: 88.16327709705504
- type: f1
value: 91.51095801287843
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.856
- type: f1
value: 45.41490917650942
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.223
- type: map_at_10
value: 47.947
- type: map_at_100
value: 48.742000000000004
- type: map_at_1000
value: 48.745
- type: map_at_3
value: 43.137
- type: map_at_5
value: 45.992
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 48.4
- type: mrr_at_100
value: 49.202
- type: mrr_at_1000
value: 49.205
- type: mrr_at_3
value: 43.551
- type: mrr_at_5
value: 46.467999999999996
- type: ndcg_at_1
value: 31.223
- type: ndcg_at_10
value: 57.045
- type: ndcg_at_100
value: 60.175
- type: ndcg_at_1000
value: 60.233000000000004
- type: ndcg_at_3
value: 47.171
- type: ndcg_at_5
value: 52.322
- type: precision_at_1
value: 31.223
- type: precision_at_10
value: 8.599
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.63
- type: precision_at_5
value: 14.282
- type: recall_at_1
value: 31.223
- type: recall_at_10
value: 85.989
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 58.89
- type: recall_at_5
value: 71.408
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.1621946393635
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.56417132407894
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.29539304390207
- type: mrr
value: 76.44484017060196
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.38746499431112
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.51298701298701
- type: f1
value: 77.49041754069235
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.61848554098577
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.32623280148178
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.803000000000004
- type: map_at_10
value: 48.848
- type: map_at_100
value: 50.5
- type: map_at_1000
value: 50.602999999999994
- type: map_at_3
value: 45.111000000000004
- type: map_at_5
value: 47.202
- type: mrr_at_1
value: 44.635000000000005
- type: mrr_at_10
value: 55.593
- type: mrr_at_100
value: 56.169999999999995
- type: mrr_at_1000
value: 56.19499999999999
- type: mrr_at_3
value: 53.361999999999995
- type: mrr_at_5
value: 54.806999999999995
- type: ndcg_at_1
value: 44.635000000000005
- type: ndcg_at_10
value: 55.899
- type: ndcg_at_100
value: 60.958
- type: ndcg_at_1000
value: 62.302
- type: ndcg_at_3
value: 51.051
- type: ndcg_at_5
value: 53.351000000000006
- type: precision_at_1
value: 44.635000000000005
- type: precision_at_10
value: 10.786999999999999
- type: precision_at_100
value: 1.6580000000000001
- type: precision_at_1000
value: 0.213
- type: precision_at_3
value: 24.893
- type: precision_at_5
value: 17.740000000000002
- type: recall_at_1
value: 35.803000000000004
- type: recall_at_10
value: 68.657
- type: recall_at_100
value: 89.77199999999999
- type: recall_at_1000
value: 97.67
- type: recall_at_3
value: 54.066
- type: recall_at_5
value: 60.788
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.706
- type: map_at_10
value: 44.896
- type: map_at_100
value: 46.299
- type: map_at_1000
value: 46.44
- type: map_at_3
value: 41.721000000000004
- type: map_at_5
value: 43.486000000000004
- type: mrr_at_1
value: 41.592
- type: mrr_at_10
value: 50.529
- type: mrr_at_100
value: 51.22
- type: mrr_at_1000
value: 51.258
- type: mrr_at_3
value: 48.205999999999996
- type: mrr_at_5
value: 49.528
- type: ndcg_at_1
value: 41.592
- type: ndcg_at_10
value: 50.77199999999999
- type: ndcg_at_100
value: 55.383
- type: ndcg_at_1000
value: 57.288
- type: ndcg_at_3
value: 46.324
- type: ndcg_at_5
value: 48.346000000000004
- type: precision_at_1
value: 41.592
- type: precision_at_10
value: 9.516
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.399
- type: precision_at_5
value: 15.770999999999999
- type: recall_at_1
value: 33.706
- type: recall_at_10
value: 61.353
- type: recall_at_100
value: 80.182
- type: recall_at_1000
value: 91.896
- type: recall_at_3
value: 48.204
- type: recall_at_5
value: 53.89699999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 44.424
- type: map_at_10
value: 57.169000000000004
- type: map_at_100
value: 58.202
- type: map_at_1000
value: 58.242000000000004
- type: map_at_3
value: 53.825
- type: map_at_5
value: 55.714
- type: mrr_at_1
value: 50.470000000000006
- type: mrr_at_10
value: 60.489000000000004
- type: mrr_at_100
value: 61.096
- type: mrr_at_1000
value: 61.112
- type: mrr_at_3
value: 58.192
- type: mrr_at_5
value: 59.611999999999995
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_10
value: 63.071999999999996
- type: ndcg_at_100
value: 66.964
- type: ndcg_at_1000
value: 67.659
- type: ndcg_at_3
value: 57.74399999999999
- type: ndcg_at_5
value: 60.367000000000004
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_10
value: 10.019
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 25.558999999999997
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 44.424
- type: recall_at_10
value: 77.02
- type: recall_at_100
value: 93.738
- type: recall_at_1000
value: 98.451
- type: recall_at_3
value: 62.888
- type: recall_at_5
value: 69.138
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.294
- type: map_at_10
value: 34.503
- type: map_at_100
value: 35.641
- type: map_at_1000
value: 35.724000000000004
- type: map_at_3
value: 31.753999999999998
- type: map_at_5
value: 33.190999999999995
- type: mrr_at_1
value: 28.362
- type: mrr_at_10
value: 36.53
- type: mrr_at_100
value: 37.541000000000004
- type: mrr_at_1000
value: 37.602000000000004
- type: mrr_at_3
value: 33.917
- type: mrr_at_5
value: 35.358000000000004
- type: ndcg_at_1
value: 28.362
- type: ndcg_at_10
value: 39.513999999999996
- type: ndcg_at_100
value: 44.815
- type: ndcg_at_1000
value: 46.839
- type: ndcg_at_3
value: 34.02
- type: ndcg_at_5
value: 36.522
- type: precision_at_1
value: 28.362
- type: precision_at_10
value: 6.101999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.161999999999999
- type: precision_at_5
value: 9.966
- type: recall_at_1
value: 26.294
- type: recall_at_10
value: 53.098
- type: recall_at_100
value: 76.877
- type: recall_at_1000
value: 91.834
- type: recall_at_3
value: 38.266
- type: recall_at_5
value: 44.287
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.407
- type: map_at_10
value: 25.185999999999996
- type: map_at_100
value: 26.533
- type: map_at_1000
value: 26.657999999999998
- type: map_at_3
value: 22.201999999999998
- type: map_at_5
value: 23.923
- type: mrr_at_1
value: 20.522000000000002
- type: mrr_at_10
value: 29.522
- type: mrr_at_100
value: 30.644
- type: mrr_at_1000
value: 30.713
- type: mrr_at_3
value: 26.679000000000002
- type: mrr_at_5
value: 28.483000000000004
- type: ndcg_at_1
value: 20.522000000000002
- type: ndcg_at_10
value: 30.656
- type: ndcg_at_100
value: 36.864999999999995
- type: ndcg_at_1000
value: 39.675
- type: ndcg_at_3
value: 25.319000000000003
- type: ndcg_at_5
value: 27.992
- type: precision_at_1
value: 20.522000000000002
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 12.396
- type: precision_at_5
value: 9.328
- type: recall_at_1
value: 16.407
- type: recall_at_10
value: 43.164
- type: recall_at_100
value: 69.695
- type: recall_at_1000
value: 89.41900000000001
- type: recall_at_3
value: 28.634999999999998
- type: recall_at_5
value: 35.308
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.473
- type: map_at_10
value: 41.676
- type: map_at_100
value: 43.120999999999995
- type: map_at_1000
value: 43.230000000000004
- type: map_at_3
value: 38.306000000000004
- type: map_at_5
value: 40.355999999999995
- type: mrr_at_1
value: 37.536
- type: mrr_at_10
value: 47.643
- type: mrr_at_100
value: 48.508
- type: mrr_at_1000
value: 48.551
- type: mrr_at_3
value: 45.348
- type: mrr_at_5
value: 46.744
- type: ndcg_at_1
value: 37.536
- type: ndcg_at_10
value: 47.823
- type: ndcg_at_100
value: 53.395
- type: ndcg_at_1000
value: 55.271
- type: ndcg_at_3
value: 42.768
- type: ndcg_at_5
value: 45.373000000000005
- type: precision_at_1
value: 37.536
- type: precision_at_10
value: 8.681
- type: precision_at_100
value: 1.34
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.468
- type: precision_at_5
value: 14.495
- type: recall_at_1
value: 30.473
- type: recall_at_10
value: 60.092999999999996
- type: recall_at_100
value: 82.733
- type: recall_at_1000
value: 94.875
- type: recall_at_3
value: 45.734
- type: recall_at_5
value: 52.691
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.976000000000003
- type: map_at_10
value: 41.097
- type: map_at_100
value: 42.547000000000004
- type: map_at_1000
value: 42.659000000000006
- type: map_at_3
value: 37.251
- type: map_at_5
value: 39.493
- type: mrr_at_1
value: 37.557
- type: mrr_at_10
value: 46.605000000000004
- type: mrr_at_100
value: 47.487
- type: mrr_at_1000
value: 47.54
- type: mrr_at_3
value: 43.721
- type: mrr_at_5
value: 45.411
- type: ndcg_at_1
value: 37.557
- type: ndcg_at_10
value: 47.449000000000005
- type: ndcg_at_100
value: 53.052
- type: ndcg_at_1000
value: 55.010999999999996
- type: ndcg_at_3
value: 41.439
- type: ndcg_at_5
value: 44.292
- type: precision_at_1
value: 37.557
- type: precision_at_10
value: 8.847
- type: precision_at_100
value: 1.357
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 20.091
- type: precision_at_5
value: 14.384
- type: recall_at_1
value: 29.976000000000003
- type: recall_at_10
value: 60.99099999999999
- type: recall_at_100
value: 84.245
- type: recall_at_1000
value: 96.97200000000001
- type: recall_at_3
value: 43.794
- type: recall_at_5
value: 51.778999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.099166666666665
- type: map_at_10
value: 38.1365
- type: map_at_100
value: 39.44491666666667
- type: map_at_1000
value: 39.55858333333334
- type: map_at_3
value: 35.03641666666666
- type: map_at_5
value: 36.79833333333334
- type: mrr_at_1
value: 33.39966666666667
- type: mrr_at_10
value: 42.42583333333333
- type: mrr_at_100
value: 43.28575
- type: mrr_at_1000
value: 43.33741666666667
- type: mrr_at_3
value: 39.94975
- type: mrr_at_5
value: 41.41633333333334
- type: ndcg_at_1
value: 33.39966666666667
- type: ndcg_at_10
value: 43.81741666666667
- type: ndcg_at_100
value: 49.08166666666667
- type: ndcg_at_1000
value: 51.121166666666674
- type: ndcg_at_3
value: 38.73575
- type: ndcg_at_5
value: 41.18158333333333
- type: precision_at_1
value: 33.39966666666667
- type: precision_at_10
value: 7.738916666666667
- type: precision_at_100
value: 1.2265833333333331
- type: precision_at_1000
value: 0.15983333333333336
- type: precision_at_3
value: 17.967416666666665
- type: precision_at_5
value: 12.78675
- type: recall_at_1
value: 28.099166666666665
- type: recall_at_10
value: 56.27049999999999
- type: recall_at_100
value: 78.93291666666667
- type: recall_at_1000
value: 92.81608333333334
- type: recall_at_3
value: 42.09775
- type: recall_at_5
value: 48.42533333333334
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.663
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.426
- type: map_at_1000
value: 31.519000000000002
- type: map_at_3
value: 28.069
- type: map_at_5
value: 29.256999999999998
- type: mrr_at_1
value: 26.687
- type: mrr_at_10
value: 33.107
- type: mrr_at_100
value: 34.055
- type: mrr_at_1000
value: 34.117999999999995
- type: mrr_at_3
value: 31.058000000000003
- type: mrr_at_5
value: 32.14
- type: ndcg_at_1
value: 26.687
- type: ndcg_at_10
value: 34.615
- type: ndcg_at_100
value: 39.776
- type: ndcg_at_1000
value: 42.05
- type: ndcg_at_3
value: 30.322
- type: ndcg_at_5
value: 32.157000000000004
- type: precision_at_1
value: 26.687
- type: precision_at_10
value: 5.491
- type: precision_at_100
value: 0.877
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.139000000000001
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.663
- type: recall_at_10
value: 45.035
- type: recall_at_100
value: 68.554
- type: recall_at_1000
value: 85.077
- type: recall_at_3
value: 32.982
- type: recall_at_5
value: 37.688
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.403
- type: map_at_10
value: 25.197000000000003
- type: map_at_100
value: 26.355
- type: map_at_1000
value: 26.487
- type: map_at_3
value: 22.733
- type: map_at_5
value: 24.114
- type: mrr_at_1
value: 21.37
- type: mrr_at_10
value: 29.091
- type: mrr_at_100
value: 30.018
- type: mrr_at_1000
value: 30.096
- type: mrr_at_3
value: 26.887
- type: mrr_at_5
value: 28.157
- type: ndcg_at_1
value: 21.37
- type: ndcg_at_10
value: 30.026000000000003
- type: ndcg_at_100
value: 35.416
- type: ndcg_at_1000
value: 38.45
- type: ndcg_at_3
value: 25.764
- type: ndcg_at_5
value: 27.742
- type: precision_at_1
value: 21.37
- type: precision_at_10
value: 5.609
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 12.423
- type: precision_at_5
value: 9.009
- type: recall_at_1
value: 17.403
- type: recall_at_10
value: 40.573
- type: recall_at_100
value: 64.818
- type: recall_at_1000
value: 86.53699999999999
- type: recall_at_3
value: 28.493000000000002
- type: recall_at_5
value: 33.660000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.639
- type: map_at_10
value: 38.951
- type: map_at_100
value: 40.238
- type: map_at_1000
value: 40.327
- type: map_at_3
value: 35.842
- type: map_at_5
value: 37.617
- type: mrr_at_1
value: 33.769
- type: mrr_at_10
value: 43.088
- type: mrr_at_100
value: 44.03
- type: mrr_at_1000
value: 44.072
- type: mrr_at_3
value: 40.656
- type: mrr_at_5
value: 42.138999999999996
- type: ndcg_at_1
value: 33.769
- type: ndcg_at_10
value: 44.676
- type: ndcg_at_100
value: 50.416000000000004
- type: ndcg_at_1000
value: 52.227999999999994
- type: ndcg_at_3
value: 39.494
- type: ndcg_at_5
value: 42.013
- type: precision_at_1
value: 33.769
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.221
- type: precision_at_5
value: 12.966
- type: recall_at_1
value: 28.639
- type: recall_at_10
value: 57.687999999999995
- type: recall_at_100
value: 82.541
- type: recall_at_1000
value: 94.896
- type: recall_at_3
value: 43.651
- type: recall_at_5
value: 49.925999999999995
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.57
- type: map_at_10
value: 40.004
- type: map_at_100
value: 41.75
- type: map_at_1000
value: 41.97
- type: map_at_3
value: 36.788
- type: map_at_5
value: 38.671
- type: mrr_at_1
value: 35.375
- type: mrr_at_10
value: 45.121
- type: mrr_at_100
value: 45.994
- type: mrr_at_1000
value: 46.04
- type: mrr_at_3
value: 42.227
- type: mrr_at_5
value: 43.995
- type: ndcg_at_1
value: 35.375
- type: ndcg_at_10
value: 46.392
- type: ndcg_at_100
value: 52.196
- type: ndcg_at_1000
value: 54.274
- type: ndcg_at_3
value: 41.163
- type: ndcg_at_5
value: 43.813
- type: precision_at_1
value: 35.375
- type: precision_at_10
value: 8.676
- type: precision_at_100
value: 1.678
- type: precision_at_1000
value: 0.253
- type: precision_at_3
value: 19.104
- type: precision_at_5
value: 13.913
- type: recall_at_1
value: 29.57
- type: recall_at_10
value: 58.779
- type: recall_at_100
value: 83.337
- type: recall_at_1000
value: 95.979
- type: recall_at_3
value: 44.005
- type: recall_at_5
value: 50.975
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.832
- type: map_at_10
value: 29.733999999999998
- type: map_at_100
value: 30.727
- type: map_at_1000
value: 30.843999999999998
- type: map_at_3
value: 26.834999999999997
- type: map_at_5
value: 28.555999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 31.791999999999998
- type: mrr_at_100
value: 32.666000000000004
- type: mrr_at_1000
value: 32.751999999999995
- type: mrr_at_3
value: 29.144
- type: mrr_at_5
value: 30.622
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.744
- type: ndcg_at_1000
value: 42.407000000000004
- type: ndcg_at_3
value: 29.421000000000003
- type: ndcg_at_5
value: 32.211
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 5.675
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 12.753999999999998
- type: precision_at_5
value: 9.353
- type: recall_at_1
value: 20.832
- type: recall_at_10
value: 48.795
- type: recall_at_100
value: 70.703
- type: recall_at_1000
value: 90.187
- type: recall_at_3
value: 34.455000000000005
- type: recall_at_5
value: 40.967
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.334
- type: map_at_10
value: 19.009999999999998
- type: map_at_100
value: 21.129
- type: map_at_1000
value: 21.328
- type: map_at_3
value: 15.152
- type: map_at_5
value: 17.084
- type: mrr_at_1
value: 23.453
- type: mrr_at_10
value: 36.099
- type: mrr_at_100
value: 37.069
- type: mrr_at_1000
value: 37.104
- type: mrr_at_3
value: 32.096000000000004
- type: mrr_at_5
value: 34.451
- type: ndcg_at_1
value: 23.453
- type: ndcg_at_10
value: 27.739000000000004
- type: ndcg_at_100
value: 35.836
- type: ndcg_at_1000
value: 39.242
- type: ndcg_at_3
value: 21.263
- type: ndcg_at_5
value: 23.677
- type: precision_at_1
value: 23.453
- type: precision_at_10
value: 9.199
- type: precision_at_100
value: 1.791
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 16.2
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 10.334
- type: recall_at_10
value: 35.177
- type: recall_at_100
value: 63.009
- type: recall_at_1000
value: 81.938
- type: recall_at_3
value: 19.914
- type: recall_at_5
value: 26.077
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.212
- type: map_at_10
value: 17.386
- type: map_at_100
value: 24.234
- type: map_at_1000
value: 25.724999999999998
- type: map_at_3
value: 12.727
- type: map_at_5
value: 14.785
- type: mrr_at_1
value: 59.25
- type: mrr_at_10
value: 68.687
- type: mrr_at_100
value: 69.133
- type: mrr_at_1000
value: 69.14099999999999
- type: mrr_at_3
value: 66.917
- type: mrr_at_5
value: 67.742
- type: ndcg_at_1
value: 48.625
- type: ndcg_at_10
value: 36.675999999999995
- type: ndcg_at_100
value: 41.543
- type: ndcg_at_1000
value: 49.241
- type: ndcg_at_3
value: 41.373
- type: ndcg_at_5
value: 38.707
- type: precision_at_1
value: 59.25
- type: precision_at_10
value: 28.525
- type: precision_at_100
value: 9.027000000000001
- type: precision_at_1000
value: 1.8339999999999999
- type: precision_at_3
value: 44.833
- type: precision_at_5
value: 37.35
- type: recall_at_1
value: 8.212
- type: recall_at_10
value: 23.188
- type: recall_at_100
value: 48.613
- type: recall_at_1000
value: 73.093
- type: recall_at_3
value: 14.419
- type: recall_at_5
value: 17.798
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.725
- type: f1
value: 46.50743309855908
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.086
- type: map_at_10
value: 66.914
- type: map_at_100
value: 67.321
- type: map_at_1000
value: 67.341
- type: map_at_3
value: 64.75800000000001
- type: map_at_5
value: 66.189
- type: mrr_at_1
value: 59.28600000000001
- type: mrr_at_10
value: 71.005
- type: mrr_at_100
value: 71.304
- type: mrr_at_1000
value: 71.313
- type: mrr_at_3
value: 69.037
- type: mrr_at_5
value: 70.35
- type: ndcg_at_1
value: 59.28600000000001
- type: ndcg_at_10
value: 72.695
- type: ndcg_at_100
value: 74.432
- type: ndcg_at_1000
value: 74.868
- type: ndcg_at_3
value: 68.72200000000001
- type: ndcg_at_5
value: 71.081
- type: precision_at_1
value: 59.28600000000001
- type: precision_at_10
value: 9.499
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 27.503
- type: precision_at_5
value: 17.854999999999997
- type: recall_at_1
value: 55.086
- type: recall_at_10
value: 86.453
- type: recall_at_100
value: 94.028
- type: recall_at_1000
value: 97.052
- type: recall_at_3
value: 75.821
- type: recall_at_5
value: 81.6
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.262999999999998
- type: map_at_10
value: 37.488
- type: map_at_100
value: 39.498
- type: map_at_1000
value: 39.687
- type: map_at_3
value: 32.529
- type: map_at_5
value: 35.455
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.239000000000004
- type: mrr_at_100
value: 54.086
- type: mrr_at_1000
value: 54.122
- type: mrr_at_3
value: 51.235
- type: mrr_at_5
value: 52.415
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 45.446
- type: ndcg_at_100
value: 52.429
- type: ndcg_at_1000
value: 55.169000000000004
- type: ndcg_at_3
value: 41.882000000000005
- type: ndcg_at_5
value: 43.178
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.931999999999999
- type: precision_at_100
value: 2.025
- type: precision_at_1000
value: 0.248
- type: precision_at_3
value: 28.652
- type: precision_at_5
value: 21.204
- type: recall_at_1
value: 22.262999999999998
- type: recall_at_10
value: 52.447
- type: recall_at_100
value: 78.045
- type: recall_at_1000
value: 94.419
- type: recall_at_3
value: 38.064
- type: recall_at_5
value: 44.769
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.519
- type: map_at_10
value: 45.831
- type: map_at_100
value: 46.815
- type: map_at_1000
value: 46.899
- type: map_at_3
value: 42.836
- type: map_at_5
value: 44.65
- type: mrr_at_1
value: 65.037
- type: mrr_at_10
value: 72.16
- type: mrr_at_100
value: 72.51100000000001
- type: mrr_at_1000
value: 72.53
- type: mrr_at_3
value: 70.682
- type: mrr_at_5
value: 71.54599999999999
- type: ndcg_at_1
value: 65.037
- type: ndcg_at_10
value: 55.17999999999999
- type: ndcg_at_100
value: 58.888
- type: ndcg_at_1000
value: 60.648
- type: ndcg_at_3
value: 50.501
- type: ndcg_at_5
value: 52.977
- type: precision_at_1
value: 65.037
- type: precision_at_10
value: 11.530999999999999
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 31.483
- type: precision_at_5
value: 20.845
- type: recall_at_1
value: 32.519
- type: recall_at_10
value: 57.657000000000004
- type: recall_at_100
value: 72.30199999999999
- type: recall_at_1000
value: 84.024
- type: recall_at_3
value: 47.225
- type: recall_at_5
value: 52.113
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.3168
- type: ap
value: 83.80165516037135
- type: f1
value: 88.29942471066407
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 20.724999999999998
- type: map_at_10
value: 32.736
- type: map_at_100
value: 33.938
- type: map_at_1000
value: 33.991
- type: map_at_3
value: 28.788000000000004
- type: map_at_5
value: 31.016
- type: mrr_at_1
value: 21.361
- type: mrr_at_10
value: 33.323
- type: mrr_at_100
value: 34.471000000000004
- type: mrr_at_1000
value: 34.518
- type: mrr_at_3
value: 29.453000000000003
- type: mrr_at_5
value: 31.629
- type: ndcg_at_1
value: 21.361
- type: ndcg_at_10
value: 39.649
- type: ndcg_at_100
value: 45.481
- type: ndcg_at_1000
value: 46.775
- type: ndcg_at_3
value: 31.594
- type: ndcg_at_5
value: 35.543
- type: precision_at_1
value: 21.361
- type: precision_at_10
value: 6.3740000000000006
- type: precision_at_100
value: 0.931
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.514999999999999
- type: precision_at_5
value: 10.100000000000001
- type: recall_at_1
value: 20.724999999999998
- type: recall_at_10
value: 61.034
- type: recall_at_100
value: 88.062
- type: recall_at_1000
value: 97.86399999999999
- type: recall_at_3
value: 39.072
- type: recall_at_5
value: 48.53
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.8919288645691
- type: f1
value: 93.57059586398059
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.97993616051072
- type: f1
value: 48.244319183606535
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.90047074646941
- type: f1
value: 66.48999056063725
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.34566240753195
- type: f1
value: 73.54164154290658
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.21866934757011
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.000936217235534
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.68189362520352
- type: mrr
value: 32.69603637784303
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.078
- type: map_at_10
value: 12.671
- type: map_at_100
value: 16.291
- type: map_at_1000
value: 17.855999999999998
- type: map_at_3
value: 9.610000000000001
- type: map_at_5
value: 11.152
- type: mrr_at_1
value: 43.963
- type: mrr_at_10
value: 53.173
- type: mrr_at_100
value: 53.718999999999994
- type: mrr_at_1000
value: 53.756
- type: mrr_at_3
value: 50.980000000000004
- type: mrr_at_5
value: 52.42
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.086
- type: ndcg_at_100
value: 32.545
- type: ndcg_at_1000
value: 41.144999999999996
- type: ndcg_at_3
value: 39.434999999999995
- type: ndcg_at_5
value: 37.888
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.014999999999997
- type: precision_at_100
value: 8.594
- type: precision_at_1000
value: 2.169
- type: precision_at_3
value: 37.049
- type: precision_at_5
value: 33.065
- type: recall_at_1
value: 6.078
- type: recall_at_10
value: 16.17
- type: recall_at_100
value: 34.512
- type: recall_at_1000
value: 65.447
- type: recall_at_3
value: 10.706
- type: recall_at_5
value: 13.158
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.378000000000004
- type: map_at_10
value: 42.178
- type: map_at_100
value: 43.32
- type: map_at_1000
value: 43.358000000000004
- type: map_at_3
value: 37.474000000000004
- type: map_at_5
value: 40.333000000000006
- type: mrr_at_1
value: 30.823
- type: mrr_at_10
value: 44.626
- type: mrr_at_100
value: 45.494
- type: mrr_at_1000
value: 45.519
- type: mrr_at_3
value: 40.585
- type: mrr_at_5
value: 43.146
- type: ndcg_at_1
value: 30.794
- type: ndcg_at_10
value: 50.099000000000004
- type: ndcg_at_100
value: 54.900999999999996
- type: ndcg_at_1000
value: 55.69499999999999
- type: ndcg_at_3
value: 41.238
- type: ndcg_at_5
value: 46.081
- type: precision_at_1
value: 30.794
- type: precision_at_10
value: 8.549
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 18.926000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 27.378000000000004
- type: recall_at_10
value: 71.842
- type: recall_at_100
value: 92.565
- type: recall_at_1000
value: 98.402
- type: recall_at_3
value: 49.053999999999995
- type: recall_at_5
value: 60.207
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.557
- type: map_at_10
value: 84.729
- type: map_at_100
value: 85.369
- type: map_at_1000
value: 85.382
- type: map_at_3
value: 81.72
- type: map_at_5
value: 83.613
- type: mrr_at_1
value: 81.3
- type: mrr_at_10
value: 87.488
- type: mrr_at_100
value: 87.588
- type: mrr_at_1000
value: 87.589
- type: mrr_at_3
value: 86.53
- type: mrr_at_5
value: 87.18599999999999
- type: ndcg_at_1
value: 81.28999999999999
- type: ndcg_at_10
value: 88.442
- type: ndcg_at_100
value: 89.637
- type: ndcg_at_1000
value: 89.70700000000001
- type: ndcg_at_3
value: 85.55199999999999
- type: ndcg_at_5
value: 87.154
- type: precision_at_1
value: 81.28999999999999
- type: precision_at_10
value: 13.489999999999998
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.708
- type: recall_at_1
value: 70.557
- type: recall_at_10
value: 95.645
- type: recall_at_100
value: 99.693
- type: recall_at_1000
value: 99.995
- type: recall_at_3
value: 87.359
- type: recall_at_5
value: 91.89699999999999
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 63.65060114776209
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.63271250680617
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.263
- type: map_at_10
value: 10.801
- type: map_at_100
value: 12.888
- type: map_at_1000
value: 13.224
- type: map_at_3
value: 7.362
- type: map_at_5
value: 9.149000000000001
- type: mrr_at_1
value: 21
- type: mrr_at_10
value: 31.416
- type: mrr_at_100
value: 32.513
- type: mrr_at_1000
value: 32.58
- type: mrr_at_3
value: 28.116999999999997
- type: mrr_at_5
value: 29.976999999999997
- type: ndcg_at_1
value: 21
- type: ndcg_at_10
value: 18.551000000000002
- type: ndcg_at_100
value: 26.657999999999998
- type: ndcg_at_1000
value: 32.485
- type: ndcg_at_3
value: 16.834
- type: ndcg_at_5
value: 15.204999999999998
- type: precision_at_1
value: 21
- type: precision_at_10
value: 9.84
- type: precision_at_100
value: 2.16
- type: precision_at_1000
value: 0.35500000000000004
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 13.62
- type: recall_at_1
value: 4.263
- type: recall_at_10
value: 19.922
- type: recall_at_100
value: 43.808
- type: recall_at_1000
value: 72.14500000000001
- type: recall_at_3
value: 9.493
- type: recall_at_5
value: 13.767999999999999
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 81.27446313317233
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 76.27963301217527
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 88.18495048450949
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 81.91982338692046
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 89.00896818385291
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 85.48814644586132
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 90.30116926966582
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 67.74132963032342
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 86.87741355780479
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.0019012295875
- type: mrr
value: 94.70267024188593
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 50.05
- type: map_at_10
value: 59.36
- type: map_at_100
value: 59.967999999999996
- type: map_at_1000
value: 60.023
- type: map_at_3
value: 56.515
- type: map_at_5
value: 58.272999999999996
- type: mrr_at_1
value: 53
- type: mrr_at_10
value: 61.102000000000004
- type: mrr_at_100
value: 61.476
- type: mrr_at_1000
value: 61.523
- type: mrr_at_3
value: 58.778
- type: mrr_at_5
value: 60.128
- type: ndcg_at_1
value: 53
- type: ndcg_at_10
value: 64.43100000000001
- type: ndcg_at_100
value: 66.73599999999999
- type: ndcg_at_1000
value: 68.027
- type: ndcg_at_3
value: 59.279
- type: ndcg_at_5
value: 61.888
- type: precision_at_1
value: 53
- type: precision_at_10
value: 8.767
- type: precision_at_100
value: 1.01
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 23.444000000000003
- type: precision_at_5
value: 15.667
- type: recall_at_1
value: 50.05
- type: recall_at_10
value: 78.511
- type: recall_at_100
value: 88.5
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 64.117
- type: recall_at_5
value: 70.867
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72178217821782
- type: cos_sim_ap
value: 93.0728601593541
- type: cos_sim_f1
value: 85.6727976766699
- type: cos_sim_precision
value: 83.02063789868667
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.72178217821782
- type: dot_ap
value: 93.07287396168348
- type: dot_f1
value: 85.6727976766699
- type: dot_precision
value: 83.02063789868667
- type: dot_recall
value: 88.5
- type: euclidean_accuracy
value: 99.72178217821782
- type: euclidean_ap
value: 93.07285657982895
- type: euclidean_f1
value: 85.6727976766699
- type: euclidean_precision
value: 83.02063789868667
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.72475247524753
- type: manhattan_ap
value: 93.02792973059809
- type: manhattan_f1
value: 85.7727737973388
- type: manhattan_precision
value: 87.84067085953879
- type: manhattan_recall
value: 83.8
- type: max_accuracy
value: 99.72475247524753
- type: max_ap
value: 93.07287396168348
- type: max_f1
value: 85.7727737973388
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.77583615550819
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.151636938606956
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.16607939471187
- type: mrr
value: 52.95172046091163
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.314646669495666
- type: cos_sim_spearman
value: 31.83562491439455
- type: dot_pearson
value: 31.314590842874157
- type: dot_spearman
value: 31.83363065810437
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.3010000000000002
- type: map_at_100
value: 7.2139999999999995
- type: map_at_1000
value: 20.179
- type: map_at_3
value: 0.528
- type: map_at_5
value: 0.8019999999999999
- type: mrr_at_1
value: 72
- type: mrr_at_10
value: 83.39999999999999
- type: mrr_at_100
value: 83.39999999999999
- type: mrr_at_1000
value: 83.39999999999999
- type: mrr_at_3
value: 81.667
- type: mrr_at_5
value: 83.06700000000001
- type: ndcg_at_1
value: 66
- type: ndcg_at_10
value: 58.059000000000005
- type: ndcg_at_100
value: 44.316
- type: ndcg_at_1000
value: 43.147000000000006
- type: ndcg_at_3
value: 63.815999999999995
- type: ndcg_at_5
value: 63.005
- type: precision_at_1
value: 72
- type: precision_at_10
value: 61.4
- type: precision_at_100
value: 45.62
- type: precision_at_1000
value: 19.866
- type: precision_at_3
value: 70
- type: precision_at_5
value: 68.8
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.517
- type: recall_at_100
value: 10.587
- type: recall_at_1000
value: 41.233
- type: recall_at_3
value: 0.573
- type: recall_at_5
value: 0.907
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.894
- type: map_at_10
value: 8.488999999999999
- type: map_at_100
value: 14.445
- type: map_at_1000
value: 16.078
- type: map_at_3
value: 4.589
- type: map_at_5
value: 6.019
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 39.82
- type: mrr_at_100
value: 40.752
- type: mrr_at_1000
value: 40.771
- type: mrr_at_3
value: 34.354
- type: mrr_at_5
value: 37.721
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 21.563
- type: ndcg_at_100
value: 33.857
- type: ndcg_at_1000
value: 46.199
- type: ndcg_at_3
value: 22.296
- type: ndcg_at_5
value: 21.770999999999997
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.142999999999999
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 22.448999999999998
- type: recall_at_1
value: 1.894
- type: recall_at_10
value: 14.931
- type: recall_at_100
value: 45.524
- type: recall_at_1000
value: 83.243
- type: recall_at_3
value: 5.712
- type: recall_at_5
value: 8.386000000000001
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.049
- type: ap
value: 13.85116971310922
- type: f1
value: 54.37504302487686
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.1312959818902
- type: f1
value: 64.11413877009383
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.13103431861502
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.327889372355
- type: cos_sim_ap
value: 77.42059895975699
- type: cos_sim_f1
value: 71.02706903250873
- type: cos_sim_precision
value: 69.75324344950394
- type: cos_sim_recall
value: 72.34828496042216
- type: dot_accuracy
value: 87.327889372355
- type: dot_ap
value: 77.4209479346677
- type: dot_f1
value: 71.02706903250873
- type: dot_precision
value: 69.75324344950394
- type: dot_recall
value: 72.34828496042216
- type: euclidean_accuracy
value: 87.327889372355
- type: euclidean_ap
value: 77.42096495861037
- type: euclidean_f1
value: 71.02706903250873
- type: euclidean_precision
value: 69.75324344950394
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.31000774870358
- type: manhattan_ap
value: 77.38930750711619
- type: manhattan_f1
value: 71.07935314027831
- type: manhattan_precision
value: 67.70957726295677
- type: manhattan_recall
value: 74.80211081794195
- type: max_accuracy
value: 87.327889372355
- type: max_ap
value: 77.42096495861037
- type: max_f1
value: 71.07935314027831
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.58939729110878
- type: cos_sim_ap
value: 87.17594155025475
- type: cos_sim_f1
value: 79.21146953405018
- type: cos_sim_precision
value: 76.8918527109307
- type: cos_sim_recall
value: 81.67539267015707
- type: dot_accuracy
value: 89.58939729110878
- type: dot_ap
value: 87.17593963273593
- type: dot_f1
value: 79.21146953405018
- type: dot_precision
value: 76.8918527109307
- type: dot_recall
value: 81.67539267015707
- type: euclidean_accuracy
value: 89.58939729110878
- type: euclidean_ap
value: 87.17592466925834
- type: euclidean_f1
value: 79.21146953405018
- type: euclidean_precision
value: 76.8918527109307
- type: euclidean_recall
value: 81.67539267015707
- type: manhattan_accuracy
value: 89.62626615438352
- type: manhattan_ap
value: 87.16589873161546
- type: manhattan_f1
value: 79.25143598295348
- type: manhattan_precision
value: 76.39494177323712
- type: manhattan_recall
value: 82.32984293193716
- type: max_accuracy
value: 89.62626615438352
- type: max_ap
value: 87.17594155025475
- type: max_f1
value: 79.25143598295348
duplicated_from: hkunlp/instructor-large
---
# hkunlp/instructor-large
We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domain (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨‍🏫 achieves state-of-the-art performance on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))!
The model is easy to use with **our customized** `sentence-transformers` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!
**************************** **Updates** ****************************
* 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out!
## Quick start
<hr />
## Installation
```bash
pip install InstructorEmbedding
```
## Compute your customized embeddings
Then you can use the model like this to calculate domain-specific and task-aware embeddings:
```python
from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-large')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction,sentence]])
print(embeddings)
```
## Use cases
<hr />
## Calculate embeddings for your customized texts
If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
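For example, to embed a financial news headline for retrieval, the instruction could be composed like this (the headline and instruction wording here are illustrative):
```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')
# domain = "Financial", text_type = "news headline", task_objective = "retrieving related articles"
instruction = "Represent the Financial news headline for retrieving related articles:"
embedding = model.encode([[instruction, "Markets rallied after the central bank held rates steady."]])
print(embedding.shape)  # one 768-dimensional embedding
```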
## Calculate Sentence similarities
You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
```python
from sklearn.metrics.pairwise import cosine_similarity
sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'],
['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'],
['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']]
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a,embeddings_b)
print(similarities)
```
## Information Retrieval
You can also use **customized embeddings** for information retrieval.
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings,corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```
## Clustering
Use **customized embeddings** for clustering texts in groups.
```python
import sklearn.cluster
sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'],
['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"],
['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
```
|
chh6/v0TaxiAttempt
|
chh6
| 2023-07-12T19:47:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T19:47:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: v0TaxiAttempt
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="chh6/v0TaxiAttempt", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
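Building on the snippet above, here is a minimal greedy-evaluation sketch, assuming the pickled dictionary exposes a `qtable` array as in the Deep RL course template (that key name is an assumption if your checkpoint differs):
```python
import numpy as np

state, info = env.reset()  # older gym versions return only the state here
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)  # 5-tuple on gym>=0.26
    done = terminated or truncated
    total_reward += reward
print(total_reward)
```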
|
veluchs/dqn-SpaceInvadersNoFrameskip-v4-4
|
veluchs
| 2023-07-12T19:41:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T19:40:57Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 264.50 +/- 87.36
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga veluchs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga veluchs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga veluchs
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
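## Usage (without the RL Zoo)
The checkpoint can also be loaded directly with Stable-Baselines3. This is an untested sketch; the filename inside the repo is an assumption based on the usual RL Zoo naming scheme (`<algo>-<env>.zip`).
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# filename is an assumption; adjust it to the actual file in the repo
checkpoint = load_from_hub(
    repo_id="veluchs/dqn-SpaceInvadersNoFrameskip-v4-4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```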
|
ImanAndrea/BART-eng-to-ro
|
ImanAndrea
| 2023-07-12T19:33:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16-en-ro-pre-processed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T06:56:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16-en-ro-pre-processed
model-index:
- name: BART-eng-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART-eng-to-ro
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the wmt16-en-ro-pre-processed dataset.
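A minimal inference sketch (not part of the original card; it assumes the checkpoint works with the standard `text2text-generation` pipeline):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="ImanAndrea/BART-eng-to-ro")
print(translator("The weather is nice today.", max_length=64)[0]["generated_text"])
```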
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.10.1
- Tokenizers 0.13.3
|
vuiseng9/baseline-ft-mrpc-IRoberta-b-8bit
|
vuiseng9
| 2023-07-12T19:21:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"ibert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T18:39:16Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: baseline-ft-mrpc-IRoberta-b-8bit
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8970588235294118
- name: F1
type: f1
value: 0.9257950530035336
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline-ft-mrpc-IRoberta-b-8bit
This model is a fine-tuned version of [vuiseng9/baseline-ft-mrpc-IRoberta-b-unquantized](https://huggingface.co/vuiseng9/baseline-ft-mrpc-IRoberta-b-unquantized) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3871
- Accuracy: 0.8971
- F1: 0.9258
- Combined Score: 0.9114
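A minimal paraphrase-detection sketch (not from the original card; it assumes the checkpoint loads through the standard pipeline — I-BERT is supported natively by Transformers):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="vuiseng9/baseline-ft-mrpc-IRoberta-b-8bit")
# MRPC is a sentence-pair task, so pass text and text_pair together
print(clf({"text": "The company reported strong earnings.",
           "text_pair": "Strong earnings were reported by the company."}))
```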
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.0021 | 1.0 | 230 | 0.4017 | 0.8848 | 0.9147 | 0.8998 |
| 0.0026 | 2.0 | 460 | 0.4105 | 0.8873 | 0.9173 | 0.9023 |
| 0.0026 | 3.0 | 690 | 0.3707 | 0.8946 | 0.9236 | 0.9091 |
| 0.0037 | 4.0 | 920 | 0.3893 | 0.8946 | 0.9228 | 0.9087 |
| 1.324 | 5.0 | 1150 | 0.3871 | 0.8897 | 0.9204 | 0.9050 |
| 0.0227 | 6.0 | 1380 | 0.3951 | 0.8897 | 0.9201 | 0.9049 |
| 0.0081 | 7.0 | 1610 | 0.3818 | 0.8824 | 0.9155 | 0.8989 |
| 0.0054 | 8.0 | 1840 | 0.3902 | 0.8873 | 0.9181 | 0.9027 |
| 0.0383 | 9.0 | 2070 | 0.3659 | 0.8922 | 0.9225 | 0.9073 |
| 0.3861 | 10.0 | 2300 | 0.4260 | 0.8652 | 0.9030 | 0.8841 |
| 0.0028 | 11.0 | 2530 | 0.3619 | 0.8946 | 0.9234 | 0.9090 |
| 0.0957 | 12.0 | 2760 | 0.3871 | 0.8971 | 0.9258 | 0.9114 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
4bit/WizardLM-13B-V1.1-GPTQ
|
4bit
| 2023-07-12T19:06:41Z | 8 | 3 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2304.12244",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-12T19:03:12Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# WizardLM's WizardLM 13B V1.1 GPTQ
These files are GPTQ model files for [WizardLM's WizardLM 13B V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: {prompt}
ASSISTANT:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/WizardLM-13B-V1.1-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-13B-V1.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/WizardLM-13B-V1.1-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `WizardLM-13B-V1.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/WizardLM-13B-V1.1-GPTQ"
model_basename = "wizardlm-13b-v1.1-GPTQ-4bit-128g.no-act.order"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: WizardLM's WizardLM 13B V1.1
This is the **Full-Weight** of WizardLM-13B V1.1 model.
**Repository**: https://github.com/nlpxucan/WizardLM
**Twitter**: https://twitter.com/WizardLM_AI/status/1677282955490918401
- 🔥🔥🔥 [7/7/2023] We released **WizardLM V1.1** models. The **WizardLM-13B-V1.1** is here ([Demo_13B-V1.1](https://e8a06366ccd1c4d1.gradio.app), [Demo_13B-V1.1_bak-1](https://59da107262a25764.gradio.app), [Demo_13B-V1.1_bak-2](https://dfc5113f66739c80.gradio.app), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)). **WizardLM-7B-V1.1**, **WizardLM-30B-V1.1**, and **WizardLM-65B-V1.1** are coming soon. Please checkout the [Full Model Weights](https://huggingface.co/WizardLM) and [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/7/2023] The **WizardLM-13B-V1.1** achieves **6.74** on [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **86.32%** on [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **99.3%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings.)
|
ramymohamed/ppo-LunarLander-v2
|
ramymohamed
| 2023-07-12T19:05:33Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T19:05:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.71 +/- 24.27
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is an assumption based on the usual naming scheme; adjust as needed
checkpoint = load_from_hub("ramymohamed/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
gmurillo/set-fit-goup-6-f
|
gmurillo
| 2023-07-12T19:00:20Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bart",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-12T18:59:06Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# gmurillo/set-fit-goup-6-f
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gmurillo/set-fit-goup-6-f")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
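The two-stage recipe above can be reproduced with `SetFitTrainer`. The sketch below is illustrative only: the base model and the tiny labelled set are assumptions, not the data this checkpoint was trained on.
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["great product", "terrible support", "works as expected", "never buying again"],
    "label": [1, 0, 1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds, num_iterations=20)
trainer.train()  # stage 1: contrastive fine-tuning; stage 2: fit the classification head
preds = trainer.model(["solid build quality"])
```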
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gmurillo/set-fit-goup-4-f
|
gmurillo
| 2023-07-12T18:57:16Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bart",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-12T18:55:50Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# gmurillo/set-fit-goup-4-f
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gmurillo/set-fit-goup-4-f")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gmurillo/set-fit-goup-1-f
|
gmurillo
| 2023-07-12T18:51:50Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bart",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-12T18:50:32Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# gmurillo/set-fit-goup-1-f
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gmurillo/set-fit-goup-1-f")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
komo-dono/dashiegames
|
komo-dono
| 2023-07-12T18:44:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-12T18:42:38Z |
---
license: openrail
language:
- en
tags:
- music
---
dashiegames 500 epoch
|
tyavika/LR1E4-BS16-Distilbert-QA-Pytorch-FULL
|
tyavika
| 2023-07-12T18:39:38Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-07T04:59:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS16-Distilbert-QA-Pytorch-FULL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E4-BS16-Distilbert-QA-Pytorch-FULL
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3888
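A minimal extractive-QA sketch (not from the original card; it assumes the standard pipeline loads this checkpoint):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="tyavika/LR1E4-BS16-Distilbert-QA-Pytorch-FULL")
print(qa(question="Where does Tim live?", context="My name is Tim and I live in Sweden."))
```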
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4071 | 1.0 | 3290 | 1.2792 |
| 1.0123 | 2.0 | 6580 | 1.2843 |
| 0.6916 | 3.0 | 9870 | 1.3888 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vuiseng9/baseline-ft-mrpc-IRoberta-b-unquantized
|
vuiseng9
| 2023-07-12T18:33:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"ibert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T18:24:52Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: baseline-ft-mrpc-IRoberta-b-unquantized
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8995098039215687
- name: F1
type: f1
value: 0.9266547406082289
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline-ft-mrpc-IRoberta-b-unquantized
This model is a fine-tuned version of [kssteven/ibert-roberta-base](https://huggingface.co/kssteven/ibert-roberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5354
- Accuracy: 0.8995
- F1: 0.9267
- Combined Score: 0.9131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.1212 | 1.0 | 230 | 0.3401 | 0.8799 | 0.9136 | 0.8967 |
| 0.0347 | 2.0 | 460 | 0.3085 | 0.8676 | 0.9059 | 0.8868 |
| 0.0495 | 3.0 | 690 | 0.3552 | 0.8848 | 0.9174 | 0.9011 |
| 0.0024 | 4.0 | 920 | 0.4960 | 0.8824 | 0.9158 | 0.8991 |
| 0.0046 | 5.0 | 1150 | 0.5354 | 0.8995 | 0.9267 | 0.9131 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Taki3d/CrackDetectionLowRes
|
Taki3d
| 2023-07-12T18:24:18Z | 202 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-11T22:00:17Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: CrackDetectionLowRes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9940476190476191
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CrackDetectionLowRes
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9940
- Loss: 0.0183
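A minimal inference sketch (not from the original card; the image path is a placeholder):
```python
from transformers import pipeline

clf = pipeline("image-classification", model="Taki3d/CrackDetectionLowRes")
print(clf("path/to/surface_photo.jpg"))  # placeholder path to a local image
```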
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.0126 | 1.0 | 992 | 0.9879 | 0.0344 |
| 0.0788 | 2.0 | 1904 | 0.9933 | 0.0220 |
| 0.1336 | 3.0 | 2856 | 0.9933 | 0.0222 |
| 0.0066 | 4.0 | 3808 | 0.9933 | 0.0190 |
| 0.0528 | 5.0 | 4760 | 0.9940 | 0.0183 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hsultanbey/autocomplete_trainer
|
hsultanbey
| 2023-07-12T18:23:42Z | 143 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T18:22:39Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: autocomplete_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# autocomplete_trainer
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
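A minimal generation sketch (the prompt is illustrative; that the model targets code autocompletion is inferred from its name, not stated in the card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="hsultanbey/autocomplete_trainer")
print(generator("def fibonacci(n):", max_new_tokens=40)[0]["generated_text"])
```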
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
wenbowen/BundleSDF
|
wenbowen
| 2023-07-12T18:21:31Z | 0 | 0 | null |
[
"arxiv:2303.14158",
"region:us"
] | null | 2023-07-12T18:19:41Z |
Paper https://arxiv.org/abs/2303.14158
Project page https://bundlesdf.github.io/
|
asrimanth/person-thumbs-up-lora
|
asrimanth
| 2023-07-12T18:19:11Z | 2 | 3 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-12T18:18:41Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - asrimanth/person-thumbs-up-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on a custom dataset. You can find some example images below.




|
Danish-summarisation/DanSumT5-pilot
|
Danish-summarisation
| 2023-07-12T18:12:28Z | 122 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"da",
"arxiv:1804.11283",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-07-05T10:06:53Z |
---
language:
- da
tags:
- summarization
widget:
- text: "De strejkende SAS-piloter melder sig nu klar til gøre en undtagelse fra strejken for at hente strandede chartergæster hjem fra flere ferieområder.
Undtagelsen skal gælde nogle uger frem, men piloterne vil under ingen omstændigheder have nye gæster med sig ned til de samme destinationer.
Det skriver SAS Pilot Group i en pressemeddelelse.
- Vi forstår, at det er uundgåeligt, at vores passagerer bliver ramt af strejken. Men vi piloter er altid fokuseret på at opføre os ansvarligt med passagersikkerheden som højeste prioritet, siger Martin Lindgren, der er formand for SAS Pilot Group i Norden.
Men for at hjælpe strandede gæster kræver de strejkende piloter samtidig, at SAS' trækker sin lockout af piloterne tilbage.
Samtidig ser SAS Pilot Group det som en forudsætning, at SAS ikke får hjælp fra andre flyselskaber til at flyve nye passagerer til de samme destinationer, som piloterne tilbyder at flyve gæster hjem fra, skriver fagforeningen."
example_title: "Example 1"
- text: "Mere end 21.000 krigsforbrydelser. Så mange efterforsker de ukrainske myndigheder lige nu ifølge den ukrainske rigsadvokat, Iryna Venediktova.
Hun oplyser til britiske BBC, at der bliver anmeldt mellem 200 og 300 nye sager om dagen.
Forbrydelserne er ifølge Venediktova svære at efterforske, fordi det kan være vanskeligt at komme frem til de relevante områder og mennesker.
Men hun understreger overfor BBC, at russiske soldater, der har dræbt, tortureret eller voldtaget civile, bør forstå, at det kun er et spørgsmål om tid, før de alle vil komme for retten.
Rusland er blevet anklaget for en lang række krigsforbrydelser, siden landet invaderede Ukraine den 24. februar, men afviser alle anklager."
example_title: "Example 2"
- text: "Det nye studie Cognitive Science på Aarhus Universitet, som i år havde Østjyllands højeste adgangskrav på 11,7 i karaktergennemsnit, udklækker det første hold bachelorer til sommer.
Men når de skal læse videre på kandidaten må de til udlandet, hvis ikke de vil skifte til et andet fag. Aarhus Universitet kan nemlig ikke nå at oprette en kandidat i Cognitive Science til næste sommer, hvor det første hold bachelorer er færdige.
Det rammer blandt andre Julie Sohn, der startede på uddannelsen i sommeren 2015, og derfor kun mangler et år, før hun er bachelor.
- Jeg synes, at det er ærgerligt, at vi som nye studerende på et populært studie ikke kan tage en kandidat i Danmark, siger hun.
Bacheloruddannelsen i Cognitive Science blev oprettet af Aarhus Universitet i 2015, og uddannelsen kombinerer viden om menneskelig adfærd med avanceret statistik. Da der endnu ikke er oprettet en kandidatuddannelse indenfor dette område, har Julie Sohn i stedet mulighed for at læse en kandidatgrad i for eksempel informationsvidenskab.
Hun vil dog hellere fortsætte på Cognitive Science, og derfor overvejer hun nu at læse videre i udlandet.
- Det ser ud til, at det er den eneste mulighed, hvis man gerne vil læse videre på noget, der faktisk passer ind til vores studie, siger hun.
Nye regler giver forsinkelse
På Aarhus Universitet havde man håbet på at have kandidatuddannelsen klar, når det første hold bachelorer bliver færdige til sommer. Arbejdet er dog blevet forsinket, fordi der er kommet nye regler for, hvornår man må oprette en uddannelse, fortæller Niels Lehmann, prodekan på fakultetet Arts, som Cognitive Science hører under.
Det er nogle meget dygtige studerende, der kommer ind på uddannelsen, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast.
NIELS LEHMANN, PRODEKAN, AARHUS UNIVERSITET
Tidligere skulle Danmarks Akkrediteringsinstitution se alle nye uddannelser efter i sømmene for at sikre, at kvaliteten var i orden. Nu skal uddannelsesinstitutionerne selv stå for det kvalitetstjek.
Men det tjek har Aarhus Universitet endnu ikke fået grønt lys til selv at udføre, fortæller prodekanen.
- Vi ville meget gerne have kunnet nå at få et udbud på kandidaten i gang i 2018, men så længe man er under institutionsakkreditering, så kan man ikke ansøge om nye uddannelser, siger han.
Det er endnu usikkert, hvornår Aarhus Universitet kan oprette kandidaten i Cognitive Science. Hvis de får alle de nødvendige godkendelser, kan den tidligst være klar i 2019.
Prodekan Niels Lehmann frygter, at Danmark kommer til at miste nogle af landets skarpeste studerende, hvis de er nødt til at rejse til udlandet for at gøre deres uddannelse færdig.
- Det er nogle meget, meget dygtige studerende, der kommer ind på denne uddannelse, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast, siger han.
Hos Danmarks Akkrediteringsinstitution forstår man godt, at universitets ansatte og studenrede ærgrer sig.
- Jeg kan godt forstå, at Aarhus Universitet ærgrer sig over, at det trækker ud, og at der går noget tid, før man får mulighed for at oprette nye uddannelser, og at man ikke har fået den genvej til at oprette nye uddannelser, som ville være fuldt med, hvis man havde opnået en positiv institutionsakkreditering, siger kommunikationsansvarlig Daniel Sebastian Larsen.
I år var Cognitive Science i Aarhus den uddannelse i Danmark, der havde det fjerde højeste karakterkrav - det højeste var 'AP Graduate in Marketing Management' på Erhvervsakademi Sjælland med et krav på 12,3."
example_title: "Example 3"
---
# mT5-base fine-tuned for News article Summarisation ✏️🧾
[Google's mT5](https://aclanthology.org/2021.naacl-main.41/) for **summarisation** downstream task.
# Model summary
This repository contains a model for Danish abstractive summarisation of news articles. The summariser is based on a language-specific mT5-base, where the vocabulary is condensed to include tokens used in Danish and English. The model is fine-tuned using an abstractive subset of the DaNewsroom dataset (Varab & Schluter, 2020), according to the binned density categories employed in Newsroom (Grusky et al., 2018).
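# Usage
A minimal summarisation sketch (not part of the original card; the generation lengths are illustrative):
```python
from transformers import pipeline

summariser = pipeline("summarization", model="Danish-summarisation/DanSumT5-pilot")
article = "De strejkende SAS-piloter melder sig nu klar til ..."  # e.g. widget example 1 above
print(summariser(article, max_length=128, min_length=30)[0]["summary_text"])
```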
# References
Grusky, M., Naaman, M., & Artzi, Y. (2018). Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. ArXiv:1804.11283 [Cs]. http://arxiv.org/abs/1804.11283
Varab, D., & Schluter, N. (2020). DaNewsroom: A Large-scale Danish Summarisation Dataset. Proceedings of the 12th Language Resources and Evaluation Conference, 6731–6739. https://aclanthology.org/2020.lrec-1.831
|
arstep/q-FrozenLake-v1-4x4-noSlippery
|
arstep
| 2023-07-12T18:12:13Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T18:12:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="arstep/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ayanban011/vit-base_tobacco_wr_0.01_wd_0.2
|
ayanban011
| 2023-07-12T18:09:56Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T15:22:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_wr_0.01_wd_0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_wr_0.01_wd_0.2
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9831
- Accuracy: 0.755
- Brier Loss: 0.3936
- Nll: 1.4572
- F1 Micro: 0.755
- F1 Macro: 0.7245
- Ece: 0.2101
- Aurc: 0.1059
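A minimal inference sketch (not from the original card; the document image path is a placeholder):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "ayanban011/vit-base_tobacco_wr_0.01_wd_0.2"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)
inputs = processor(images=Image.open("document.png").convert("RGB"), return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```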
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 0.7384 | 0.815 | 0.3056 | 1.1863 | 0.815 | 0.8021 | 0.2211 | 0.0789 |
| No log | 2.0 | 25 | 0.7504 | 0.795 | 0.3109 | 1.2594 | 0.795 | 0.7691 | 0.2235 | 0.0789 |
| No log | 2.96 | 37 | 0.7415 | 0.765 | 0.3109 | 1.0445 | 0.765 | 0.7422 | 0.2323 | 0.0762 |
| No log | 4.0 | 50 | 0.7222 | 0.805 | 0.2989 | 1.3200 | 0.805 | 0.7951 | 0.2028 | 0.0682 |
| No log | 4.96 | 62 | 0.7160 | 0.815 | 0.2989 | 1.2307 | 0.815 | 0.8067 | 0.2381 | 0.0688 |
| No log | 6.0 | 75 | 0.7149 | 0.825 | 0.2950 | 1.3097 | 0.825 | 0.8127 | 0.2110 | 0.0809 |
| No log | 6.96 | 87 | 0.7157 | 0.815 | 0.2986 | 1.2310 | 0.815 | 0.8007 | 0.2195 | 0.0779 |
| No log | 8.0 | 100 | 0.7245 | 0.81 | 0.3020 | 1.3030 | 0.81 | 0.7949 | 0.1997 | 0.0834 |
| No log | 8.96 | 112 | 0.7111 | 0.815 | 0.2972 | 1.2310 | 0.815 | 0.8009 | 0.2090 | 0.0793 |
| No log | 10.0 | 125 | 0.7344 | 0.81 | 0.3087 | 1.2974 | 0.81 | 0.7956 | 0.2240 | 0.0913 |
| No log | 10.96 | 137 | 0.7264 | 0.81 | 0.3057 | 1.3216 | 0.81 | 0.7907 | 0.2074 | 0.0879 |
| No log | 12.0 | 150 | 0.7301 | 0.805 | 0.3056 | 1.2448 | 0.805 | 0.7919 | 0.2112 | 0.0932 |
| No log | 12.96 | 162 | 0.7224 | 0.805 | 0.3020 | 1.2950 | 0.805 | 0.7899 | 0.1930 | 0.0915 |
| No log | 14.0 | 175 | 0.7235 | 0.81 | 0.3045 | 1.2965 | 0.81 | 0.7956 | 0.2025 | 0.0902 |
| No log | 14.96 | 187 | 0.7239 | 0.81 | 0.3039 | 1.2970 | 0.81 | 0.7956 | 0.2156 | 0.0906 |
| No log | 16.0 | 200 | 0.7293 | 0.81 | 0.3060 | 1.4336 | 0.81 | 0.7924 | 0.2029 | 0.0955 |
| No log | 16.96 | 212 | 0.7253 | 0.805 | 0.3044 | 1.2984 | 0.805 | 0.7853 | 0.1946 | 0.0911 |
| No log | 18.0 | 225 | 0.7293 | 0.805 | 0.3076 | 1.1852 | 0.805 | 0.7861 | 0.2015 | 0.0916 |
| No log | 18.96 | 237 | 0.7328 | 0.8 | 0.3050 | 1.3844 | 0.8000 | 0.7836 | 0.1900 | 0.0986 |
| No log | 20.0 | 250 | 0.7263 | 0.8 | 0.3058 | 1.2943 | 0.8000 | 0.7754 | 0.2003 | 0.0900 |
| No log | 20.96 | 262 | 0.7370 | 0.805 | 0.3102 | 1.3612 | 0.805 | 0.7818 | 0.1998 | 0.0939 |
| No log | 22.0 | 275 | 0.7412 | 0.795 | 0.3118 | 1.3673 | 0.795 | 0.7714 | 0.1952 | 0.0957 |
| No log | 22.96 | 287 | 0.7326 | 0.795 | 0.3077 | 1.3572 | 0.795 | 0.7666 | 0.1935 | 0.0925 |
| No log | 24.0 | 300 | 0.7308 | 0.805 | 0.3076 | 1.4265 | 0.805 | 0.7818 | 0.1968 | 0.0906 |
| No log | 24.96 | 312 | 0.7424 | 0.8 | 0.3122 | 1.3666 | 0.8000 | 0.7774 | 0.1853 | 0.0960 |
| No log | 26.0 | 325 | 0.7383 | 0.8 | 0.3097 | 1.3644 | 0.8000 | 0.7774 | 0.1911 | 0.0952 |
| No log | 26.96 | 337 | 0.7468 | 0.805 | 0.3143 | 1.4295 | 0.805 | 0.7829 | 0.1952 | 0.0971 |
| No log | 28.0 | 350 | 0.7484 | 0.795 | 0.3135 | 1.4327 | 0.795 | 0.7668 | 0.1906 | 0.0989 |
| No log | 28.96 | 362 | 0.7459 | 0.8 | 0.3135 | 1.3624 | 0.8000 | 0.7704 | 0.1945 | 0.0944 |
| No log | 30.0 | 375 | 0.7513 | 0.8 | 0.3160 | 1.4282 | 0.8000 | 0.7698 | 0.1991 | 0.0953 |
| No log | 30.96 | 387 | 0.7535 | 0.795 | 0.3168 | 1.4336 | 0.795 | 0.7656 | 0.1799 | 0.0980 |
| No log | 32.0 | 400 | 0.7540 | 0.8 | 0.3171 | 1.4469 | 0.8000 | 0.7774 | 0.1943 | 0.0959 |
| No log | 32.96 | 412 | 0.7566 | 0.8 | 0.3180 | 1.3772 | 0.8000 | 0.7704 | 0.1809 | 0.0972 |
| No log | 34.0 | 425 | 0.7641 | 0.795 | 0.3220 | 1.3878 | 0.795 | 0.7691 | 0.1937 | 0.1021 |
| No log | 34.96 | 437 | 0.7602 | 0.795 | 0.3192 | 1.3748 | 0.795 | 0.7656 | 0.1908 | 0.0973 |
| No log | 36.0 | 450 | 0.7577 | 0.8 | 0.3186 | 1.4397 | 0.8000 | 0.7704 | 0.1858 | 0.0951 |
| No log | 36.96 | 462 | 0.7724 | 0.795 | 0.3233 | 1.3800 | 0.795 | 0.7656 | 0.1796 | 0.1002 |
| No log | 38.0 | 475 | 0.7675 | 0.795 | 0.3223 | 1.3659 | 0.795 | 0.7654 | 0.1862 | 0.0976 |
| No log | 38.96 | 487 | 0.7772 | 0.79 | 0.3261 | 1.4406 | 0.79 | 0.7633 | 0.1877 | 0.1062 |
| 0.0949 | 40.0 | 500 | 0.7631 | 0.795 | 0.3211 | 1.4302 | 0.795 | 0.7654 | 0.1896 | 0.0953 |
| 0.0949 | 40.96 | 512 | 0.7878 | 0.79 | 0.3304 | 1.5689 | 0.79 | 0.7631 | 0.1770 | 0.1014 |
| 0.0949 | 42.0 | 525 | 0.7639 | 0.815 | 0.3198 | 1.5524 | 0.815 | 0.7879 | 0.1890 | 0.0938 |
| 0.0949 | 42.96 | 537 | 0.7967 | 0.785 | 0.3350 | 1.4516 | 0.785 | 0.7606 | 0.1809 | 0.1040 |
| 0.0949 | 44.0 | 550 | 0.7735 | 0.81 | 0.3237 | 1.4915 | 0.81 | 0.7784 | 0.2079 | 0.0963 |
| 0.0949 | 44.96 | 562 | 0.7859 | 0.79 | 0.3300 | 1.5076 | 0.79 | 0.7594 | 0.1851 | 0.0991 |
| 0.0949 | 46.0 | 575 | 0.7917 | 0.79 | 0.3323 | 1.5062 | 0.79 | 0.7631 | 0.1944 | 0.1015 |
| 0.0949 | 46.96 | 587 | 0.7879 | 0.795 | 0.3310 | 1.4304 | 0.795 | 0.7656 | 0.1761 | 0.0987 |
| 0.0949 | 48.0 | 600 | 0.7917 | 0.79 | 0.3320 | 1.4382 | 0.79 | 0.7629 | 0.1828 | 0.0994 |
| 0.0949 | 48.96 | 612 | 0.7950 | 0.79 | 0.3333 | 1.4929 | 0.79 | 0.7549 | 0.1811 | 0.0995 |
| 0.0949 | 50.0 | 625 | 0.8005 | 0.79 | 0.3363 | 1.4351 | 0.79 | 0.7631 | 0.1953 | 0.1009 |
| 0.0949 | 50.96 | 637 | 0.8019 | 0.79 | 0.3363 | 1.4891 | 0.79 | 0.7631 | 0.1837 | 0.0998 |
| 0.0949 | 52.0 | 650 | 0.8021 | 0.79 | 0.3354 | 1.4899 | 0.79 | 0.7586 | 0.1990 | 0.0994 |
| 0.0949 | 52.96 | 662 | 0.8012 | 0.795 | 0.3346 | 1.4913 | 0.795 | 0.7648 | 0.1878 | 0.0984 |
| 0.0949 | 54.0 | 675 | 0.8078 | 0.785 | 0.3388 | 1.4920 | 0.785 | 0.7559 | 0.1938 | 0.1000 |
| 0.0949 | 54.96 | 687 | 0.8092 | 0.785 | 0.3391 | 1.5003 | 0.785 | 0.7561 | 0.1878 | 0.1004 |
| 0.0949 | 56.0 | 700 | 0.8095 | 0.795 | 0.3378 | 1.4914 | 0.795 | 0.7634 | 0.1879 | 0.0994 |
| 0.0949 | 56.96 | 712 | 0.8124 | 0.785 | 0.3396 | 1.4926 | 0.785 | 0.7555 | 0.1996 | 0.1005 |
| 0.0949 | 58.0 | 725 | 0.8120 | 0.78 | 0.3391 | 1.4405 | 0.78 | 0.7469 | 0.1924 | 0.1005 |
| 0.0949 | 58.96 | 737 | 0.8185 | 0.785 | 0.3406 | 1.5007 | 0.785 | 0.7523 | 0.1910 | 0.1013 |
| 0.0949 | 60.0 | 750 | 0.8182 | 0.785 | 0.3421 | 1.4371 | 0.785 | 0.7555 | 0.1727 | 0.1010 |
| 0.0949 | 60.96 | 762 | 0.8224 | 0.78 | 0.3443 | 1.4404 | 0.78 | 0.7475 | 0.1941 | 0.1019 |
| 0.0949 | 62.0 | 775 | 0.8267 | 0.78 | 0.3463 | 1.4995 | 0.78 | 0.7535 | 0.1927 | 0.1016 |
| 0.0949 | 62.96 | 787 | 0.8252 | 0.775 | 0.3447 | 1.4965 | 0.775 | 0.7465 | 0.1798 | 0.1016 |
| 0.0949 | 64.0 | 800 | 0.8286 | 0.78 | 0.3446 | 1.4987 | 0.78 | 0.7475 | 0.1911 | 0.1014 |
| 0.0949 | 64.96 | 812 | 0.8308 | 0.78 | 0.3467 | 1.4438 | 0.78 | 0.7469 | 0.1866 | 0.1019 |
| 0.0949 | 66.0 | 825 | 0.8346 | 0.775 | 0.3476 | 1.4592 | 0.775 | 0.7444 | 0.1878 | 0.1034 |
| 0.0949 | 66.96 | 837 | 0.8348 | 0.78 | 0.3484 | 1.4423 | 0.78 | 0.7515 | 0.1821 | 0.1021 |
| 0.0949 | 68.0 | 850 | 0.8376 | 0.78 | 0.3481 | 1.4970 | 0.78 | 0.7469 | 0.1922 | 0.1015 |
| 0.0949 | 68.96 | 862 | 0.8379 | 0.78 | 0.3497 | 1.4388 | 0.78 | 0.7515 | 0.1850 | 0.1018 |
| 0.0949 | 70.0 | 875 | 0.8452 | 0.78 | 0.3512 | 1.4974 | 0.78 | 0.7525 | 0.1828 | 0.1036 |
| 0.0949 | 70.96 | 887 | 0.8437 | 0.78 | 0.3511 | 1.4519 | 0.78 | 0.7469 | 0.2076 | 0.1032 |
| 0.0949 | 72.0 | 900 | 0.8485 | 0.775 | 0.3535 | 1.5163 | 0.775 | 0.7444 | 0.1863 | 0.1033 |
| 0.0949 | 72.96 | 912 | 0.8516 | 0.77 | 0.3555 | 1.5015 | 0.7700 | 0.7404 | 0.1868 | 0.1030 |
| 0.0949 | 74.0 | 925 | 0.8507 | 0.77 | 0.3541 | 1.4407 | 0.7700 | 0.7404 | 0.1996 | 0.1026 |
| 0.0949 | 74.96 | 937 | 0.8508 | 0.77 | 0.3540 | 1.4424 | 0.7700 | 0.7400 | 0.1996 | 0.1025 |
| 0.0949 | 76.0 | 950 | 0.8559 | 0.77 | 0.3558 | 1.4487 | 0.7700 | 0.7400 | 0.2001 | 0.1031 |
| 0.0949 | 76.96 | 962 | 0.8564 | 0.77 | 0.3568 | 1.4389 | 0.7700 | 0.7400 | 0.1804 | 0.1024 |
| 0.0949 | 78.0 | 975 | 0.8611 | 0.775 | 0.3569 | 1.4966 | 0.775 | 0.7440 | 0.1831 | 0.1031 |
| 0.0949 | 78.96 | 987 | 0.8573 | 0.77 | 0.3557 | 1.4946 | 0.7700 | 0.7400 | 0.1933 | 0.1013 |
| 0.0366 | 80.0 | 1000 | 0.8640 | 0.775 | 0.3587 | 1.4998 | 0.775 | 0.7440 | 0.1871 | 0.1032 |
| 0.0366 | 80.96 | 1012 | 0.8640 | 0.77 | 0.3585 | 1.4983 | 0.7700 | 0.7400 | 0.1921 | 0.1032 |
| 0.0366 | 82.0 | 1025 | 0.8696 | 0.77 | 0.3608 | 1.5032 | 0.7700 | 0.7400 | 0.2033 | 0.1036 |
| 0.0366 | 82.96 | 1037 | 0.8702 | 0.77 | 0.3613 | 1.4987 | 0.7700 | 0.7400 | 0.2022 | 0.1029 |
| 0.0366 | 84.0 | 1050 | 0.8686 | 0.77 | 0.3597 | 1.4446 | 0.7700 | 0.7400 | 0.1887 | 0.1028 |
| 0.0366 | 84.96 | 1062 | 0.8700 | 0.77 | 0.3607 | 1.4365 | 0.7700 | 0.7400 | 0.1900 | 0.1025 |
| 0.0366 | 86.0 | 1075 | 0.8756 | 0.765 | 0.3621 | 1.5009 | 0.765 | 0.7308 | 0.1983 | 0.1040 |
| 0.0366 | 86.96 | 1087 | 0.8768 | 0.76 | 0.3623 | 1.5035 | 0.76 | 0.7282 | 0.1956 | 0.1040 |
| 0.0366 | 88.0 | 1100 | 0.8762 | 0.765 | 0.3618 | 1.4409 | 0.765 | 0.7308 | 0.1957 | 0.1033 |
| 0.0366 | 88.96 | 1112 | 0.8777 | 0.765 | 0.3629 | 1.4427 | 0.765 | 0.7308 | 0.1974 | 0.1036 |
| 0.0366 | 90.0 | 1125 | 0.8854 | 0.76 | 0.3661 | 1.4681 | 0.76 | 0.7282 | 0.2023 | 0.1049 |
| 0.0366 | 90.96 | 1137 | 0.8867 | 0.76 | 0.3660 | 1.5057 | 0.76 | 0.7282 | 0.1995 | 0.1049 |
| 0.0366 | 92.0 | 1150 | 0.8849 | 0.765 | 0.3648 | 1.5011 | 0.765 | 0.7308 | 0.1953 | 0.1039 |
| 0.0366 | 92.96 | 1162 | 0.8898 | 0.76 | 0.3670 | 1.5077 | 0.76 | 0.7282 | 0.2083 | 0.1045 |
| 0.0366 | 94.0 | 1175 | 0.8891 | 0.765 | 0.3662 | 1.4520 | 0.765 | 0.7308 | 0.2091 | 0.1040 |
| 0.0366 | 94.96 | 1187 | 0.8910 | 0.755 | 0.3679 | 1.4460 | 0.755 | 0.7247 | 0.2039 | 0.1043 |
| 0.0366 | 96.0 | 1200 | 0.8935 | 0.76 | 0.3684 | 1.4435 | 0.76 | 0.7273 | 0.1929 | 0.1041 |
| 0.0366 | 96.96 | 1212 | 0.8964 | 0.755 | 0.3689 | 1.4526 | 0.755 | 0.7247 | 0.1980 | 0.1048 |
| 0.0366 | 98.0 | 1225 | 0.8979 | 0.755 | 0.3701 | 1.4507 | 0.755 | 0.7247 | 0.2017 | 0.1045 |
| 0.0366 | 98.96 | 1237 | 0.8965 | 0.755 | 0.3692 | 1.4474 | 0.755 | 0.7247 | 0.2057 | 0.1044 |
| 0.0366 | 100.0 | 1250 | 0.9019 | 0.755 | 0.3716 | 1.4526 | 0.755 | 0.7247 | 0.2051 | 0.1047 |
| 0.0366 | 100.96 | 1262 | 0.8994 | 0.755 | 0.3694 | 1.4485 | 0.755 | 0.7247 | 0.1979 | 0.1041 |
| 0.0366 | 102.0 | 1275 | 0.9023 | 0.755 | 0.3715 | 1.4465 | 0.755 | 0.7247 | 0.2140 | 0.1044 |
| 0.0366 | 102.96 | 1287 | 0.9048 | 0.755 | 0.3720 | 1.4472 | 0.755 | 0.7247 | 0.2066 | 0.1045 |
| 0.0366 | 104.0 | 1300 | 0.9060 | 0.755 | 0.3719 | 1.4565 | 0.755 | 0.7247 | 0.2003 | 0.1044 |
| 0.0366 | 104.96 | 1312 | 0.9105 | 0.755 | 0.3735 | 1.4625 | 0.755 | 0.7247 | 0.2094 | 0.1053 |
| 0.0366 | 106.0 | 1325 | 0.9099 | 0.76 | 0.3738 | 1.4463 | 0.76 | 0.7273 | 0.2050 | 0.1044 |
| 0.0366 | 106.96 | 1337 | 0.9111 | 0.755 | 0.3751 | 1.4486 | 0.755 | 0.7247 | 0.2173 | 0.1041 |
| 0.0366 | 108.0 | 1350 | 0.9149 | 0.755 | 0.3745 | 1.5081 | 0.755 | 0.7247 | 0.2062 | 0.1052 |
| 0.0366 | 108.96 | 1362 | 0.9146 | 0.755 | 0.3744 | 1.4513 | 0.755 | 0.7247 | 0.2073 | 0.1047 |
| 0.0366 | 110.0 | 1375 | 0.9157 | 0.755 | 0.3764 | 1.4486 | 0.755 | 0.7247 | 0.2094 | 0.1046 |
| 0.0366 | 110.96 | 1387 | 0.9201 | 0.755 | 0.3767 | 1.4529 | 0.755 | 0.7247 | 0.2093 | 0.1051 |
| 0.0366 | 112.0 | 1400 | 0.9190 | 0.755 | 0.3759 | 1.4533 | 0.755 | 0.7247 | 0.2020 | 0.1049 |
| 0.0366 | 112.96 | 1412 | 0.9230 | 0.755 | 0.3782 | 1.4529 | 0.755 | 0.7247 | 0.2054 | 0.1050 |
| 0.0366 | 114.0 | 1425 | 0.9234 | 0.755 | 0.3778 | 1.4505 | 0.755 | 0.7247 | 0.2009 | 0.1049 |
| 0.0366 | 114.96 | 1437 | 0.9238 | 0.755 | 0.3780 | 1.4469 | 0.755 | 0.7247 | 0.1974 | 0.1051 |
| 0.0366 | 116.0 | 1450 | 0.9264 | 0.755 | 0.3786 | 1.4538 | 0.755 | 0.7247 | 0.2016 | 0.1051 |
| 0.0366 | 116.96 | 1462 | 0.9275 | 0.755 | 0.3787 | 1.4553 | 0.755 | 0.7247 | 0.2012 | 0.1054 |
| 0.0366 | 118.0 | 1475 | 0.9306 | 0.755 | 0.3798 | 1.4595 | 0.755 | 0.7247 | 0.2139 | 0.1057 |
| 0.0366 | 118.96 | 1487 | 0.9291 | 0.76 | 0.3789 | 1.4529 | 0.76 | 0.7273 | 0.2014 | 0.1054 |
| 0.0214 | 120.0 | 1500 | 0.9318 | 0.755 | 0.3804 | 1.4520 | 0.755 | 0.7247 | 0.2017 | 0.1050 |
| 0.0214 | 120.96 | 1512 | 0.9325 | 0.755 | 0.3796 | 1.4535 | 0.755 | 0.7247 | 0.2087 | 0.1053 |
| 0.0214 | 122.0 | 1525 | 0.9339 | 0.755 | 0.3804 | 1.4534 | 0.755 | 0.7247 | 0.2068 | 0.1056 |
| 0.0214 | 122.96 | 1537 | 0.9342 | 0.755 | 0.3807 | 1.4519 | 0.755 | 0.7247 | 0.1986 | 0.1052 |
| 0.0214 | 124.0 | 1550 | 0.9357 | 0.755 | 0.3808 | 1.4524 | 0.755 | 0.7247 | 0.2058 | 0.1054 |
| 0.0214 | 124.96 | 1562 | 0.9360 | 0.755 | 0.3808 | 1.4514 | 0.755 | 0.7247 | 0.2061 | 0.1052 |
| 0.0214 | 126.0 | 1575 | 0.9409 | 0.755 | 0.3828 | 1.4557 | 0.755 | 0.7247 | 0.2044 | 0.1058 |
| 0.0214 | 126.96 | 1587 | 0.9390 | 0.755 | 0.3819 | 1.4523 | 0.755 | 0.7247 | 0.2062 | 0.1052 |
| 0.0214 | 128.0 | 1600 | 0.9425 | 0.755 | 0.3833 | 1.4559 | 0.755 | 0.7247 | 0.2049 | 0.1058 |
| 0.0214 | 128.96 | 1612 | 0.9421 | 0.755 | 0.3824 | 1.4534 | 0.755 | 0.7247 | 0.1978 | 0.1055 |
| 0.0214 | 130.0 | 1625 | 0.9433 | 0.755 | 0.3831 | 1.4530 | 0.755 | 0.7247 | 0.2069 | 0.1056 |
| 0.0214 | 130.96 | 1637 | 0.9463 | 0.755 | 0.3842 | 1.4535 | 0.755 | 0.7247 | 0.2127 | 0.1058 |
| 0.0214 | 132.0 | 1650 | 0.9462 | 0.755 | 0.3835 | 1.4546 | 0.755 | 0.7247 | 0.1975 | 0.1057 |
| 0.0214 | 132.96 | 1662 | 0.9479 | 0.755 | 0.3842 | 1.4562 | 0.755 | 0.7247 | 0.2012 | 0.1059 |
| 0.0214 | 134.0 | 1675 | 0.9493 | 0.755 | 0.3852 | 1.4547 | 0.755 | 0.7247 | 0.2161 | 0.1057 |
| 0.0214 | 134.96 | 1687 | 0.9484 | 0.755 | 0.3842 | 1.4563 | 0.755 | 0.7247 | 0.2046 | 0.1055 |
| 0.0214 | 136.0 | 1700 | 0.9500 | 0.755 | 0.3849 | 1.4528 | 0.755 | 0.7247 | 0.2081 | 0.1058 |
| 0.0214 | 136.96 | 1712 | 0.9510 | 0.755 | 0.3854 | 1.4510 | 0.755 | 0.7247 | 0.1976 | 0.1053 |
| 0.0214 | 138.0 | 1725 | 0.9519 | 0.755 | 0.3855 | 1.4516 | 0.755 | 0.7247 | 0.2090 | 0.1056 |
| 0.0214 | 138.96 | 1737 | 0.9535 | 0.755 | 0.3857 | 1.4537 | 0.755 | 0.7247 | 0.2091 | 0.1057 |
| 0.0214 | 140.0 | 1750 | 0.9546 | 0.755 | 0.3862 | 1.4557 | 0.755 | 0.7247 | 0.2020 | 0.1057 |
| 0.0214 | 140.96 | 1762 | 0.9558 | 0.755 | 0.3867 | 1.4530 | 0.755 | 0.7247 | 0.2018 | 0.1059 |
| 0.0214 | 142.0 | 1775 | 0.9567 | 0.755 | 0.3870 | 1.4522 | 0.755 | 0.7247 | 0.2059 | 0.1055 |
| 0.0214 | 142.96 | 1787 | 0.9589 | 0.755 | 0.3876 | 1.4555 | 0.755 | 0.7247 | 0.2063 | 0.1060 |
| 0.0214 | 144.0 | 1800 | 0.9584 | 0.755 | 0.3872 | 1.4568 | 0.755 | 0.7245 | 0.2162 | 0.1056 |
| 0.0214 | 144.96 | 1812 | 0.9613 | 0.755 | 0.3881 | 1.4594 | 0.755 | 0.7247 | 0.2085 | 0.1063 |
| 0.0214 | 146.0 | 1825 | 0.9604 | 0.755 | 0.3873 | 1.5132 | 0.755 | 0.7247 | 0.2047 | 0.1056 |
| 0.0214 | 146.96 | 1837 | 0.9627 | 0.755 | 0.3887 | 1.4573 | 0.755 | 0.7247 | 0.2107 | 0.1059 |
| 0.0214 | 148.0 | 1850 | 0.9643 | 0.755 | 0.3890 | 1.4570 | 0.755 | 0.7247 | 0.2041 | 0.1065 |
| 0.0214 | 148.96 | 1862 | 0.9633 | 0.755 | 0.3886 | 1.4526 | 0.755 | 0.7247 | 0.2085 | 0.1055 |
| 0.0214 | 150.0 | 1875 | 0.9637 | 0.755 | 0.3887 | 1.4551 | 0.755 | 0.7245 | 0.2096 | 0.1054 |
| 0.0214 | 150.96 | 1887 | 0.9645 | 0.755 | 0.3886 | 1.4534 | 0.755 | 0.7247 | 0.2100 | 0.1058 |
| 0.0214 | 152.0 | 1900 | 0.9661 | 0.755 | 0.3891 | 1.4550 | 0.755 | 0.7247 | 0.2072 | 0.1060 |
| 0.0214 | 152.96 | 1912 | 0.9665 | 0.755 | 0.3894 | 1.4543 | 0.755 | 0.7247 | 0.2092 | 0.1057 |
| 0.0214 | 154.0 | 1925 | 0.9667 | 0.755 | 0.3894 | 1.4570 | 0.755 | 0.7245 | 0.2019 | 0.1056 |
| 0.0214 | 154.96 | 1937 | 0.9681 | 0.755 | 0.3900 | 1.4537 | 0.755 | 0.7245 | 0.2098 | 0.1059 |
| 0.0214 | 156.0 | 1950 | 0.9689 | 0.755 | 0.3903 | 1.4533 | 0.755 | 0.7245 | 0.2022 | 0.1056 |
| 0.0214 | 156.96 | 1962 | 0.9689 | 0.755 | 0.3898 | 1.4553 | 0.755 | 0.7247 | 0.2074 | 0.1058 |
| 0.0214 | 158.0 | 1975 | 0.9705 | 0.755 | 0.3901 | 1.4566 | 0.755 | 0.7247 | 0.2072 | 0.1062 |
| 0.0214 | 158.96 | 1987 | 0.9721 | 0.755 | 0.3911 | 1.4570 | 0.755 | 0.7245 | 0.2055 | 0.1062 |
| 0.0155 | 160.0 | 2000 | 0.9712 | 0.755 | 0.3904 | 1.4551 | 0.755 | 0.7245 | 0.2070 | 0.1060 |
| 0.0155 | 160.96 | 2012 | 0.9721 | 0.755 | 0.3906 | 1.4563 | 0.755 | 0.7245 | 0.2108 | 0.1059 |
| 0.0155 | 162.0 | 2025 | 0.9734 | 0.755 | 0.3914 | 1.4550 | 0.755 | 0.7245 | 0.2071 | 0.1057 |
| 0.0155 | 162.96 | 2037 | 0.9740 | 0.755 | 0.3913 | 1.4565 | 0.755 | 0.7245 | 0.2073 | 0.1062 |
| 0.0155 | 164.0 | 2050 | 0.9744 | 0.755 | 0.3915 | 1.4563 | 0.755 | 0.7245 | 0.2101 | 0.1059 |
| 0.0155 | 164.96 | 2062 | 0.9752 | 0.755 | 0.3917 | 1.4558 | 0.755 | 0.7245 | 0.2102 | 0.1059 |
| 0.0155 | 166.0 | 2075 | 0.9755 | 0.755 | 0.3916 | 1.4566 | 0.755 | 0.7245 | 0.2102 | 0.1062 |
| 0.0155 | 166.96 | 2087 | 0.9755 | 0.755 | 0.3917 | 1.4542 | 0.755 | 0.7245 | 0.2100 | 0.1057 |
| 0.0155 | 168.0 | 2100 | 0.9768 | 0.755 | 0.3921 | 1.4578 | 0.755 | 0.7245 | 0.2103 | 0.1059 |
| 0.0155 | 168.96 | 2112 | 0.9767 | 0.755 | 0.3919 | 1.4575 | 0.755 | 0.7245 | 0.2106 | 0.1057 |
| 0.0155 | 170.0 | 2125 | 0.9776 | 0.755 | 0.3922 | 1.4567 | 0.755 | 0.7245 | 0.2101 | 0.1058 |
| 0.0155 | 170.96 | 2137 | 0.9783 | 0.755 | 0.3925 | 1.4560 | 0.755 | 0.7245 | 0.2080 | 0.1062 |
| 0.0155 | 172.0 | 2150 | 0.9793 | 0.755 | 0.3927 | 1.4578 | 0.755 | 0.7245 | 0.2102 | 0.1064 |
| 0.0155 | 172.96 | 2162 | 0.9783 | 0.755 | 0.3923 | 1.4557 | 0.755 | 0.7245 | 0.2098 | 0.1058 |
| 0.0155 | 174.0 | 2175 | 0.9790 | 0.755 | 0.3927 | 1.4550 | 0.755 | 0.7245 | 0.2087 | 0.1057 |
| 0.0155 | 174.96 | 2187 | 0.9801 | 0.755 | 0.3929 | 1.4577 | 0.755 | 0.7245 | 0.2100 | 0.1059 |
| 0.0155 | 176.0 | 2200 | 0.9800 | 0.755 | 0.3928 | 1.4566 | 0.755 | 0.7245 | 0.2081 | 0.1058 |
| 0.0155 | 176.96 | 2212 | 0.9809 | 0.755 | 0.3932 | 1.4568 | 0.755 | 0.7245 | 0.2101 | 0.1058 |
| 0.0155 | 178.0 | 2225 | 0.9808 | 0.755 | 0.3930 | 1.4571 | 0.755 | 0.7245 | 0.2101 | 0.1058 |
| 0.0155 | 178.96 | 2237 | 0.9808 | 0.755 | 0.3930 | 1.4566 | 0.755 | 0.7245 | 0.2082 | 0.1057 |
| 0.0155 | 180.0 | 2250 | 0.9813 | 0.755 | 0.3931 | 1.4567 | 0.755 | 0.7245 | 0.2082 | 0.1058 |
| 0.0155 | 180.96 | 2262 | 0.9817 | 0.755 | 0.3933 | 1.4571 | 0.755 | 0.7245 | 0.2032 | 0.1059 |
| 0.0155 | 182.0 | 2275 | 0.9819 | 0.755 | 0.3933 | 1.4570 | 0.755 | 0.7245 | 0.2103 | 0.1058 |
| 0.0155 | 182.96 | 2287 | 0.9822 | 0.755 | 0.3934 | 1.4567 | 0.755 | 0.7245 | 0.2101 | 0.1058 |
| 0.0155 | 184.0 | 2300 | 0.9824 | 0.755 | 0.3934 | 1.4572 | 0.755 | 0.7245 | 0.2101 | 0.1059 |
| 0.0155 | 184.96 | 2312 | 0.9825 | 0.755 | 0.3935 | 1.4569 | 0.755 | 0.7245 | 0.2081 | 0.1060 |
| 0.0155 | 186.0 | 2325 | 0.9825 | 0.755 | 0.3934 | 1.4574 | 0.755 | 0.7245 | 0.2101 | 0.1059 |
| 0.0155 | 186.96 | 2337 | 0.9829 | 0.755 | 0.3935 | 1.4571 | 0.755 | 0.7245 | 0.2101 | 0.1059 |
| 0.0155 | 188.0 | 2350 | 0.9830 | 0.755 | 0.3936 | 1.4569 | 0.755 | 0.7245 | 0.2081 | 0.1060 |
| 0.0155 | 188.96 | 2362 | 0.9831 | 0.755 | 0.3936 | 1.4572 | 0.755 | 0.7245 | 0.2082 | 0.1059 |
| 0.0155 | 190.0 | 2375 | 0.9830 | 0.755 | 0.3935 | 1.4575 | 0.755 | 0.7245 | 0.2101 | 0.1059 |
| 0.0155 | 190.96 | 2387 | 0.9831 | 0.755 | 0.3936 | 1.4572 | 0.755 | 0.7245 | 0.2101 | 0.1059 |
| 0.0155 | 192.0 | 2400 | 0.9831 | 0.755 | 0.3936 | 1.4572 | 0.755 | 0.7245 | 0.2101 | 0.1059 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BlueAvenir/model_growth_restructuring_V_0_3
|
BlueAvenir
| 2023-07-12T18:03:50Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-12T18:03:20Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# BlueAvenir/model_growth_restructuring_V_0_3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('BlueAvenir/model_growth_restructuring_V_0_3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BlueAvenir/model_growth_restructuring_V_0_3')
model = AutoModel.from_pretrained('BlueAvenir/model_growth_restructuring_V_0_3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=BlueAvenir/model_growth_restructuring_V_0_3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 153 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 153,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
VK246/IC_ver5a_coco_swin_gpt2_05pc_1e
|
VK246
| 2023-07-12T18:03:19Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-12T17:53:07Z |
---
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
- bleu
model-index:
- name: IC_ver5a_coco_swin_gpt2_05pc_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver5a_coco_swin_gpt2_05pc_1e
This model is a vision encoder-decoder model (a Swin Transformer encoder with a GPT-2 decoder, per the model name) trained on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2245
- Rouge1: 22.0956
- Rouge2: 2.7707
- Rougel: 20.6806
- Rougelsum: 20.6915
- Bleu: 1.2101
- Gen Len: 11.1124
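A minimal captioning sketch, assuming the repo ships the image processor and tokenizer configs alongside the weights (`example.jpg` is a placeholder path):
```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

ckpt = "VK246/IC_ver5a_coco_swin_gpt2_05pc_1e"
model = VisionEncoderDecoderModel.from_pretrained(ckpt)
processor = AutoImageProcessor.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

# Preprocess an image and generate a caption with beam search
image = Image.open("example.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```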
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:------:|:-------:|
| 2.1625 | 0.42 | 25 | 1.2744 | 22.7707 | 2.4514 | 20.6431 | 20.6829 | 0.0 | 11.1124 |
| 1.3438 | 0.85 | 50 | 1.2374 | 21.3271 | 2.4116 | 19.7122 | 19.7385 | 0.7474 | 11.1124 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-mod-datasets1-iorder-rarity-all-5p5k
|
NasimB
| 2023-07-12T18:03:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T16:28:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-mod-datasets1-iorder-rarity-all-5p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-mod-datasets1-iorder-rarity-all-5p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3832
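A minimal generation sketch (the sampling settings below are illustrative, not tuned):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-mod-datasets1-iorder-rarity-all-5p5k")
# Sample a short continuation from a story-style prompt
print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.9)[0]["generated_text"])
```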
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7791 | 0.32 | 500 | 5.6460 |
| 5.4436 | 0.63 | 1000 | 5.2273 |
| 5.0944 | 0.95 | 1500 | 4.9700 |
| 4.809 | 1.26 | 2000 | 4.8294 |
| 4.6757 | 1.58 | 2500 | 4.7121 |
| 4.5526 | 1.9 | 3000 | 4.6044 |
| 4.3523 | 2.21 | 3500 | 4.5477 |
| 4.2579 | 2.53 | 4000 | 4.4848 |
| 4.2187 | 2.84 | 4500 | 4.4262 |
| 4.0519 | 3.16 | 5000 | 4.4090 |
| 3.9359 | 3.47 | 5500 | 4.3808 |
| 3.9128 | 3.79 | 6000 | 4.3460 |
| 3.8052 | 4.11 | 6500 | 4.3424 |
| 3.6404 | 4.42 | 7000 | 4.3367 |
| 3.6307 | 4.74 | 7500 | 4.3246 |
| 3.5885 | 5.05 | 8000 | 4.3236 |
| 3.4332 | 5.37 | 8500 | 4.3296 |
| 3.4351 | 5.69 | 9000 | 4.3281 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
tyavika/lr1e5-layer1-bs16-Distil-CNN128LSTM128NoBi
|
tyavika
| 2023-07-12T17:59:27Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T15:42:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: lr1e5-layer1-bs16-Distil-CNN128LSTM128NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr1e5-layer1-bs16-Distil-CNN128LSTM128NoBi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3813
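If the checkpoint loads as a standard DistilBERT question-answering head (the CNN/LSTM suffix in the name suggests a custom head that may require the author's own code), a usage sketch would look like:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="tyavika/lr1e5-layer1-bs16-Distil-CNN128LSTM128NoBi")
# Extract an answer span from a context passage
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is in Paris."))
```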
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5317 | 1.0 | 3290 | 1.3385 |
| 1.0853 | 2.0 | 6580 | 1.1885 |
| 0.7993 | 3.0 | 9870 | 1.2330 |
| 0.5808 | 4.0 | 13160 | 1.3813 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rmcl7/sd-class-butterflies-32
|
rmcl7
| 2023-07-12T17:54:58Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-12T17:53:25Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('rmcl7/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
DipanAI/falcon_law_7Bat
|
DipanAI
| 2023-07-12T17:53:31Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"region:us"
] | null | 2023-07-12T17:49:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
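A minimal loading sketch, assuming the adapter was trained on top of a Falcon-7B base (inferred from the repo name; the exact base checkpoint is an assumption) and mirroring the quantization config above:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization, matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "DipanAI/falcon_law_7Bat")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```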
### Framework versions
- PEFT 0.4.0.dev0
|
ayanban011/vit-base_tobacco_lr5e-6_e_200
|
ayanban011
| 2023-07-12T17:42:48Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T15:04:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_lr5e-6_e_200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_lr5e-6_e_200
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7893
- Accuracy: 0.78
- Brier Loss: 0.3364
- Nll: 1.4430
- F1 Micro: 0.78
- F1 Macro: 0.7488
- Ece: 0.1922
- Aurc: 0.1018
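A minimal inference sketch, assuming the checkpoint includes the image processor config (`document.png` is a placeholder path):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

ckpt = "ayanban011/vit-base_tobacco_lr5e-6_e_200"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForImageClassification.from_pretrained(ckpt)

# Classify a document image and print the predicted label
image = Image.open("document.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```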
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 0.7450 | 0.815 | 0.3079 | 1.1882 | 0.815 | 0.7942 | 0.2383 | 0.0734 |
| No log | 2.0 | 25 | 0.7440 | 0.815 | 0.3074 | 1.1877 | 0.815 | 0.7922 | 0.2303 | 0.0734 |
| No log | 2.96 | 37 | 0.7429 | 0.81 | 0.3071 | 1.1883 | 0.81 | 0.7914 | 0.2367 | 0.0714 |
| No log | 4.0 | 50 | 0.7463 | 0.815 | 0.3083 | 1.1912 | 0.815 | 0.7942 | 0.2334 | 0.0768 |
| No log | 4.96 | 62 | 0.7453 | 0.815 | 0.3080 | 1.1927 | 0.815 | 0.7922 | 0.2224 | 0.0751 |
| No log | 6.0 | 75 | 0.7416 | 0.81 | 0.3067 | 1.1870 | 0.81 | 0.7914 | 0.2232 | 0.0716 |
| No log | 6.96 | 87 | 0.7420 | 0.81 | 0.3070 | 1.1858 | 0.81 | 0.7914 | 0.2309 | 0.0730 |
| No log | 8.0 | 100 | 0.7409 | 0.81 | 0.3062 | 1.1879 | 0.81 | 0.7871 | 0.2186 | 0.0749 |
| No log | 8.96 | 112 | 0.7444 | 0.815 | 0.3079 | 1.1984 | 0.815 | 0.8059 | 0.2342 | 0.0766 |
| No log | 10.0 | 125 | 0.7445 | 0.81 | 0.3079 | 1.1918 | 0.81 | 0.7894 | 0.2353 | 0.0775 |
| No log | 10.96 | 137 | 0.7451 | 0.81 | 0.3085 | 1.1888 | 0.81 | 0.7871 | 0.2319 | 0.0784 |
| No log | 12.0 | 150 | 0.7411 | 0.805 | 0.3060 | 1.1914 | 0.805 | 0.7829 | 0.2186 | 0.0779 |
| No log | 12.96 | 162 | 0.7406 | 0.815 | 0.3075 | 1.1967 | 0.815 | 0.8057 | 0.2472 | 0.0746 |
| No log | 14.0 | 175 | 0.7360 | 0.805 | 0.3048 | 1.2494 | 0.805 | 0.7841 | 0.2383 | 0.0770 |
| No log | 14.96 | 187 | 0.7331 | 0.81 | 0.3037 | 1.1896 | 0.81 | 0.7912 | 0.2306 | 0.0721 |
| No log | 16.0 | 200 | 0.7345 | 0.81 | 0.3042 | 1.2488 | 0.81 | 0.7956 | 0.2064 | 0.0794 |
| No log | 16.96 | 212 | 0.7329 | 0.815 | 0.3018 | 1.2595 | 0.815 | 0.8043 | 0.2331 | 0.0797 |
| No log | 18.0 | 225 | 0.7354 | 0.795 | 0.3055 | 1.1743 | 0.795 | 0.7765 | 0.2154 | 0.0742 |
| No log | 18.96 | 237 | 0.7282 | 0.805 | 0.3001 | 1.1920 | 0.805 | 0.7844 | 0.2444 | 0.0748 |
| No log | 20.0 | 250 | 0.7225 | 0.805 | 0.2981 | 1.1804 | 0.805 | 0.7874 | 0.2041 | 0.0744 |
| No log | 20.96 | 262 | 0.7250 | 0.81 | 0.3000 | 1.2516 | 0.81 | 0.7900 | 0.2148 | 0.0761 |
| No log | 22.0 | 275 | 0.7252 | 0.81 | 0.3009 | 1.3033 | 0.81 | 0.7954 | 0.2375 | 0.0782 |
| No log | 22.96 | 287 | 0.7293 | 0.8 | 0.3016 | 1.2557 | 0.8000 | 0.7796 | 0.2243 | 0.0810 |
| No log | 24.0 | 300 | 0.7344 | 0.805 | 0.3057 | 1.3122 | 0.805 | 0.7853 | 0.2096 | 0.0815 |
| No log | 24.96 | 312 | 0.7314 | 0.81 | 0.3044 | 1.2501 | 0.81 | 0.7909 | 0.2068 | 0.0808 |
| No log | 26.0 | 325 | 0.7293 | 0.81 | 0.3036 | 1.3066 | 0.81 | 0.7909 | 0.2015 | 0.0804 |
| No log | 26.96 | 337 | 0.7323 | 0.805 | 0.3042 | 1.3147 | 0.805 | 0.7853 | 0.2242 | 0.0827 |
| No log | 28.0 | 350 | 0.7288 | 0.805 | 0.3029 | 1.3109 | 0.805 | 0.7853 | 0.1976 | 0.0826 |
| No log | 28.96 | 362 | 0.7343 | 0.805 | 0.3058 | 1.3108 | 0.805 | 0.7853 | 0.2128 | 0.0851 |
| No log | 30.0 | 375 | 0.7351 | 0.8 | 0.3062 | 1.3129 | 0.8000 | 0.7733 | 0.2088 | 0.0845 |
| No log | 30.96 | 387 | 0.7301 | 0.8 | 0.3044 | 1.2452 | 0.8000 | 0.7733 | 0.2272 | 0.0836 |
| No log | 32.0 | 400 | 0.7340 | 0.8 | 0.3055 | 1.3769 | 0.8000 | 0.7806 | 0.2207 | 0.0855 |
| No log | 32.96 | 412 | 0.7322 | 0.805 | 0.3052 | 1.3132 | 0.805 | 0.7784 | 0.2470 | 0.0865 |
| No log | 34.0 | 425 | 0.7301 | 0.8 | 0.3045 | 1.2543 | 0.8000 | 0.7733 | 0.2021 | 0.0863 |
| No log | 34.96 | 437 | 0.7297 | 0.8 | 0.3046 | 1.3056 | 0.8000 | 0.7731 | 0.1886 | 0.0836 |
| No log | 36.0 | 450 | 0.7326 | 0.805 | 0.3056 | 1.3102 | 0.805 | 0.7784 | 0.2256 | 0.0892 |
| No log | 36.96 | 462 | 0.7311 | 0.8 | 0.3046 | 1.3681 | 0.8000 | 0.7733 | 0.2065 | 0.0880 |
| No log | 38.0 | 475 | 0.7356 | 0.8 | 0.3077 | 1.3666 | 0.8000 | 0.7733 | 0.2034 | 0.0895 |
| No log | 38.96 | 487 | 0.7302 | 0.8 | 0.3042 | 1.3114 | 0.8000 | 0.7733 | 0.1996 | 0.0894 |
| 0.1528 | 40.0 | 500 | 0.7289 | 0.805 | 0.3035 | 1.3669 | 0.805 | 0.7797 | 0.1951 | 0.0874 |
| 0.1528 | 40.96 | 512 | 0.7292 | 0.8 | 0.3037 | 1.3685 | 0.8000 | 0.7737 | 0.2103 | 0.0878 |
| 0.1528 | 42.0 | 525 | 0.7347 | 0.805 | 0.3077 | 1.3640 | 0.805 | 0.7784 | 0.2005 | 0.0894 |
| 0.1528 | 42.96 | 537 | 0.7343 | 0.805 | 0.3063 | 1.3684 | 0.805 | 0.7784 | 0.2055 | 0.0899 |
| 0.1528 | 44.0 | 550 | 0.7283 | 0.805 | 0.3038 | 1.3660 | 0.805 | 0.7797 | 0.2142 | 0.0881 |
| 0.1528 | 44.96 | 562 | 0.7325 | 0.805 | 0.3061 | 1.3658 | 0.805 | 0.7784 | 0.1967 | 0.0896 |
| 0.1528 | 46.0 | 575 | 0.7303 | 0.81 | 0.3049 | 1.3659 | 0.81 | 0.7844 | 0.1985 | 0.0883 |
| 0.1528 | 46.96 | 587 | 0.7321 | 0.805 | 0.3054 | 1.3657 | 0.805 | 0.7797 | 0.1925 | 0.0888 |
| 0.1528 | 48.0 | 600 | 0.7322 | 0.8 | 0.3054 | 1.3680 | 0.8000 | 0.7704 | 0.1837 | 0.0903 |
| 0.1528 | 48.96 | 612 | 0.7368 | 0.8 | 0.3079 | 1.3234 | 0.8000 | 0.7704 | 0.2111 | 0.0905 |
| 0.1528 | 50.0 | 625 | 0.7358 | 0.8 | 0.3074 | 1.3682 | 0.8000 | 0.7704 | 0.1971 | 0.0899 |
| 0.1528 | 50.96 | 637 | 0.7318 | 0.8 | 0.3054 | 1.3661 | 0.8000 | 0.7704 | 0.2111 | 0.0889 |
| 0.1528 | 52.0 | 650 | 0.7473 | 0.795 | 0.3133 | 1.3707 | 0.795 | 0.7660 | 0.2032 | 0.0954 |
| 0.1528 | 52.96 | 662 | 0.7299 | 0.805 | 0.3050 | 1.3631 | 0.805 | 0.7797 | 0.1977 | 0.0891 |
| 0.1528 | 54.0 | 675 | 0.7427 | 0.795 | 0.3116 | 1.3673 | 0.795 | 0.7644 | 0.2181 | 0.0923 |
| 0.1528 | 54.96 | 687 | 0.7371 | 0.8 | 0.3087 | 1.3648 | 0.8000 | 0.7704 | 0.1984 | 0.0902 |
| 0.1528 | 56.0 | 700 | 0.7399 | 0.8 | 0.3100 | 1.3671 | 0.8000 | 0.7704 | 0.2178 | 0.0920 |
| 0.1528 | 56.96 | 712 | 0.7420 | 0.8 | 0.3114 | 1.3671 | 0.8000 | 0.7689 | 0.2035 | 0.0925 |
| 0.1528 | 58.0 | 725 | 0.7380 | 0.8 | 0.3088 | 1.3646 | 0.8000 | 0.7704 | 0.2016 | 0.0916 |
| 0.1528 | 58.96 | 737 | 0.7390 | 0.8 | 0.3091 | 1.3671 | 0.8000 | 0.7704 | 0.2005 | 0.0921 |
| 0.1528 | 60.0 | 750 | 0.7370 | 0.8 | 0.3085 | 1.3651 | 0.8000 | 0.7704 | 0.1891 | 0.0908 |
| 0.1528 | 60.96 | 762 | 0.7388 | 0.8 | 0.3097 | 1.3622 | 0.8000 | 0.7704 | 0.2010 | 0.0906 |
| 0.1528 | 62.0 | 775 | 0.7403 | 0.795 | 0.3102 | 1.3691 | 0.795 | 0.7654 | 0.2142 | 0.0910 |
| 0.1528 | 62.96 | 787 | 0.7390 | 0.8 | 0.3088 | 1.3950 | 0.8000 | 0.7684 | 0.2058 | 0.0910 |
| 0.1528 | 64.0 | 800 | 0.7431 | 0.795 | 0.3110 | 1.3722 | 0.795 | 0.7656 | 0.1959 | 0.0914 |
| 0.1528 | 64.96 | 812 | 0.7429 | 0.8 | 0.3117 | 1.3673 | 0.8000 | 0.7704 | 0.1933 | 0.0925 |
| 0.1528 | 66.0 | 825 | 0.7419 | 0.8 | 0.3115 | 1.3663 | 0.8000 | 0.7704 | 0.1986 | 0.0913 |
| 0.1528 | 66.96 | 837 | 0.7434 | 0.8 | 0.3120 | 1.3672 | 0.8000 | 0.7704 | 0.1929 | 0.0927 |
| 0.1528 | 68.0 | 850 | 0.7414 | 0.8 | 0.3115 | 1.3649 | 0.8000 | 0.7704 | 0.1988 | 0.0923 |
| 0.1528 | 68.96 | 862 | 0.7448 | 0.8 | 0.3129 | 1.3685 | 0.8000 | 0.7704 | 0.2251 | 0.0928 |
| 0.1528 | 70.0 | 875 | 0.7450 | 0.8 | 0.3130 | 1.3657 | 0.8000 | 0.7704 | 0.1969 | 0.0934 |
| 0.1528 | 70.96 | 887 | 0.7464 | 0.8 | 0.3132 | 1.3686 | 0.8000 | 0.7704 | 0.1988 | 0.0946 |
| 0.1528 | 72.0 | 900 | 0.7465 | 0.8 | 0.3138 | 1.3682 | 0.8000 | 0.7707 | 0.2015 | 0.0935 |
| 0.1528 | 72.96 | 912 | 0.7471 | 0.8 | 0.3142 | 1.3685 | 0.8000 | 0.7696 | 0.2093 | 0.0936 |
| 0.1528 | 74.0 | 925 | 0.7476 | 0.8 | 0.3145 | 1.3685 | 0.8000 | 0.7704 | 0.2120 | 0.0942 |
| 0.1528 | 74.96 | 937 | 0.7433 | 0.8 | 0.3126 | 1.3655 | 0.8000 | 0.7696 | 0.1907 | 0.0918 |
| 0.1528 | 76.0 | 950 | 0.7519 | 0.8 | 0.3166 | 1.3683 | 0.8000 | 0.7704 | 0.2111 | 0.0949 |
| 0.1528 | 76.96 | 962 | 0.7485 | 0.8 | 0.3152 | 1.3679 | 0.8000 | 0.7696 | 0.2035 | 0.0942 |
| 0.1528 | 78.0 | 975 | 0.7496 | 0.8 | 0.3155 | 1.3716 | 0.8000 | 0.7704 | 0.1931 | 0.0940 |
| 0.1528 | 78.96 | 987 | 0.7513 | 0.8 | 0.3163 | 1.3684 | 0.8000 | 0.7704 | 0.1888 | 0.0952 |
| 0.1059 | 80.0 | 1000 | 0.7490 | 0.8 | 0.3157 | 1.3707 | 0.8000 | 0.7691 | 0.2118 | 0.0943 |
| 0.1059 | 80.96 | 1012 | 0.7482 | 0.8 | 0.3151 | 1.3687 | 0.8000 | 0.7696 | 0.2060 | 0.0945 |
| 0.1059 | 82.0 | 1025 | 0.7516 | 0.8 | 0.3163 | 1.3682 | 0.8000 | 0.7704 | 0.2248 | 0.0950 |
| 0.1059 | 82.96 | 1037 | 0.7527 | 0.8 | 0.3174 | 1.3660 | 0.8000 | 0.7707 | 0.2204 | 0.0948 |
| 0.1059 | 84.0 | 1050 | 0.7495 | 0.8 | 0.3156 | 1.3663 | 0.8000 | 0.7704 | 0.2067 | 0.0939 |
| 0.1059 | 84.96 | 1062 | 0.7506 | 0.8 | 0.3161 | 1.3659 | 0.8000 | 0.7707 | 0.2086 | 0.0947 |
| 0.1059 | 86.0 | 1075 | 0.7537 | 0.8 | 0.3179 | 1.3687 | 0.8000 | 0.7698 | 0.2001 | 0.0955 |
| 0.1059 | 86.96 | 1087 | 0.7525 | 0.8 | 0.3172 | 1.3693 | 0.8000 | 0.7696 | 0.2096 | 0.0947 |
| 0.1059 | 88.0 | 1100 | 0.7535 | 0.8 | 0.3175 | 1.3675 | 0.8000 | 0.7704 | 0.2091 | 0.0951 |
| 0.1059 | 88.96 | 1112 | 0.7525 | 0.8 | 0.3172 | 1.3680 | 0.8000 | 0.7696 | 0.2009 | 0.0946 |
| 0.1059 | 90.0 | 1125 | 0.7566 | 0.8 | 0.3190 | 1.3725 | 0.8000 | 0.7698 | 0.1881 | 0.0964 |
| 0.1059 | 90.96 | 1137 | 0.7578 | 0.8 | 0.3195 | 1.3726 | 0.8000 | 0.7704 | 0.1880 | 0.0968 |
| 0.1059 | 92.0 | 1150 | 0.7560 | 0.8 | 0.3186 | 1.3715 | 0.8000 | 0.7707 | 0.2095 | 0.0969 |
| 0.1059 | 92.96 | 1162 | 0.7623 | 0.795 | 0.3219 | 1.3765 | 0.795 | 0.7681 | 0.1907 | 0.0979 |
| 0.1059 | 94.0 | 1175 | 0.7567 | 0.8 | 0.3192 | 1.3709 | 0.8000 | 0.7698 | 0.2000 | 0.0953 |
| 0.1059 | 94.96 | 1187 | 0.7538 | 0.8 | 0.3181 | 1.3708 | 0.8000 | 0.7691 | 0.1986 | 0.0958 |
| 0.1059 | 96.0 | 1200 | 0.7530 | 0.8 | 0.3175 | 1.3725 | 0.8000 | 0.7693 | 0.2122 | 0.0958 |
| 0.1059 | 96.96 | 1212 | 0.7607 | 0.8 | 0.3207 | 1.3730 | 0.8000 | 0.7709 | 0.1906 | 0.0972 |
| 0.1059 | 98.0 | 1225 | 0.7647 | 0.79 | 0.3229 | 1.3411 | 0.79 | 0.7633 | 0.1889 | 0.0984 |
| 0.1059 | 98.96 | 1237 | 0.7568 | 0.79 | 0.3191 | 1.4425 | 0.79 | 0.7644 | 0.1911 | 0.0976 |
| 0.1059 | 100.0 | 1250 | 0.7586 | 0.8 | 0.3200 | 1.4486 | 0.8000 | 0.7696 | 0.2009 | 0.0952 |
| 0.1059 | 100.96 | 1262 | 0.7552 | 0.8 | 0.3192 | 1.3728 | 0.8000 | 0.7696 | 0.1962 | 0.0953 |
| 0.1059 | 102.0 | 1275 | 0.7601 | 0.8 | 0.3217 | 1.4309 | 0.8000 | 0.7691 | 0.2071 | 0.0969 |
| 0.1059 | 102.96 | 1287 | 0.7608 | 0.795 | 0.3213 | 1.3702 | 0.795 | 0.7668 | 0.2068 | 0.0967 |
| 0.1059 | 104.0 | 1300 | 0.7590 | 0.795 | 0.3202 | 1.3748 | 0.795 | 0.7668 | 0.1922 | 0.0967 |
| 0.1059 | 104.96 | 1312 | 0.7626 | 0.795 | 0.3222 | 1.3775 | 0.795 | 0.7673 | 0.1917 | 0.0974 |
| 0.1059 | 106.0 | 1325 | 0.7632 | 0.795 | 0.3228 | 1.3765 | 0.795 | 0.7664 | 0.2072 | 0.0977 |
| 0.1059 | 106.96 | 1337 | 0.7612 | 0.795 | 0.3223 | 1.3764 | 0.795 | 0.7666 | 0.2011 | 0.0974 |
| 0.1059 | 108.0 | 1350 | 0.7669 | 0.79 | 0.3246 | 1.3777 | 0.79 | 0.7639 | 0.2005 | 0.0981 |
| 0.1059 | 108.96 | 1362 | 0.7658 | 0.795 | 0.3238 | 1.3782 | 0.795 | 0.7673 | 0.2099 | 0.0983 |
| 0.1059 | 110.0 | 1375 | 0.7632 | 0.79 | 0.3232 | 1.3766 | 0.79 | 0.7639 | 0.2002 | 0.0978 |
| 0.1059 | 110.96 | 1387 | 0.7651 | 0.79 | 0.3236 | 1.3758 | 0.79 | 0.7580 | 0.2134 | 0.0980 |
| 0.1059 | 112.0 | 1400 | 0.7649 | 0.79 | 0.3235 | 1.3765 | 0.79 | 0.7583 | 0.1975 | 0.0982 |
| 0.1059 | 112.96 | 1412 | 0.7683 | 0.79 | 0.3253 | 1.3759 | 0.79 | 0.7639 | 0.1913 | 0.0986 |
| 0.1059 | 114.0 | 1425 | 0.7682 | 0.795 | 0.3253 | 1.3778 | 0.795 | 0.7668 | 0.1986 | 0.0984 |
| 0.1059 | 114.96 | 1437 | 0.7666 | 0.79 | 0.3245 | 1.3771 | 0.79 | 0.7639 | 0.1838 | 0.0976 |
| 0.1059 | 116.0 | 1450 | 0.7685 | 0.79 | 0.3255 | 1.3802 | 0.79 | 0.7639 | 0.1962 | 0.0988 |
| 0.1059 | 116.96 | 1462 | 0.7676 | 0.785 | 0.3249 | 1.3783 | 0.785 | 0.7554 | 0.2011 | 0.0991 |
| 0.1059 | 118.0 | 1475 | 0.7704 | 0.785 | 0.3262 | 1.3789 | 0.785 | 0.7516 | 0.2098 | 0.0995 |
| 0.1059 | 118.96 | 1487 | 0.7701 | 0.785 | 0.3262 | 1.3806 | 0.785 | 0.7554 | 0.2167 | 0.0991 |
| 0.0842 | 120.0 | 1500 | 0.7708 | 0.79 | 0.3267 | 1.3815 | 0.79 | 0.7639 | 0.1934 | 0.0990 |
| 0.0842 | 120.96 | 1512 | 0.7710 | 0.785 | 0.3265 | 1.3790 | 0.785 | 0.7516 | 0.1928 | 0.0995 |
| 0.0842 | 122.0 | 1525 | 0.7728 | 0.79 | 0.3274 | 1.3830 | 0.79 | 0.7639 | 0.1917 | 0.0996 |
| 0.0842 | 122.96 | 1537 | 0.7708 | 0.785 | 0.3267 | 1.3826 | 0.785 | 0.7554 | 0.2057 | 0.0992 |
| 0.0842 | 124.0 | 1550 | 0.7698 | 0.785 | 0.3262 | 1.3777 | 0.785 | 0.7554 | 0.2021 | 0.0996 |
| 0.0842 | 124.96 | 1562 | 0.7706 | 0.785 | 0.3267 | 1.3825 | 0.785 | 0.7554 | 0.1949 | 0.0994 |
| 0.0842 | 126.0 | 1575 | 0.7751 | 0.79 | 0.3290 | 1.3827 | 0.79 | 0.7639 | 0.1945 | 0.1002 |
| 0.0842 | 126.96 | 1587 | 0.7734 | 0.79 | 0.3282 | 1.3855 | 0.79 | 0.7639 | 0.1882 | 0.0998 |
| 0.0842 | 128.0 | 1600 | 0.7753 | 0.785 | 0.3290 | 1.3842 | 0.785 | 0.7554 | 0.1926 | 0.1006 |
| 0.0842 | 128.96 | 1612 | 0.7731 | 0.78 | 0.3278 | 1.3808 | 0.78 | 0.7488 | 0.2115 | 0.0994 |
| 0.0842 | 130.0 | 1625 | 0.7723 | 0.78 | 0.3276 | 1.3830 | 0.78 | 0.7486 | 0.2005 | 0.0994 |
| 0.0842 | 130.96 | 1637 | 0.7746 | 0.78 | 0.3287 | 1.3810 | 0.78 | 0.7488 | 0.2103 | 0.0999 |
| 0.0842 | 132.0 | 1650 | 0.7758 | 0.78 | 0.3291 | 1.3848 | 0.78 | 0.7488 | 0.2138 | 0.1004 |
| 0.0842 | 132.96 | 1662 | 0.7771 | 0.78 | 0.3299 | 1.3833 | 0.78 | 0.7488 | 0.2080 | 0.1007 |
| 0.0842 | 134.0 | 1675 | 0.7757 | 0.78 | 0.3295 | 1.3837 | 0.78 | 0.7488 | 0.2060 | 0.0999 |
| 0.0842 | 134.96 | 1687 | 0.7754 | 0.78 | 0.3292 | 1.3836 | 0.78 | 0.7488 | 0.2070 | 0.1001 |
| 0.0842 | 136.0 | 1700 | 0.7755 | 0.78 | 0.3294 | 1.3834 | 0.78 | 0.7488 | 0.2057 | 0.0999 |
| 0.0842 | 136.96 | 1712 | 0.7755 | 0.78 | 0.3295 | 1.3874 | 0.78 | 0.7488 | 0.1999 | 0.0998 |
| 0.0842 | 138.0 | 1725 | 0.7759 | 0.78 | 0.3296 | 1.4380 | 0.78 | 0.7488 | 0.2007 | 0.1002 |
| 0.0842 | 138.96 | 1737 | 0.7776 | 0.78 | 0.3303 | 1.3868 | 0.78 | 0.7488 | 0.2091 | 0.1002 |
| 0.0842 | 140.0 | 1750 | 0.7780 | 0.78 | 0.3304 | 1.3868 | 0.78 | 0.7488 | 0.2086 | 0.1005 |
| 0.0842 | 140.96 | 1762 | 0.7780 | 0.78 | 0.3306 | 1.3855 | 0.78 | 0.7488 | 0.2065 | 0.1001 |
| 0.0842 | 142.0 | 1775 | 0.7789 | 0.78 | 0.3311 | 1.3855 | 0.78 | 0.7488 | 0.2067 | 0.1003 |
| 0.0842 | 142.96 | 1787 | 0.7798 | 0.78 | 0.3314 | 1.3856 | 0.78 | 0.7488 | 0.2083 | 0.1010 |
| 0.0842 | 144.0 | 1800 | 0.7799 | 0.78 | 0.3315 | 1.3914 | 0.78 | 0.7488 | 0.2161 | 0.1004 |
| 0.0842 | 144.96 | 1812 | 0.7806 | 0.78 | 0.3317 | 1.3857 | 0.78 | 0.7488 | 0.2024 | 0.1008 |
| 0.0842 | 146.0 | 1825 | 0.7817 | 0.78 | 0.3322 | 1.3947 | 0.78 | 0.7488 | 0.2043 | 0.1009 |
| 0.0842 | 146.96 | 1837 | 0.7815 | 0.78 | 0.3324 | 1.3898 | 0.78 | 0.7488 | 0.2118 | 0.1006 |
| 0.0842 | 148.0 | 1850 | 0.7820 | 0.78 | 0.3326 | 1.3874 | 0.78 | 0.7488 | 0.2114 | 0.1008 |
| 0.0842 | 148.96 | 1862 | 0.7821 | 0.78 | 0.3327 | 1.4391 | 0.78 | 0.7488 | 0.2087 | 0.1006 |
| 0.0842 | 150.0 | 1875 | 0.7816 | 0.78 | 0.3324 | 1.4410 | 0.78 | 0.7488 | 0.2160 | 0.1006 |
| 0.0842 | 150.96 | 1887 | 0.7826 | 0.78 | 0.3328 | 1.3927 | 0.78 | 0.7488 | 0.2011 | 0.1007 |
| 0.0842 | 152.0 | 1900 | 0.7834 | 0.78 | 0.3332 | 1.4411 | 0.78 | 0.7488 | 0.1994 | 0.1009 |
| 0.0842 | 152.96 | 1912 | 0.7830 | 0.78 | 0.3331 | 1.4409 | 0.78 | 0.7488 | 0.1967 | 0.1008 |
| 0.0842 | 154.0 | 1925 | 0.7825 | 0.78 | 0.3329 | 1.4412 | 0.78 | 0.7488 | 0.2070 | 0.1008 |
| 0.0842 | 154.96 | 1937 | 0.7827 | 0.78 | 0.3332 | 1.4399 | 0.78 | 0.7488 | 0.2097 | 0.1007 |
| 0.0842 | 156.0 | 1950 | 0.7822 | 0.78 | 0.3329 | 1.4391 | 0.78 | 0.7488 | 0.1986 | 0.1006 |
| 0.0842 | 156.96 | 1962 | 0.7838 | 0.78 | 0.3335 | 1.4414 | 0.78 | 0.7488 | 0.1967 | 0.1007 |
| 0.0842 | 158.0 | 1975 | 0.7845 | 0.78 | 0.3337 | 1.3899 | 0.78 | 0.7488 | 0.2087 | 0.1008 |
| 0.0842 | 158.96 | 1987 | 0.7846 | 0.78 | 0.3340 | 1.4419 | 0.78 | 0.7488 | 0.2002 | 0.1007 |
| 0.0731 | 160.0 | 2000 | 0.7853 | 0.78 | 0.3343 | 1.4419 | 0.78 | 0.7488 | 0.1967 | 0.1010 |
| 0.0731 | 160.96 | 2012 | 0.7853 | 0.78 | 0.3343 | 1.4425 | 0.78 | 0.7488 | 0.1963 | 0.1012 |
| 0.0731 | 162.0 | 2025 | 0.7846 | 0.78 | 0.3341 | 1.4411 | 0.78 | 0.7488 | 0.1998 | 0.1008 |
| 0.0731 | 162.96 | 2037 | 0.7856 | 0.78 | 0.3345 | 1.4419 | 0.78 | 0.7488 | 0.2000 | 0.1011 |
| 0.0731 | 164.0 | 2050 | 0.7863 | 0.78 | 0.3348 | 1.4425 | 0.78 | 0.7488 | 0.1991 | 0.1014 |
| 0.0731 | 164.96 | 2062 | 0.7863 | 0.78 | 0.3349 | 1.4414 | 0.78 | 0.7488 | 0.1969 | 0.1013 |
| 0.0731 | 166.0 | 2075 | 0.7865 | 0.78 | 0.3349 | 1.4425 | 0.78 | 0.7488 | 0.1994 | 0.1015 |
| 0.0731 | 166.96 | 2087 | 0.7863 | 0.78 | 0.3349 | 1.4412 | 0.78 | 0.7488 | 0.1966 | 0.1015 |
| 0.0731 | 168.0 | 2100 | 0.7870 | 0.78 | 0.3352 | 1.4431 | 0.78 | 0.7488 | 0.1944 | 0.1016 |
| 0.0731 | 168.96 | 2112 | 0.7868 | 0.78 | 0.3351 | 1.4425 | 0.78 | 0.7488 | 0.1919 | 0.1017 |
| 0.0731 | 170.0 | 2125 | 0.7873 | 0.78 | 0.3354 | 1.4424 | 0.78 | 0.7488 | 0.1965 | 0.1017 |
| 0.0731 | 170.96 | 2137 | 0.7870 | 0.78 | 0.3352 | 1.4418 | 0.78 | 0.7488 | 0.2081 | 0.1014 |
| 0.0731 | 172.0 | 2150 | 0.7879 | 0.78 | 0.3356 | 1.4431 | 0.78 | 0.7488 | 0.1922 | 0.1017 |
| 0.0731 | 172.96 | 2162 | 0.7874 | 0.78 | 0.3355 | 1.4421 | 0.78 | 0.7488 | 0.1967 | 0.1015 |
| 0.0731 | 174.0 | 2175 | 0.7872 | 0.78 | 0.3355 | 1.4418 | 0.78 | 0.7488 | 0.1965 | 0.1016 |
| 0.0731 | 174.96 | 2187 | 0.7882 | 0.78 | 0.3358 | 1.4433 | 0.78 | 0.7488 | 0.1945 | 0.1017 |
| 0.0731 | 176.0 | 2200 | 0.7882 | 0.78 | 0.3358 | 1.4423 | 0.78 | 0.7488 | 0.1969 | 0.1016 |
| 0.0731 | 176.96 | 2212 | 0.7885 | 0.78 | 0.3361 | 1.4431 | 0.78 | 0.7488 | 0.1948 | 0.1017 |
| 0.0731 | 178.0 | 2225 | 0.7883 | 0.78 | 0.3359 | 1.4428 | 0.78 | 0.7488 | 0.1946 | 0.1017 |
| 0.0731 | 178.96 | 2237 | 0.7882 | 0.78 | 0.3359 | 1.4426 | 0.78 | 0.7488 | 0.1920 | 0.1017 |
| 0.0731 | 180.0 | 2250 | 0.7884 | 0.78 | 0.3360 | 1.4425 | 0.78 | 0.7488 | 0.2000 | 0.1016 |
| 0.0731 | 180.96 | 2262 | 0.7886 | 0.78 | 0.3361 | 1.4431 | 0.78 | 0.7488 | 0.1946 | 0.1017 |
| 0.0731 | 182.0 | 2275 | 0.7888 | 0.78 | 0.3362 | 1.4428 | 0.78 | 0.7488 | 0.1977 | 0.1016 |
| 0.0731 | 182.96 | 2287 | 0.7889 | 0.78 | 0.3362 | 1.4428 | 0.78 | 0.7488 | 0.1922 | 0.1017 |
| 0.0731 | 184.0 | 2300 | 0.7889 | 0.78 | 0.3362 | 1.4431 | 0.78 | 0.7488 | 0.1946 | 0.1017 |
| 0.0731 | 184.96 | 2312 | 0.7889 | 0.78 | 0.3362 | 1.4427 | 0.78 | 0.7488 | 0.1946 | 0.1017 |
| 0.0731 | 186.0 | 2325 | 0.7889 | 0.78 | 0.3362 | 1.4431 | 0.78 | 0.7488 | 0.1922 | 0.1018 |
| 0.0731 | 186.96 | 2337 | 0.7892 | 0.78 | 0.3364 | 1.4432 | 0.78 | 0.7488 | 0.1921 | 0.1017 |
| 0.0731 | 188.0 | 2350 | 0.7890 | 0.78 | 0.3363 | 1.4427 | 0.78 | 0.7488 | 0.1921 | 0.1017 |
| 0.0731 | 188.96 | 2362 | 0.7892 | 0.78 | 0.3364 | 1.4429 | 0.78 | 0.7488 | 0.1922 | 0.1017 |
| 0.0731 | 190.0 | 2375 | 0.7893 | 0.78 | 0.3364 | 1.4433 | 0.78 | 0.7488 | 0.1945 | 0.1017 |
| 0.0731 | 190.96 | 2387 | 0.7893 | 0.78 | 0.3364 | 1.4430 | 0.78 | 0.7488 | 0.1921 | 0.1017 |
| 0.0731 | 192.0 | 2400 | 0.7893 | 0.78 | 0.3364 | 1.4430 | 0.78 | 0.7488 | 0.1922 | 0.1018 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
asrimanth/person-thumbs-up-lora-no-cap
|
asrimanth
| 2023-07-12T17:31:28Z | 5 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-10T21:45:20Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - asrimanth/person-thumbs-up-lora-no-cap
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on a custom dataset. You can find some example images below.




|
zblaaa/t5-base-finetuned-ner_docred_30
|
zblaaa
| 2023-07-12T17:30:08Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T11:00:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-ner_docred_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-ner_docred_30
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1900
- Rouge1: 6.698
- Rouge2: 5.261
- Rougel: 6.6835
- Rougelsum: 6.6818
- Gen Len: 20.0
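A minimal generation sketch; the exact input/output format this NER fine-tune expects is not documented, so the example sentence below is illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "zblaaa/t5-base-finetuned-ner_docred_30"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Feed a sentence and decode the generated tag sequence
inputs = tokenizer("Barack Obama was born in Hawaii.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```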
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 125 | 0.5156 | 6.5406 | 4.9855 | 6.4905 | 6.494 | 20.0 |
| No log | 2.0 | 250 | 0.3949 | 6.5113 | 4.9122 | 6.4534 | 6.4453 | 20.0 |
| No log | 3.0 | 375 | 0.3280 | 6.5165 | 4.9088 | 6.4537 | 6.451 | 20.0 |
| 0.7311 | 4.0 | 500 | 0.2949 | 6.424 | 4.7298 | 6.3672 | 6.3627 | 20.0 |
| 0.7311 | 5.0 | 625 | 0.2764 | 6.6189 | 5.1219 | 6.5651 | 6.5672 | 20.0 |
| 0.7311 | 6.0 | 750 | 0.2633 | 6.628 | 5.1335 | 6.5664 | 6.5721 | 20.0 |
| 0.7311 | 7.0 | 875 | 0.2547 | 6.5591 | 4.9979 | 6.5075 | 6.5057 | 20.0 |
| 0.3331 | 8.0 | 1000 | 0.2482 | 6.6612 | 5.1918 | 6.5987 | 6.6068 | 20.0 |
| 0.3331 | 9.0 | 1125 | 0.2413 | 6.6093 | 5.0954 | 6.5515 | 6.5553 | 20.0 |
| 0.3331 | 10.0 | 1250 | 0.2357 | 6.6264 | 5.1201 | 6.5681 | 6.5723 | 20.0 |
| 0.3331 | 11.0 | 1375 | 0.2300 | 6.6487 | 5.1525 | 6.6176 | 6.6177 | 20.0 |
| 0.2788 | 12.0 | 1500 | 0.2226 | 6.6858 | 5.2325 | 6.6745 | 6.6762 | 20.0 |
| 0.2788 | 13.0 | 1625 | 0.2166 | 6.6495 | 5.1531 | 6.6378 | 6.6377 | 20.0 |
| 0.2788 | 14.0 | 1750 | 0.2108 | 6.6807 | 5.2212 | 6.6653 | 6.6664 | 20.0 |
| 0.2788 | 15.0 | 1875 | 0.2068 | 6.6811 | 5.2248 | 6.6699 | 6.6697 | 20.0 |
| 0.2435 | 16.0 | 2000 | 0.2030 | 6.6701 | 5.2077 | 6.652 | 6.6492 | 20.0 |
| 0.2435 | 17.0 | 2125 | 0.1997 | 6.6845 | 5.2334 | 6.6647 | 6.6624 | 20.0 |
| 0.2435 | 18.0 | 2250 | 0.1978 | 6.6762 | 5.2202 | 6.6571 | 6.6559 | 20.0 |
| 0.2435 | 19.0 | 2375 | 0.1964 | 6.684 | 5.2358 | 6.6695 | 6.6683 | 20.0 |
| 0.2188 | 20.0 | 2500 | 0.1957 | 6.6882 | 5.2426 | 6.675 | 6.6735 | 20.0 |
| 0.2188 | 21.0 | 2625 | 0.1942 | 6.6882 | 5.2426 | 6.675 | 6.6735 | 20.0 |
| 0.2188 | 22.0 | 2750 | 0.1932 | 6.6935 | 5.2513 | 6.6784 | 6.6762 | 20.0 |
| 0.2188 | 23.0 | 2875 | 0.1924 | 6.6935 | 5.2513 | 6.6784 | 6.6762 | 20.0 |
| 0.2052 | 24.0 | 3000 | 0.1918 | 6.6882 | 5.2426 | 6.675 | 6.6735 | 20.0 |
| 0.2052 | 25.0 | 3125 | 0.1915 | 6.6935 | 5.2513 | 6.6784 | 6.6762 | 20.0 |
| 0.2052 | 26.0 | 3250 | 0.1908 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
| 0.2052 | 27.0 | 3375 | 0.1905 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
| 0.1977 | 28.0 | 3500 | 0.1901 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
| 0.1977 | 29.0 | 3625 | 0.1900 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
| 0.1977 | 30.0 | 3750 | 0.1900 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.1.0.dev20230611+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V12.3
|
SHENMU007
| 2023-07-12T17:27:42Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-12T14:29:43Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
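A minimal TTS sketch following the standard SpeechT5 recipe; the x-vector below is a generic CMU ARCTIC speaker embedding, not the voice this checkpoint was tuned for:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

ckpt = "SHENMU007/neunit_BASE_V12.3"
processor = SpeechT5Processor.from_pretrained(ckpt)
model = SpeechT5ForTextToSpeech.from_pretrained(ckpt)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Borrow a speaker embedding from the CMU ARCTIC x-vector dataset
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="你好,欢迎使用语音合成。", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```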
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sl-alex/llama-7b-alpaca-stepwise-lora
|
sl-alex
| 2023-07-12T17:22:09Z | 0 | 0 | null |
[
"en",
"dataset:sl-alex/openai-prm800k-solutions-only",
"license:apache-2.0",
"region:us"
] | null | 2023-07-12T15:00:40Z |
---
license: apache-2.0
language:
- en
datasets:
- sl-alex/openai-prm800k-solutions-only
---
Finetunes Llama-7b+Alpaca to solve problems via stepwise reasoning (OpenAI [PRM800k dataset](https://github.com/openai/prm800k), or rather our postprocessed version, [`sl-alex/openai-prm800k-solutions-only`](https://huggingface.co/datasets/sl-alex/openai-prm800k-solutions-only)).
## Model description
This is a fork of [`llama-7b`](https://huggingface.co/huggyllama/llama-7b) + [`tloen/alpaca-lora-7b`](https://huggingface.co/tloen/alpaca-lora-7b).
That is: we loaded Llama-7b, applied the Alpaca LoRA, expanded the vocabulary, then QLoRA 4-bit finetuned from there.
Parts:
- base model [`llama-7b`](https://huggingface.co/huggyllama/llama-7b)
- LoRA 0 [`tloen/alpaca-lora-7b`](https://huggingface.co/tloen/alpaca-lora-7b)
- LoRA 1
- `adapter_config.json`
- `adapter_model.bin`
- tokenizer (this is just [`llama-7b`](https://huggingface.co/huggyllama/llama-7b)'s tokenizer, plus a `[PAD]` token for which no embedding was learned; in practice you can run inference on a batch of 1 using the original [`llama-7b`](https://huggingface.co/huggyllama/llama-7b) tokenizer instead)
- `added_tokens.json`
- `special_tokens_map.json`
- `tokenizer.model`
- `tokenizer_config.json`
## Training
Trained using [`qlora.py`](https://github.com/scottlogic-alex/qlora/blob/stepwise/qlora.py) from our [`stepwise`](https://github.com/scottlogic-alex/qlora/tree/stepwise) branch of [qlora](https://github.com/artidoro/qlora).
Known-good as of commit [`3a86919`](https://github.com/scottlogic-alex/qlora/blob/3a8691986b6718562bcd8e3522447b52842c1d9a/qlora.py).
```bash
python -m qlora \
  --model_name_or_path huggyllama/llama-7b \
  --lora_name_or_path tloen/alpaca-lora-7b \
  --dataset prm800k-solutions \
  --dataset_format prm800k-solutions \
  --bf16 \
  --max_memory_MB 24000 \
  --use_bos_token_in_prompt \
  --truncate_toward_center \
  --source_max_len 184 \
  --target_max_len 998 \
  --gradient_accumulation_steps 4 \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 4 \
  --learning_rate 0.0002 \
  --run_name 13b_alpaca_special_tokens_long \
  --report_to wandb \
  --save_steps 64 \
  --save_total_limit 3 \
  --max_steps 1664 \
  --evaluation_strategy steps \
  --eval_steps 64 \
  --generate_steps 16
```
## Usage
You can load using [`evaluate.py`](https://github.com/scottlogic-alex/qlora/blob/stepwise/evaluate.py#L209-L278) from our [`stepwise`](https://github.com/scottlogic-alex/qlora/tree/stepwise) branch of [qlora](https://github.com/artidoro/qlora).
Known-good as of commit [`3a86919`](https://github.com/scottlogic-alex/qlora/blob/3a8691986b6718562bcd8e3522447b52842c1d9a/evaluate.py).
```bash
git clone -b stepwise https://github.com/scottlogic-alex/qlora.git
cd qlora
python -m evaluate \
--model_name_or_path huggyllama/llama-7b \
--base_lora_model_name_or_path tloen/alpaca-lora-7b \
--tokenizer_model_name_or_path huggyllama/llama-7b \
--lora_model_name_or_path sl-alex/llama-7b-alpaca-stepwise-lora \
--use_bos_token_in_prompt \
--overrun_countermeasures False
```
## Prompt style
The model is finetuned for instruction-response.
`prompt = '<s>Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nHow do I feed a family of 5 without ingredients?\n\n### Response:\n<|step_start|>'`
## Example output:
`$ How do I drink a matcha latte?`
`I’ve never drunk a matcha latte before, but I’ve seen people do it online. It looks like they dump the entire contents of a little teabag into a frothed milk drink and then mix it well. I think I could do that.<|step_end|><|step_start|>I’ll have to buy a matcha latte kit first, because I don’t know what they have in the teabags. I’ll see if my favorite coffee shop sells them.<|step_end|><|step_start|>I’ll also need a whisk or a mixer. I’ll look online for the best one. Which one do you recommend?<|step_end|><|step_start|>I’ll need to heat up the milk first. Which will you use -- the microwave or the stovetop?<|step_end|><|step_start|>I’ll warm up 16 ounces of milk in the microwave. For the stovetop, I’ll just put it on the burner for two minutes. I don’t want it to boil.`
`$ What is the third Fibonacci number?`
`I need to find the third Fibonacci number, which is the number after the second, which is F(2).<|step_end|><|step_start|>There are two ways to get the third Fibonacci number. I could either wait one more day and get F(3) using the Fibonacci sequence, or I could do something more clever and faster by using a computer. <|step_end|><|step_start|>Computers are great for mathematics, so I'll see if I can get F(3) quicker with a computer.<|step_end|><|step_start|>First, though, I'll explain what Fibonacci means. It's named after a 13th-century Italian accountant named Leonardo Fibonacci who created a sequence of numbers where each following number is the sum of the two previous ones. <|step_end|><|step_start|>The Fibonacci sequence begins 0, 1, 1, 2, 3, 5, 8, 13, 21, and so on. <|step_end|><|`
`$ How do I make a cup of tea?`
`First, I find a teaspoonful of loose-leaf Assam tea in a teapot. Then, I fill the teapot with water that has been warming up over a stovetop burner.<|step_end|><|step_start|>Once the water has reached a rolling boil, I pour it over the tea, cover the pot, and steam it for at least two minutes. Uncover the pot, remove the lid, and serve. That's how to make a cup of tea.`
`<|response_start|>I've always thought Assam tea was an Indian tea, but this web site says it comes from the north-center part of Bangladesh. I wonder if there's a difference between Assam and Bangladesh tea.<|Response_End|><|step_start|>BANGLADESH tea has all the qualities of Assam, but the difference in the final cup lies in the aroma and the lingering aftertaste.`
`<|step_end|><|response_start|>The Assam tea comes in very large, flat pieces of processed leaves. They are cut from the tree and rolled`
`$ How do I open a jar?`
`I need to find a jar opener at my school. I can't just use my mouth because it's too dangerous to choke on the lid, and it would ruin my school uniform. I could bring a bottle opener from home, but it might not work. I also can't use a knife, because that could get me into trouble with the headmaster. The solution is to ask a friendly student to open the jar for me.</|step_end|><|step_start|>I find a student who is willing to help. We both hold the jar with our hands on the inside and outside of the rim, and she uses her nails to pry the lid off. She has been doing this for years, and it looks easy, but I should be careful not to scratch myself.</|step_end|><|step_start|>We set the jar down and all the pieces are there: the lid, the jar, and the plastic bag. I can never open another jar without feeling proud of myself. My classmates are impressed by my skills as a consumer.</|step_end|><|step_start|>I should keep the lid as a souvenir, but`
|
henda/bert-multilingual-ner
|
henda
| 2023-07-12T17:17:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-12T08:39:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-multilingual-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-multilingual-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0678
- Precision: 0.7057
- Recall: 0.7305
- F1: 0.7179
- Accuracy: 0.9734
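A minimal sketch for tagging entities; the label set depends on the (undocumented) training data:
```python
from transformers import pipeline

# Group subword predictions into whole entities
ner = pipeline("token-classification", model="henda/bert-multilingual-ner", aggregation_strategy="simple")
print(ner("Angela Merkel besuchte Paris im Juli."))
```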
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0946 | 0.06 | 10000 | 0.0974 | 0.5971 | 0.6368 | 0.6163 | 0.9612 |
| 0.0935 | 0.12 | 20000 | 0.0892 | 0.6258 | 0.6633 | 0.6440 | 0.9652 |
| 0.0829 | 0.17 | 30000 | 0.0913 | 0.6393 | 0.6557 | 0.6474 | 0.9653 |
| 0.086 | 0.23 | 40000 | 0.0829 | 0.6868 | 0.6131 | 0.6478 | 0.9679 |
| 0.0836 | 0.29 | 50000 | 0.0806 | 0.6450 | 0.7019 | 0.6722 | 0.9673 |
| 0.0813 | 0.35 | 60000 | 0.0784 | 0.6718 | 0.6823 | 0.6770 | 0.9694 |
| 0.0756 | 0.41 | 70000 | 0.0771 | 0.6559 | 0.7217 | 0.6872 | 0.9696 |
| 0.0753 | 0.47 | 80000 | 0.0733 | 0.6944 | 0.6660 | 0.6799 | 0.9706 |
| 0.0716 | 0.52 | 90000 | 0.0781 | 0.6698 | 0.7151 | 0.6917 | 0.9704 |
| 0.0785 | 0.58 | 100000 | 0.0723 | 0.6936 | 0.6960 | 0.6948 | 0.9714 |
| 0.0707 | 0.64 | 110000 | 0.0729 | 0.6943 | 0.7098 | 0.7020 | 0.9718 |
| 0.0699 | 0.7 | 120000 | 0.0714 | 0.6928 | 0.7200 | 0.7061 | 0.9717 |
| 0.0729 | 0.76 | 130000 | 0.0715 | 0.6887 | 0.7305 | 0.7090 | 0.9722 |
| 0.0696 | 0.82 | 140000 | 0.0703 | 0.6826 | 0.7479 | 0.7137 | 0.9722 |
| 0.0654 | 0.87 | 150000 | 0.0697 | 0.6851 | 0.7500 | 0.7161 | 0.9727 |
| 0.0636 | 0.93 | 160000 | 0.0684 | 0.6968 | 0.7410 | 0.7183 | 0.9731 |
| 0.0641 | 0.99 | 170000 | 0.0679 | 0.7047 | 0.7315 | 0.7179 | 0.9734 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
asrimanth/person-thumbs-up-plain-lora
|
asrimanth
| 2023-07-12T17:12:05Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-11T22:22:40Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - asrimanth/person-thumbs-up-plain-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on a custom dataset. You can find some example images below.




|
Winmodel/poca-SoccerTwos
|
Winmodel
| 2023-07-12T17:03:16Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-12T17:02:13Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Winmodel/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jmarcolan/joao_funciona2
|
jmarcolan
| 2023-07-12T17:03:08Z | 2 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-12T16:50:45Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
veluchs/q-FrozenLake-v1-4x4-noSlippery
|
veluchs
| 2023-07-12T16:47:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T16:47:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is defined in the Deep RL course notebook: it downloads and unpickles the saved Q-table
model = load_from_hub(repo_id="veluchs/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Rui31415/ppo-Huggy
|
Rui31415
| 2023-07-12T16:26:21Z | 25 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-12T16:26:17Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Rui31415/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Hinata197/videomae-base-finetuned-ucf101-subset
|
Hinata197
| 2023-07-12T16:20:32Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-07-12T16:02:00Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1006
- Accuracy: 0.9714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
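For reference, these settings map onto 🤗 `TrainingArguments` roughly as follows — a sketch reconstructing the configuration above, not the exact training script (`output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Sketch of the run configuration listed above; Adam with betas=(0.9, 0.999)
# and epsilon=1e-08 is the Trainer default optimizer.
args = TrainingArguments(
    output_dir="videomae-base-finetuned-ucf101-subset",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=600,
)
```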
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6882 | 0.25 | 150 | 1.3392 | 0.4714 |
| 0.3556 | 1.25 | 300 | 0.5520 | 0.7857 |
| 0.1375 | 2.25 | 450 | 0.2829 | 0.8857 |
| 0.0361 | 3.25 | 600 | 0.1006 | 0.9714 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
0sunfire0/q-FrozenLake-v1-4x4-noSlippery
|
0sunfire0
| 2023-07-12T16:13:41Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T16:13:39Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or: import gymnasium as gym

# `load_from_hub` is the pickle-based download helper from the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="0sunfire0/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
YanJiangJerry/covid-augment-tweet-roberta-large-e4
|
YanJiangJerry
| 2023-07-12T16:11:00Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T14:31:14Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-augment-tweet-roberta-large-e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-augment-tweet-roberta-large-e4
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5214
- Accuracy: 0.9510
- F1: 0.8707
- Precision: 0.8562
- Recall: 0.8858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1484 | 1.0 | 4089 | 0.4570 | 0.9265 | 0.8273 | 0.7358 | 0.9446 |
| 0.0738 | 2.0 | 8178 | 0.4873 | 0.9368 | 0.8414 | 0.7903 | 0.8997 |
| 0.0265 | 3.0 | 12267 | 0.4970 | 0.9484 | 0.8587 | 0.8773 | 0.8408 |
| 0.0043 | 4.0 | 16356 | 0.5214 | 0.9510 | 0.8707 | 0.8562 | 0.8858 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
phoenix-glr/wav2vec2-base-finetuned-ks
|
phoenix-glr
| 2023-07-12T15:57:35Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-12T13:54:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0855
- Accuracy: 0.9844
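A minimal inference sketch with the `transformers` pipeline — the audio path is a placeholder, and the input is assumed to be a short 16 kHz speech clip:
```python
from transformers import pipeline

# Keyword-spotting classifier fine-tuned on the superb "ks" task
clf = pipeline("audio-classification", model="phoenix-glr/wav2vec2-base-finetuned-ks")
print(clf("sample.wav"))  # placeholder path; decoding the file requires ffmpeg
```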
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6422 | 1.0 | 399 | 0.5417 | 0.9622 |
| 0.2815 | 2.0 | 798 | 0.1712 | 0.9771 |
| 0.2034 | 3.0 | 1197 | 0.1125 | 0.9815 |
| 0.1486 | 4.0 | 1597 | 0.0961 | 0.9809 |
| 0.1262 | 5.0 | 1995 | 0.0855 | 0.9844 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
LarryAIDraw/brj-saori-V1
|
LarryAIDraw
| 2023-07-12T15:55:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-12T15:27:13Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/81104/saori-blue-archive
|
LarryAIDraw/silverwolf-000002
|
LarryAIDraw
| 2023-07-12T15:55:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-12T15:26:29Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/106319/lora-honkai-star-rail-silverwolf
|
LarryAIDraw/rezerowitches-25
|
LarryAIDraw
| 2023-07-12T15:54:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-12T15:23:05Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/107626/rezero-witches-pack-7-in-1-echidna-satella-minerva-daphne-typhon-carmilla-sekhmet
|
Ruxun/yolov8-player
|
Ruxun
| 2023-07-12T15:35:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-05T21:17:20Z |
Fine-tuned model based on YOLOv8 for Harvard basketball team player detection.
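A minimal inference sketch with the `ultralytics` package; the weight filename `best.pt` and the input image are hypothetical placeholders:
```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder: use the checkpoint file shipped in this repo
results = model("game_frame.jpg")  # placeholder image of a basketball game
results[0].show()  # visualize the detected players
```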
|
bofenghuang/vigogne-7b-chat
|
bofenghuang
| 2023-07-12T15:33:38Z | 1,478 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"LLM",
"fr",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-15T14:07:53Z |
---
license: openrail
language:
- fr
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- LLM
inference: false
---
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-7b-chat/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-7B-Chat: A French Chat LLaMA Model
Vigogne-7B-Chat is a LLaMA-7B model fine-tuned to conduct multi-turn dialogues in 🇫🇷 French between a human user and an AI assistant.
For more information, please visit the GitHub repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vigogne.preprocess import generate_inference_chat_prompt
model_name_or_path = "bofenghuang/vigogne-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
user_query = "Expliquez la différence entre DoS et phishing."
prompt = generate_inference_chat_prompt([[user_query, ""]], tokenizer=tokenizer)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=0.1,
do_sample=True,
repetition_penalty=1.0,
max_new_tokens=512,
),
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
You can also run inference on this model using the following Google Colab notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that the model may generate harmful or biased content, incorrect information, or generally unhelpful answers.
|
jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t2.5_a0.7
|
jordyvl
| 2023-07-12T15:32:48Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T14:55:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t2.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t2.5_a0.7
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7789
- Accuracy: 0.565
- Brier Loss: 0.5798
- Nll: 2.3548
- F1 Micro: 0.565
- F1 Macro: 0.5569
- Ece: 0.1677
- Aurc: 0.2032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 4.7719 | 0.04 | 1.0712 | 7.3456 | 0.04 | 0.0315 | 0.2816 | 0.9543 |
| No log | 2.0 | 14 | 3.7636 | 0.1 | 0.9486 | 5.7989 | 0.1000 | 0.0863 | 0.1625 | 0.8854 |
| No log | 3.0 | 21 | 3.3871 | 0.235 | 0.8838 | 5.3315 | 0.235 | 0.1857 | 0.1589 | 0.6274 |
| No log | 4.0 | 28 | 2.8780 | 0.2975 | 0.7900 | 3.5467 | 0.2975 | 0.2884 | 0.1672 | 0.4712 |
| No log | 5.0 | 35 | 2.5223 | 0.3875 | 0.7164 | 3.0188 | 0.3875 | 0.3630 | 0.1596 | 0.3495 |
| No log | 6.0 | 42 | 2.3634 | 0.43 | 0.6927 | 3.1378 | 0.4300 | 0.4031 | 0.1877 | 0.3055 |
| No log | 7.0 | 49 | 2.3487 | 0.445 | 0.7009 | 3.0084 | 0.445 | 0.4123 | 0.2026 | 0.3003 |
| No log | 8.0 | 56 | 2.2521 | 0.47 | 0.6857 | 2.8634 | 0.47 | 0.4607 | 0.1977 | 0.2918 |
| No log | 9.0 | 63 | 2.3597 | 0.4775 | 0.6955 | 2.6768 | 0.4775 | 0.4642 | 0.2215 | 0.2787 |
| No log | 10.0 | 70 | 2.3524 | 0.46 | 0.7130 | 2.7830 | 0.46 | 0.4441 | 0.2429 | 0.2941 |
| No log | 11.0 | 77 | 2.4233 | 0.46 | 0.7108 | 3.0867 | 0.46 | 0.4418 | 0.2247 | 0.2828 |
| No log | 12.0 | 84 | 2.2723 | 0.485 | 0.6901 | 2.7313 | 0.485 | 0.4742 | 0.2293 | 0.2774 |
| No log | 13.0 | 91 | 2.2818 | 0.49 | 0.7007 | 2.5829 | 0.49 | 0.4859 | 0.2473 | 0.2791 |
| No log | 14.0 | 98 | 2.1682 | 0.4975 | 0.6695 | 2.8292 | 0.4975 | 0.4815 | 0.2170 | 0.2653 |
| No log | 15.0 | 105 | 2.0652 | 0.52 | 0.6520 | 2.4319 | 0.52 | 0.5120 | 0.2079 | 0.2617 |
| No log | 16.0 | 112 | 2.0524 | 0.5225 | 0.6384 | 2.5273 | 0.5225 | 0.5128 | 0.1980 | 0.2391 |
| No log | 17.0 | 119 | 2.1736 | 0.48 | 0.6765 | 2.4887 | 0.48 | 0.4809 | 0.2112 | 0.2834 |
| No log | 18.0 | 126 | 2.0096 | 0.515 | 0.6496 | 2.4230 | 0.515 | 0.5215 | 0.1957 | 0.2513 |
| No log | 19.0 | 133 | 2.0207 | 0.5175 | 0.6417 | 2.4591 | 0.5175 | 0.5104 | 0.2011 | 0.2526 |
| No log | 20.0 | 140 | 1.9152 | 0.5425 | 0.6044 | 2.5069 | 0.5425 | 0.5405 | 0.1826 | 0.2147 |
| No log | 21.0 | 147 | 1.9600 | 0.52 | 0.6332 | 2.5016 | 0.52 | 0.5204 | 0.1871 | 0.2427 |
| No log | 22.0 | 154 | 1.9325 | 0.515 | 0.6226 | 2.4981 | 0.515 | 0.5078 | 0.1860 | 0.2323 |
| No log | 23.0 | 161 | 1.9172 | 0.53 | 0.6144 | 2.4601 | 0.53 | 0.5221 | 0.1941 | 0.2311 |
| No log | 24.0 | 168 | 1.8891 | 0.5425 | 0.6091 | 2.4653 | 0.5425 | 0.5399 | 0.1933 | 0.2260 |
| No log | 25.0 | 175 | 1.9460 | 0.5175 | 0.6214 | 2.4785 | 0.5175 | 0.5168 | 0.1694 | 0.2335 |
| No log | 26.0 | 182 | 1.9060 | 0.5525 | 0.5970 | 2.4789 | 0.5525 | 0.5430 | 0.1934 | 0.2137 |
| No log | 27.0 | 189 | 1.9421 | 0.5375 | 0.6205 | 2.4814 | 0.5375 | 0.5313 | 0.2135 | 0.2402 |
| No log | 28.0 | 196 | 2.0195 | 0.545 | 0.6187 | 2.5330 | 0.545 | 0.5256 | 0.1800 | 0.2360 |
| No log | 29.0 | 203 | 1.9428 | 0.535 | 0.6167 | 2.4894 | 0.535 | 0.5111 | 0.1862 | 0.2322 |
| No log | 30.0 | 210 | 1.8996 | 0.5225 | 0.6207 | 2.4810 | 0.5225 | 0.5137 | 0.1994 | 0.2330 |
| No log | 31.0 | 217 | 1.8462 | 0.54 | 0.6119 | 2.4201 | 0.54 | 0.5341 | 0.1817 | 0.2190 |
| No log | 32.0 | 224 | 1.8324 | 0.55 | 0.5988 | 2.4230 | 0.55 | 0.5427 | 0.1888 | 0.2171 |
| No log | 33.0 | 231 | 1.8393 | 0.545 | 0.5977 | 2.3943 | 0.545 | 0.5339 | 0.1838 | 0.2172 |
| No log | 34.0 | 238 | 1.8704 | 0.5475 | 0.6081 | 2.4488 | 0.5475 | 0.5427 | 0.1768 | 0.2200 |
| No log | 35.0 | 245 | 1.8546 | 0.54 | 0.6022 | 2.3273 | 0.54 | 0.5316 | 0.1847 | 0.2226 |
| No log | 36.0 | 252 | 1.8608 | 0.53 | 0.5972 | 2.5153 | 0.53 | 0.5139 | 0.1810 | 0.2202 |
| No log | 37.0 | 259 | 1.8663 | 0.5325 | 0.6057 | 2.4642 | 0.5325 | 0.5243 | 0.1836 | 0.2205 |
| No log | 38.0 | 266 | 1.8300 | 0.545 | 0.5954 | 2.5101 | 0.545 | 0.5418 | 0.1890 | 0.2141 |
| No log | 39.0 | 273 | 1.8121 | 0.5625 | 0.5853 | 2.4397 | 0.5625 | 0.5550 | 0.1704 | 0.2110 |
| No log | 40.0 | 280 | 1.7916 | 0.54 | 0.5884 | 2.3565 | 0.54 | 0.5361 | 0.1685 | 0.2135 |
| No log | 41.0 | 287 | 1.8353 | 0.5575 | 0.5929 | 2.4252 | 0.5575 | 0.5451 | 0.1823 | 0.2116 |
| No log | 42.0 | 294 | 1.7999 | 0.5675 | 0.5839 | 2.4820 | 0.5675 | 0.5631 | 0.1729 | 0.2045 |
| No log | 43.0 | 301 | 1.8622 | 0.52 | 0.6106 | 2.4823 | 0.52 | 0.5028 | 0.1948 | 0.2270 |
| No log | 44.0 | 308 | 1.7892 | 0.55 | 0.5892 | 2.3342 | 0.55 | 0.5450 | 0.1798 | 0.2126 |
| No log | 45.0 | 315 | 1.7978 | 0.545 | 0.5868 | 2.4345 | 0.545 | 0.5439 | 0.1894 | 0.2094 |
| No log | 46.0 | 322 | 1.7697 | 0.56 | 0.5772 | 2.4272 | 0.56 | 0.5585 | 0.1601 | 0.1997 |
| No log | 47.0 | 329 | 1.7754 | 0.5475 | 0.5835 | 2.3977 | 0.5475 | 0.5438 | 0.1759 | 0.2059 |
| No log | 48.0 | 336 | 1.7922 | 0.545 | 0.5929 | 2.4119 | 0.545 | 0.5390 | 0.1891 | 0.2131 |
| No log | 49.0 | 343 | 1.8055 | 0.5625 | 0.5872 | 2.3654 | 0.5625 | 0.5497 | 0.1759 | 0.2073 |
| No log | 50.0 | 350 | 1.7972 | 0.56 | 0.5894 | 2.3366 | 0.56 | 0.5487 | 0.1803 | 0.2083 |
| No log | 51.0 | 357 | 1.7890 | 0.555 | 0.5815 | 2.3858 | 0.555 | 0.5501 | 0.1693 | 0.2067 |
| No log | 52.0 | 364 | 1.7958 | 0.5475 | 0.5883 | 2.4244 | 0.5475 | 0.5355 | 0.1910 | 0.2105 |
| No log | 53.0 | 371 | 1.7881 | 0.5675 | 0.5834 | 2.4135 | 0.5675 | 0.5603 | 0.1836 | 0.2028 |
| No log | 54.0 | 378 | 1.7675 | 0.555 | 0.5766 | 2.4043 | 0.555 | 0.5563 | 0.1653 | 0.2047 |
| No log | 55.0 | 385 | 1.7688 | 0.55 | 0.5843 | 2.3641 | 0.55 | 0.5505 | 0.1729 | 0.2092 |
| No log | 56.0 | 392 | 1.7796 | 0.55 | 0.5861 | 2.3404 | 0.55 | 0.5458 | 0.1808 | 0.2114 |
| No log | 57.0 | 399 | 1.7861 | 0.54 | 0.5885 | 2.3460 | 0.54 | 0.5323 | 0.1902 | 0.2073 |
| No log | 58.0 | 406 | 1.7746 | 0.56 | 0.5818 | 2.3715 | 0.56 | 0.5557 | 0.1643 | 0.2034 |
| No log | 59.0 | 413 | 1.7828 | 0.5575 | 0.5868 | 2.3086 | 0.5575 | 0.5526 | 0.1956 | 0.2088 |
| No log | 60.0 | 420 | 1.7735 | 0.565 | 0.5825 | 2.3405 | 0.565 | 0.5619 | 0.1696 | 0.2058 |
| No log | 61.0 | 427 | 1.7651 | 0.5675 | 0.5760 | 2.4771 | 0.5675 | 0.5636 | 0.1847 | 0.2027 |
| No log | 62.0 | 434 | 1.7751 | 0.5575 | 0.5834 | 2.3727 | 0.5575 | 0.5524 | 0.1638 | 0.2052 |
| No log | 63.0 | 441 | 1.7900 | 0.56 | 0.5834 | 2.3635 | 0.56 | 0.5502 | 0.1789 | 0.2061 |
| No log | 64.0 | 448 | 1.7729 | 0.56 | 0.5821 | 2.3797 | 0.56 | 0.5554 | 0.1676 | 0.2046 |
| No log | 65.0 | 455 | 1.7743 | 0.5625 | 0.5826 | 2.4174 | 0.5625 | 0.5581 | 0.1538 | 0.2052 |
| No log | 66.0 | 462 | 1.7749 | 0.5625 | 0.5801 | 2.3799 | 0.5625 | 0.5592 | 0.1709 | 0.2036 |
| No log | 67.0 | 469 | 1.7795 | 0.5625 | 0.5814 | 2.3169 | 0.5625 | 0.5533 | 0.1883 | 0.2037 |
| No log | 68.0 | 476 | 1.7773 | 0.5675 | 0.5794 | 2.3588 | 0.5675 | 0.5622 | 0.1779 | 0.2013 |
| No log | 69.0 | 483 | 1.7762 | 0.56 | 0.5793 | 2.3514 | 0.56 | 0.5566 | 0.1707 | 0.2039 |
| No log | 70.0 | 490 | 1.7762 | 0.5625 | 0.5787 | 2.3620 | 0.5625 | 0.5529 | 0.1607 | 0.2017 |
| No log | 71.0 | 497 | 1.7740 | 0.5675 | 0.5798 | 2.3235 | 0.5675 | 0.5612 | 0.1637 | 0.2046 |
| 0.4215 | 72.0 | 504 | 1.7739 | 0.56 | 0.5790 | 2.3235 | 0.56 | 0.5542 | 0.1583 | 0.2023 |
| 0.4215 | 73.0 | 511 | 1.7783 | 0.56 | 0.5806 | 2.4187 | 0.56 | 0.5545 | 0.1674 | 0.2040 |
| 0.4215 | 74.0 | 518 | 1.7785 | 0.56 | 0.5805 | 2.3302 | 0.56 | 0.5544 | 0.1748 | 0.2033 |
| 0.4215 | 75.0 | 525 | 1.7777 | 0.5625 | 0.5795 | 2.3321 | 0.5625 | 0.5548 | 0.1754 | 0.2029 |
| 0.4215 | 76.0 | 532 | 1.7785 | 0.565 | 0.5799 | 2.3249 | 0.565 | 0.5586 | 0.1696 | 0.2023 |
| 0.4215 | 77.0 | 539 | 1.7763 | 0.565 | 0.5790 | 2.3561 | 0.565 | 0.5573 | 0.1574 | 0.2022 |
| 0.4215 | 78.0 | 546 | 1.7767 | 0.565 | 0.5790 | 2.3296 | 0.565 | 0.5572 | 0.1633 | 0.2024 |
| 0.4215 | 79.0 | 553 | 1.7763 | 0.565 | 0.5790 | 2.3555 | 0.565 | 0.5580 | 0.1687 | 0.2016 |
| 0.4215 | 80.0 | 560 | 1.7783 | 0.565 | 0.5800 | 2.3254 | 0.565 | 0.5576 | 0.1752 | 0.2032 |
| 0.4215 | 81.0 | 567 | 1.7773 | 0.5675 | 0.5796 | 2.3530 | 0.5675 | 0.5605 | 0.1519 | 0.2023 |
| 0.4215 | 82.0 | 574 | 1.7774 | 0.5625 | 0.5797 | 2.3253 | 0.5625 | 0.5549 | 0.1911 | 0.2028 |
| 0.4215 | 83.0 | 581 | 1.7784 | 0.5625 | 0.5794 | 2.3554 | 0.5625 | 0.5544 | 0.1659 | 0.2030 |
| 0.4215 | 84.0 | 588 | 1.7769 | 0.565 | 0.5793 | 2.3527 | 0.565 | 0.5585 | 0.1588 | 0.2024 |
| 0.4215 | 85.0 | 595 | 1.7787 | 0.565 | 0.5799 | 2.3549 | 0.565 | 0.5576 | 0.1687 | 0.2032 |
| 0.4215 | 86.0 | 602 | 1.7778 | 0.565 | 0.5795 | 2.3548 | 0.565 | 0.5574 | 0.1577 | 0.2029 |
| 0.4215 | 87.0 | 609 | 1.7787 | 0.5625 | 0.5798 | 2.3545 | 0.5625 | 0.5549 | 0.1643 | 0.2032 |
| 0.4215 | 88.0 | 616 | 1.7786 | 0.565 | 0.5796 | 2.3554 | 0.565 | 0.5574 | 0.1667 | 0.2031 |
| 0.4215 | 89.0 | 623 | 1.7785 | 0.565 | 0.5799 | 2.3546 | 0.565 | 0.5574 | 0.1691 | 0.2032 |
| 0.4215 | 90.0 | 630 | 1.7784 | 0.565 | 0.5797 | 2.3548 | 0.565 | 0.5574 | 0.1656 | 0.2031 |
| 0.4215 | 91.0 | 637 | 1.7784 | 0.565 | 0.5797 | 2.3550 | 0.565 | 0.5569 | 0.1753 | 0.2032 |
| 0.4215 | 92.0 | 644 | 1.7786 | 0.565 | 0.5797 | 2.3545 | 0.565 | 0.5574 | 0.1744 | 0.2030 |
| 0.4215 | 93.0 | 651 | 1.7785 | 0.565 | 0.5797 | 2.3545 | 0.565 | 0.5574 | 0.1709 | 0.2031 |
| 0.4215 | 94.0 | 658 | 1.7787 | 0.565 | 0.5797 | 2.3543 | 0.565 | 0.5574 | 0.1704 | 0.2032 |
| 0.4215 | 95.0 | 665 | 1.7787 | 0.565 | 0.5798 | 2.3545 | 0.565 | 0.5574 | 0.1713 | 0.2032 |
| 0.4215 | 96.0 | 672 | 1.7788 | 0.565 | 0.5798 | 2.3549 | 0.565 | 0.5569 | 0.1777 | 0.2031 |
| 0.4215 | 97.0 | 679 | 1.7789 | 0.565 | 0.5798 | 2.3550 | 0.565 | 0.5569 | 0.1677 | 0.2032 |
| 0.4215 | 98.0 | 686 | 1.7789 | 0.565 | 0.5798 | 2.3549 | 0.565 | 0.5569 | 0.1648 | 0.2032 |
| 0.4215 | 99.0 | 693 | 1.7789 | 0.565 | 0.5798 | 2.3548 | 0.565 | 0.5569 | 0.1728 | 0.2032 |
| 0.4215 | 100.0 | 700 | 1.7789 | 0.565 | 0.5798 | 2.3548 | 0.565 | 0.5569 | 0.1677 | 0.2032 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
banden/dqn-SpaceInvadersNoFrameskip-v1
|
banden
| 2023-07-12T15:31:37Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T15:30:56Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 517.00 +/- 204.56
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga banden -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga banden -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga banden
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
yashwantk/finetuning-sentiment-model-3000-samples
|
yashwantk
| 2023-07-12T15:26:18Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-30T05:27:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8646864686468646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3200
- Accuracy: 0.8633
- F1: 0.8647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Phips/dqn-BeamRiderNoFrameskip-v4
|
Phips
| 2023-07-12T15:26:02Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T15:23:51Z |
---
library_name: stable-baselines3
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
metrics:
- type: mean_reward
value: 4036.80 +/- 1681.31
name: mean_reward
verified: false
---
# **DQN** Agent playing **BeamRiderNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BeamRiderNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga Phips -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga Phips -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ -orga Phips
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ChaimaaBouafoud/arabicSent-ChamaBert
|
ChaimaaBouafoud
| 2023-07-12T15:24:41Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"fr",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T13:38:21Z |
---
language:
- ar
- fr
- en
metrics:
- accuracy
- f1
- roc_auc
pipeline_tag: text-classification
---
# arabicSent-ChamaBert
This model is a fine-tuned version of the <u>aubmindlab/bert-base-arabertv02-twitter</u> model on a web-scraped dataset of Arabic comments. It has been trained specifically for sentiment classification tasks in Moroccan Arabic, covering both Standard Arabic and dialectal variations.
The model's performance on the evaluation set is as follows:
- Loss: 0.1626
- Accuracy: 0.9073
- F1: 0.9129
- Roc Auc: 0.9337
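# How to use
A minimal sketch with the `transformers` pipeline, loading the model by its Hub id; the Arabic example sentence is an illustrative placeholder:
```python
from transformers import pipeline

# Sentiment classifier for Moroccan Arabic comments
nlp = pipeline("text-classification", model="ChaimaaBouafoud/arabicSent-ChamaBert")
print(nlp("هذا اللقاح فعال وآمن"))  # placeholder input: "this vaccine is effective and safe"
```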
# Dataset
The dataset used for training and evaluation consists of a collection of Moroccan Arabic comments specifically focused on sentiments towards the effects of vaccines. It contains a total of 81,971 comments, with sentiment labels of "Negative" and "Positive". The dataset provides ground truth annotations that enable the model to learn the association between the language used in comments and the corresponding sentiment expressed.
The data collection process adhered to ethical considerations, respecting user privacy and complying with applicable data protection regulations. Measures were taken to ensure the anonymization of user identities and the removal of any personally identifiable information.
# Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jrosenzw/autotrain-diabetes-detection-74053139456
|
jrosenzw
| 2023-07-12T15:23:51Z | 4 | 0 |
transformers
|
[
"transformers",
"joblib",
"xgboost",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:jrosenzw/autotrain-data-diabetes-detection",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
tabular-classification
| 2023-07-12T15:12:14Z |
---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- jrosenzw/autotrain-data-diabetes-detection
co2_eq_emissions:
emissions: 0.8803900471704509
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 74053139456
- CO2 Emissions (in grams): 0.8804
## Validation Metrics
- Loss: 0.473
- Accuracy: 0.760
- Precision: 0.667
- Recall: 0.630
- AUC: 0.835
- F1: 0.648
## Usage
```python
import json
import joblib
import pandas as pd

model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']

data = pd.read_csv("data.csv")  # load the rows you want to score
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]

predictions = model.predict(data)  # or model.predict_proba(data)
```
|
ynklab/XCodeBERT
|
ynklab
| 2023-07-12T15:17:51Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2306.15604",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-29T14:01:20Z |
---
license: mit
---
[Paper on arXiv](https://arxiv.org/abs/2306.15604)
### Code
Java/PHP/Python/Go
### Query
En/Fr/Ja/Zh
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('ynklab/XCodeBERT') # Same as `xlm-roberta-base`
model = AutoModel.from_pretrained('ynklab/XCodeBERT')
```
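A sketch of how a bi-encoder like this is typically used for code search — embed the query and the code, then rank candidates by cosine similarity. The first-token pooling below is an assumption, not documented behavior:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('ynklab/XCodeBERT')
model = AutoModel.from_pretrained('ynklab/XCodeBERT')

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors='pt', truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[:, 0]  # first-token pooling (assumption)

query = embed("read a file line by line")  # natural-language query (En)
code = embed("def read_lines(path):\n    return open(path).readlines()")  # Python candidate
print(torch.nn.functional.cosine_similarity(query, code).item())
```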
|
alujjdnd/10-Social-Dimensions
|
alujjdnd
| 2023-07-12T15:11:43Z | 0 | 0 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2023-06-12T11:19:49Z |
---
license: cc-by-4.0
---
# 10 Social Dimensions
A mirror of https://github.com/minjechoi/10dimensions/tree/master/weights - notably the Dropbox links
## Overview
Contains pretrained weights for LSTM and BERT classifiers, for the 10 social dimensions.
### From Paper
https://dl.acm.org/doi/10.1145/3366423.3380224
### ACM Reference Format
Minje Choi, Luca Maria Aiello, Krisztián Zsolt Varga, and Daniele Quercia.
2020. Ten Social Dimensions of Conversations and Relationships. In Proceedings of The Web Conference 2020 (WWW ’20), April 20–24, 2020, Taipei, Taiwan.
ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3366423.3380224
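To fetch a weight file from this mirror programmatically — a sketch, where the filename is a hypothetical placeholder (check the repository's file list for the actual names):
```python
from huggingface_hub import hf_hub_download

# "bert_weights.pt" is a hypothetical filename used for illustration
path = hf_hub_download(repo_id="alujjdnd/10-Social-Dimensions", filename="bert_weights.pt")
print(path)  # local cache path of the downloaded file
```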
|
vineetsharma/q-Taxi-v3_million_steps
|
vineetsharma
| 2023-07-12T15:08:50Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T15:05:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3_million_steps
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or: import gymnasium as gym

# `load_from_hub` is the pickle-based download helper from the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="vineetsharma/q-Taxi-v3_million_steps", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bofenghuang/vigogne-13b-chat
|
bofenghuang
| 2023-07-12T14:59:28Z | 1,466 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"LLM",
"fr",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-12T13:59:04Z |
---
license: openrail
language:
- fr
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- LLM
inference: false
---
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-13b-chat/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-13B-Chat: A French Chat LLaMA Model
Vigogne-13B-Chat is a LLaMA-13B model fine-tuned to conduct multi-turn dialogues in French between a human user and an AI assistant.
For more information, please visit the GitHub repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vigogne.preprocess import generate_inference_chat_prompt
model_name_or_path = "bofenghuang/vigogne-13b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
user_query = "Expliquez la différence entre DoS et phishing."
prompt = generate_inference_chat_prompt([[user_query, ""]], tokenizer=tokenizer)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=0.1,
do_sample=True,
repetition_penalty=1.0,
max_new_tokens=512,
),
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
You can also run inference on this model using the following Google Colab notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that the model may generate harmful or biased content, incorrect information, or generally unhelpful answers.
|
bardsai/finance-sentiment-zh-base
|
bardsai
| 2023-07-12T14:54:22Z | 200 | 18 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"zh",
"dataset:datasets/financial_phrasebank",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T14:41:06Z |
---
language: zh
tags:
- text-classification
- financial-sentiment-analysis
- sentiment-analysis
datasets:
- datasets/financial_phrasebank
metrics:
- f1
- accuracy
- precision
- recall
widget:
- text: "净销售额增长30%,达到3600万欧元。"
example_title: "Example 1"
- text: "黑色星期五拉开帷幕。店内促销活动列表。"
example_title: "Example 2"
- text: "CDPROJEKT股价在WSE上市公司中跌幅最大。"
example_title: "Example 3"
---
# Finance Sentiment ZH (base)
Finance Sentiment ZH (base) is a model based on [bert-base-chinese](https://huggingface.co/bert-base-chinese) for analyzing the sentiment of Chinese financial news. It was trained on the translated version of [Financial PhraseBank](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts) by Malo et al. (2014) for 10 epochs on a single RTX 3090 GPU.
The model outputs one of three labels: positive, negative and neutral.
## How to use
You can use this model directly with a pipeline for sentiment-analysis:
```python
from transformers import pipeline
nlp = pipeline("sentiment-analysis", model="bardsai/finance-sentiment-zh-base")
nlp("净销售额增长30%,达到3600万欧元。")
```
```bash
[{'label': 'negative', 'score': 0.9999314546585083}]
```
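To get scores for all three labels instead of only the top one — a sketch, assuming a recent `transformers` version in which the text-classification pipeline accepts `top_k`:
```python
nlp("净销售额增长30%,达到3600万欧元。", top_k=None)  # returns scores for positive/negative/neutral
```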
## Performance
| Metric | Value |
| --- | ----------- |
| f1 macro | 0.966 |
| precision macro | 0.963 |
| recall macro | 0.969 |
| accuracy | 0.973 |
| samples per second | 135.2 |
(The performance was evaluated on an RTX 3090 GPU)
## Changelog
- 2023-07-12: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: [bards.ai](https://bards.ai/)
Let us know if you use our model :). Also, if you need any help, feel free to contact us at info@bards.ai
|
DipanAI/Low_falcan_7b
|
DipanAI
| 2023-07-12T14:49:33Z | 0 | 0 | null |
[
"pytorch",
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-12T14:00:09Z |
---
tags:
- generated_from_trainer
model-index:
- name: Low_falcan_7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Low_falcan_7b
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 800
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bardsai/finance-sentiment-zh-fast
|
bardsai
| 2023-07-12T14:48:43Z | 533 | 7 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"zh",
"dataset:datasets/financial_phrasebank",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T14:21:24Z |
---
language: zh
tags:
- text-classification
- financial-sentiment-analysis
- sentiment-analysis
datasets:
- datasets/financial_phrasebank
metrics:
- f1
- accuracy
- precision
- recall
widget:
- text: "净销售额增长30%,达到3600万欧元。"
example_title: "Example 1"
- text: "黑色星期五拉开帷幕。店内促销活动列表。"
example_title: "Example 2"
- text: "CDPROJEKT股价在WSE上市公司中跌幅最大。"
example_title: "Example 3"
---
# Finance Sentiment ZH (fast)
Finance Sentiment ZH (fast) is a [distiluse](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1)-based model for analyzing the sentiment of Chinese financial news. It was trained on the translated version of [Financial PhraseBank](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts) by Malo et al. (2014) for 10 epochs on a single RTX 3090 GPU.
The model outputs one of three labels: positive, negative and neutral.
## How to use
You can use this model directly with a pipeline for sentiment-analysis:
```python
from transformers import pipeline
nlp = pipeline("sentiment-analysis", model="bardsai/finance-sentiment-zh-fast")
nlp("净销售额增长30%,达到3600万欧元。")
```
```bash
[{'label': 'positive', 'score': 0.9996095299720764}]
```
## Performance
| Metric | Value |
| --- | ----------- |
| f1 macro | 0.953 |
| precision macro | 0.959 |
| recall macro | 0.949 |
| accuracy | 0.961 |
| samples per second | 264.6 |
(The performance was evaluated on an RTX 3090 GPU)
## Changelog
- 2023-07-12: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: [bards.ai](https://bards.ai/)
Let us know if you use our model :). Also, if you need any help, feel free to contact us at info@bards.ai
|
vineetsharma/q-Taxi-v3
|
vineetsharma
| 2023-07-12T14:35:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T14:35:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or: import gymnasium as gym

# `load_from_hub` is the pickle-based download helper from the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="vineetsharma/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cenkersisman/chatbotgpt-turkish-latin
|
cenkersisman
| 2023-07-12T14:22:53Z | 163 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"tr",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-02T04:19:39Z |
---
widget:
- text: '[Q] cengiz han kac yilinda dogmustur'
  example_title: Example 1
- text: '[Q] etimesgut havaalani nerede bulunmaktadir'
  example_title: Example 2
- text: '[Q] spartakus nereli bir gladyatordur'
  example_title: Example 3
- text: '[Q] ajda pekkan hangi sarkisiyla iyi cikis yakalamistir'
  example_title: Example 4
- text: '[Q] kobalt camlari kim tarafindan kesfedilmistir'
  example_title: Example 5
- text: '[Q] john turk ne zaman dogmustur'
  example_title: Example 6
- text: '[Q] tayvan ne zaman kurulmustur'
  example_title: Example 7
- text: '[Q] dunyanin her bolgesinden kutuphaneler hangi imkani verir'
  example_title: Example 8
- text: '[Q] gozun en distaki birinci tabakaya ne denir'
  example_title: Example 9
language:
- tr
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model was prepared for testing purposes and is meant as a proof of concept. It was trained with GPT on 40,000 question–answer pairs generated from Wikipedia. Better results can be obtained with larger datasets.
## Important Notes
* For inference, questions must be prefixed with [Q].
* Questions must consist of Latin characters only and must not contain letters such as ç ğ ı ö ş ü.
* Questions must be all lowercase. Using uppercase letters or symbols will produce different and unwanted answers.
## Model Details
## The topics with the most question–answer pairs, by title, are listed below:
* Cengiz Han: 665 pairs
* Futbol rekabetleri listesi: 409 pairs
* Lüleburgaz Muharebesi: 336 pairs
* I. Baybars: 263 pairs
* Sovyetler Birliği'nin askerî tarihi: 258 pairs
* Zümrüdüanka Yoldaşlığı: 245 pairs
* Gilles Deleuze: 208 pairs
* Ermenistan Sovyet Sosyalist Cumhuriyeti: 195 pairs
* Nâzım Hikmet: 173 pairs
* Hermann Göring: 169 pairs
* V. Leon: 163 pairs
* Gökhan Türkmen: 156 pairs
* Dumbledore'un Ordusu: 153 pairs
* Ajda Pekkan: 152 pairs
* Kırkpınar Yağlı Güreşleri: 152 pairs
* Mehdi Savaşı: 150 pairs
* İmamiye (Şiilik öğretisi): 149 pairs
* Rumyantsev Harekâtı: 145 pairs
* II. Dünya Savaşı tankları: 144 pairs
* Emîn: 142 pairs
* Boshin Savaşı: 137 pairs
* Wolfgang Amadeus Mozart: 136 pairs
* Faşizm: 134 pairs
* Kâzım Koyuncu: 133 pairs
* Suvorov Harekâtı: 128 pairs
* Mao Zedong: 127 pairs
* Mehdî (Abbâsî halifesi): 127 pairs
* Madagaskar: 124 pairs
* Oscar Niemeyer: 123 pairs
* Adolf Eichmann: 123 pairs
* Joachim von Ribbentrop: 121 pairs
* Crystal Palace FC: 119 pairs
* IV. Mihail: 111 pairs
* VI. Leon: 107 pairs
* Han Hanedanı: 105 pairs
* Portekiz coğrafi keşifleri: 104 pairs
* II. Nikiforos: 102 pairs
* III. Mihail: 98 pairs
* Fidel Castro: 96 pairs
* Tsushima Muharebesi: 94 pairs
* I. Basileios: 93 pairs
* VI. Konstantinos: 92 pairs
* Bijan Muharebesi: 91 pairs
* Otto Skorzeny: 90 pairs
* Antimon: 89 pairs
* Dijitalleştirme: 88 pairs
* Theofilos: 79 pairs
* Sovyet Hava Kuvvetleri: 78 pairs
* IV. Henry, Bölüm 1: 77 pairs
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Cenker Sisman
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model:** redrussianarmy/gpt2-turkish-cased

### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Limitations and Biases
chatbotgpt-turkish was trained as an autoregressive language model. This means its core function is to take a string of text and predict the next token. While language models are widely used for many tasks beyond this, there are many unknowns in this line of work.
chatbotgpt-turkish was trained on the Pile, a dataset known to contain profanity, lewd content, and otherwise offensive text. Depending on your use case, chatbotgpt-turkish may produce socially unacceptable text. See sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how chatbotgpt-turkish will respond to a particular input, and offensive content may arise without warning. We recommend having a human curate or filter the outputs before publishing them, both to censor undesirable content and to improve the quality of the results.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
```python
"""Inference"""
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel, GPT2TokenizerFast, GPT2Tokenizer
def load_model(model_path):
model = GPT2LMHeadModel.from_pretrained(model_path)
return model
def load_tokenizer(tokenizer_path):
tokenizer = GPT2Tokenizer.from_pretrained(tokenizer_path)
return tokenizer
def generate_text(model_path, sequence, max_length):
model = load_model(model_path)
tokenizer = load_tokenizer(model_path)
ids = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(
ids,
do_sample=True,
max_length=max_length,
pad_token_id=model.config.eos_token_id,
top_k=1,
top_p=0.99,
)
converted = tokenizer.convert_ids_to_tokens(outputs[0])
valid_tokens = [token if token is not None else '.' for token in converted]
generated_text = tokenizer.convert_tokens_to_string(valid_tokens)
print(generated_text)
model2_path = "cenkersisman/chatbotgpt-turkish"
sequence2 = "[Q] cengiz han kimdir"
max_len = 120
generate_text(model2_path, sequence2, max_len)
```
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SHENMU007/neunit_BASE_V12.2
|
SHENMU007
| 2023-07-12T14:17:28Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-12T11:35:11Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9
|
jordyvl
| 2023-07-12T14:10:56Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T13:24:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5908
- Accuracy: 0.54
- Brier Loss: 0.6121
- Nll: 2.4999
- F1 Micro: 0.54
- F1 Macro: 0.5334
- Ece: 0.1895
- Aurc: 0.2228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 3.9334 | 0.0425 | 1.0719 | 7.3494 | 0.0425 | 0.0341 | 0.2781 | 0.9557 |
| No log | 2.0 | 14 | 3.0206 | 0.09 | 0.9526 | 5.9271 | 0.09 | 0.0682 | 0.1659 | 0.8896 |
| No log | 3.0 | 21 | 2.6955 | 0.2275 | 0.8867 | 5.4041 | 0.2275 | 0.1804 | 0.1637 | 0.6351 |
| No log | 4.0 | 28 | 2.3029 | 0.29 | 0.8005 | 3.5699 | 0.29 | 0.2727 | 0.1512 | 0.4877 |
| No log | 5.0 | 35 | 2.0293 | 0.37 | 0.7219 | 2.9856 | 0.37 | 0.3519 | 0.1495 | 0.3632 |
| No log | 6.0 | 42 | 1.9194 | 0.44 | 0.6926 | 2.9746 | 0.44 | 0.4240 | 0.1778 | 0.3091 |
| No log | 7.0 | 49 | 1.9640 | 0.4675 | 0.7000 | 3.0857 | 0.4675 | 0.4390 | 0.2009 | 0.3004 |
| No log | 8.0 | 56 | 1.9049 | 0.4625 | 0.7003 | 2.8472 | 0.4625 | 0.4602 | 0.2053 | 0.3051 |
| No log | 9.0 | 63 | 2.0561 | 0.4675 | 0.7168 | 2.8229 | 0.4675 | 0.4501 | 0.2288 | 0.2994 |
| No log | 10.0 | 70 | 2.1002 | 0.45 | 0.7433 | 2.7915 | 0.45 | 0.4234 | 0.2691 | 0.3022 |
| No log | 11.0 | 77 | 2.2528 | 0.4525 | 0.7686 | 3.0103 | 0.4525 | 0.4320 | 0.2921 | 0.3183 |
| No log | 12.0 | 84 | 2.1190 | 0.475 | 0.7427 | 2.6715 | 0.4750 | 0.4660 | 0.2832 | 0.3077 |
| No log | 13.0 | 91 | 2.3102 | 0.445 | 0.7825 | 2.9698 | 0.445 | 0.4252 | 0.3093 | 0.3100 |
| No log | 14.0 | 98 | 2.3501 | 0.42 | 0.8145 | 2.7585 | 0.4200 | 0.4248 | 0.3206 | 0.3662 |
| No log | 15.0 | 105 | 2.2402 | 0.495 | 0.7423 | 3.0313 | 0.495 | 0.4702 | 0.2692 | 0.2818 |
| No log | 16.0 | 112 | 2.2266 | 0.49 | 0.7349 | 2.8824 | 0.49 | 0.4714 | 0.2763 | 0.2895 |
| No log | 17.0 | 119 | 2.2989 | 0.4725 | 0.7509 | 3.0951 | 0.4725 | 0.4499 | 0.2863 | 0.2855 |
| No log | 18.0 | 126 | 2.1355 | 0.47 | 0.7322 | 2.9349 | 0.47 | 0.4616 | 0.2725 | 0.2845 |
| No log | 19.0 | 133 | 2.0965 | 0.505 | 0.7067 | 2.8254 | 0.505 | 0.4956 | 0.2523 | 0.2757 |
| No log | 20.0 | 140 | 2.1961 | 0.485 | 0.7358 | 3.1604 | 0.485 | 0.4567 | 0.2825 | 0.2841 |
| No log | 21.0 | 147 | 2.1287 | 0.5025 | 0.7247 | 2.5998 | 0.5025 | 0.5074 | 0.2703 | 0.3064 |
| No log | 22.0 | 154 | 2.2280 | 0.4675 | 0.7760 | 2.8571 | 0.4675 | 0.4636 | 0.2911 | 0.3232 |
| No log | 23.0 | 161 | 1.9649 | 0.5025 | 0.6828 | 2.8224 | 0.5025 | 0.4970 | 0.2410 | 0.2633 |
| No log | 24.0 | 168 | 1.9361 | 0.5125 | 0.6780 | 2.7309 | 0.5125 | 0.5035 | 0.2326 | 0.2553 |
| No log | 25.0 | 175 | 2.0161 | 0.5 | 0.6980 | 2.9958 | 0.5 | 0.4912 | 0.2580 | 0.2556 |
| No log | 26.0 | 182 | 1.8763 | 0.5025 | 0.6624 | 2.8291 | 0.5025 | 0.4952 | 0.2305 | 0.2431 |
| No log | 27.0 | 189 | 1.9057 | 0.525 | 0.6793 | 2.5627 | 0.525 | 0.5174 | 0.2161 | 0.2634 |
| No log | 28.0 | 196 | 1.8529 | 0.52 | 0.6683 | 2.7191 | 0.52 | 0.5132 | 0.2375 | 0.2535 |
| No log | 29.0 | 203 | 1.9603 | 0.5125 | 0.6831 | 2.7822 | 0.5125 | 0.5076 | 0.2395 | 0.2657 |
| No log | 30.0 | 210 | 1.8247 | 0.52 | 0.6533 | 2.8547 | 0.52 | 0.5058 | 0.2080 | 0.2426 |
| No log | 31.0 | 217 | 1.8275 | 0.5125 | 0.6547 | 2.6194 | 0.5125 | 0.5032 | 0.2208 | 0.2488 |
| No log | 32.0 | 224 | 1.8003 | 0.52 | 0.6455 | 2.6138 | 0.52 | 0.5124 | 0.2302 | 0.2370 |
| No log | 33.0 | 231 | 1.8714 | 0.505 | 0.6694 | 2.6643 | 0.505 | 0.4970 | 0.2195 | 0.2553 |
| No log | 34.0 | 238 | 1.8018 | 0.5075 | 0.6659 | 2.5423 | 0.5075 | 0.4978 | 0.2241 | 0.2515 |
| No log | 35.0 | 245 | 1.7844 | 0.5225 | 0.6503 | 2.6100 | 0.5225 | 0.5181 | 0.2181 | 0.2435 |
| No log | 36.0 | 252 | 1.8321 | 0.5225 | 0.6674 | 2.7821 | 0.5225 | 0.5020 | 0.2285 | 0.2462 |
| No log | 37.0 | 259 | 1.7859 | 0.4975 | 0.6725 | 2.6066 | 0.4975 | 0.4974 | 0.2351 | 0.2627 |
| No log | 38.0 | 266 | 1.7790 | 0.5125 | 0.6595 | 2.6983 | 0.5125 | 0.5023 | 0.2172 | 0.2497 |
| No log | 39.0 | 273 | 1.6989 | 0.5225 | 0.6401 | 2.6743 | 0.5225 | 0.5151 | 0.2100 | 0.2407 |
| No log | 40.0 | 280 | 1.7568 | 0.52 | 0.6488 | 2.5294 | 0.52 | 0.5132 | 0.2208 | 0.2442 |
| No log | 41.0 | 287 | 1.6896 | 0.5275 | 0.6362 | 2.5489 | 0.5275 | 0.5141 | 0.2045 | 0.2323 |
| No log | 42.0 | 294 | 1.7193 | 0.5275 | 0.6517 | 2.5525 | 0.5275 | 0.5232 | 0.1986 | 0.2467 |
| No log | 43.0 | 301 | 1.7199 | 0.535 | 0.6403 | 2.5974 | 0.535 | 0.5279 | 0.2104 | 0.2432 |
| No log | 44.0 | 308 | 1.6594 | 0.5375 | 0.6330 | 2.4854 | 0.5375 | 0.5316 | 0.2015 | 0.2321 |
| No log | 45.0 | 315 | 1.6543 | 0.5275 | 0.6239 | 2.4955 | 0.5275 | 0.5223 | 0.2144 | 0.2308 |
| No log | 46.0 | 322 | 1.6490 | 0.5425 | 0.6262 | 2.5215 | 0.5425 | 0.5358 | 0.2104 | 0.2273 |
| No log | 47.0 | 329 | 1.6570 | 0.54 | 0.6233 | 2.5454 | 0.54 | 0.5380 | 0.2047 | 0.2301 |
| No log | 48.0 | 336 | 1.6359 | 0.5375 | 0.6218 | 2.5546 | 0.5375 | 0.5320 | 0.2171 | 0.2257 |
| No log | 49.0 | 343 | 1.6320 | 0.55 | 0.6214 | 2.4958 | 0.55 | 0.5452 | 0.2014 | 0.2267 |
| No log | 50.0 | 350 | 1.6230 | 0.53 | 0.6208 | 2.4979 | 0.53 | 0.5243 | 0.2017 | 0.2315 |
| No log | 51.0 | 357 | 1.6374 | 0.535 | 0.6257 | 2.4644 | 0.535 | 0.5293 | 0.2038 | 0.2286 |
| No log | 52.0 | 364 | 1.6190 | 0.5375 | 0.6199 | 2.5279 | 0.5375 | 0.5310 | 0.1855 | 0.2290 |
| No log | 53.0 | 371 | 1.6155 | 0.5475 | 0.6158 | 2.4738 | 0.5475 | 0.5435 | 0.1913 | 0.2239 |
| No log | 54.0 | 378 | 1.6131 | 0.5425 | 0.6184 | 2.4982 | 0.5425 | 0.5377 | 0.1969 | 0.2248 |
| No log | 55.0 | 385 | 1.6035 | 0.545 | 0.6138 | 2.4690 | 0.545 | 0.5406 | 0.2164 | 0.2223 |
| No log | 56.0 | 392 | 1.5990 | 0.54 | 0.6153 | 2.4701 | 0.54 | 0.5356 | 0.2019 | 0.2249 |
| No log | 57.0 | 399 | 1.6024 | 0.5425 | 0.6153 | 2.4626 | 0.5425 | 0.5375 | 0.1826 | 0.2237 |
| No log | 58.0 | 406 | 1.5935 | 0.545 | 0.6141 | 2.4390 | 0.545 | 0.5415 | 0.1933 | 0.2238 |
| No log | 59.0 | 413 | 1.6016 | 0.545 | 0.6137 | 2.4640 | 0.545 | 0.5401 | 0.2021 | 0.2230 |
| No log | 60.0 | 420 | 1.5976 | 0.54 | 0.6146 | 2.4618 | 0.54 | 0.5355 | 0.1912 | 0.2245 |
| No log | 61.0 | 427 | 1.5984 | 0.545 | 0.6133 | 2.4683 | 0.545 | 0.5408 | 0.1971 | 0.2228 |
| No log | 62.0 | 434 | 1.5941 | 0.54 | 0.6131 | 2.4639 | 0.54 | 0.5358 | 0.1898 | 0.2236 |
| No log | 63.0 | 441 | 1.5936 | 0.545 | 0.6123 | 2.4689 | 0.545 | 0.5404 | 0.1953 | 0.2222 |
| No log | 64.0 | 448 | 1.5970 | 0.5425 | 0.6138 | 2.4647 | 0.5425 | 0.5384 | 0.2015 | 0.2238 |
| No log | 65.0 | 455 | 1.5943 | 0.545 | 0.6130 | 2.4963 | 0.545 | 0.5400 | 0.1979 | 0.2229 |
| No log | 66.0 | 462 | 1.5936 | 0.545 | 0.6127 | 2.4977 | 0.545 | 0.5400 | 0.1933 | 0.2229 |
| No log | 67.0 | 469 | 1.5928 | 0.5425 | 0.6127 | 2.4965 | 0.5425 | 0.5381 | 0.1976 | 0.2233 |
| No log | 68.0 | 476 | 1.5946 | 0.5425 | 0.6128 | 2.4768 | 0.5425 | 0.5383 | 0.2149 | 0.2233 |
| No log | 69.0 | 483 | 1.5924 | 0.54 | 0.6126 | 2.4946 | 0.54 | 0.5356 | 0.2094 | 0.2233 |
| No log | 70.0 | 490 | 1.5921 | 0.54 | 0.6120 | 2.4964 | 0.54 | 0.5356 | 0.1801 | 0.2230 |
| No log | 71.0 | 497 | 1.5926 | 0.54 | 0.6126 | 2.4955 | 0.54 | 0.5356 | 0.2039 | 0.2235 |
| 0.3138 | 72.0 | 504 | 1.5916 | 0.5425 | 0.6121 | 2.4964 | 0.5425 | 0.5366 | 0.1898 | 0.2229 |
| 0.3138 | 73.0 | 511 | 1.5917 | 0.54 | 0.6119 | 2.4966 | 0.54 | 0.5356 | 0.2039 | 0.2231 |
| 0.3138 | 74.0 | 518 | 1.5918 | 0.54 | 0.6123 | 2.4964 | 0.54 | 0.5351 | 0.2035 | 0.2229 |
| 0.3138 | 75.0 | 525 | 1.5912 | 0.54 | 0.6118 | 2.4975 | 0.54 | 0.5351 | 0.2059 | 0.2228 |
| 0.3138 | 76.0 | 532 | 1.5918 | 0.54 | 0.6124 | 2.4965 | 0.54 | 0.5351 | 0.1971 | 0.2231 |
| 0.3138 | 77.0 | 539 | 1.5919 | 0.5425 | 0.6120 | 2.4974 | 0.5425 | 0.5358 | 0.2087 | 0.2227 |
| 0.3138 | 78.0 | 546 | 1.5903 | 0.54 | 0.6118 | 2.4978 | 0.54 | 0.5341 | 0.2169 | 0.2228 |
| 0.3138 | 79.0 | 553 | 1.5922 | 0.54 | 0.6124 | 2.4976 | 0.54 | 0.5351 | 0.2109 | 0.2234 |
| 0.3138 | 80.0 | 560 | 1.5914 | 0.54 | 0.6122 | 2.4983 | 0.54 | 0.5345 | 0.2041 | 0.2228 |
| 0.3138 | 81.0 | 567 | 1.5907 | 0.54 | 0.6119 | 2.4981 | 0.54 | 0.5345 | 0.2128 | 0.2226 |
| 0.3138 | 82.0 | 574 | 1.5921 | 0.5425 | 0.6124 | 2.4986 | 0.5425 | 0.5362 | 0.2084 | 0.2227 |
| 0.3138 | 83.0 | 581 | 1.5918 | 0.5425 | 0.6125 | 2.4987 | 0.5425 | 0.5362 | 0.2038 | 0.2230 |
| 0.3138 | 84.0 | 588 | 1.5902 | 0.54 | 0.6120 | 2.4989 | 0.54 | 0.5345 | 0.2043 | 0.2226 |
| 0.3138 | 85.0 | 595 | 1.5919 | 0.5425 | 0.6124 | 2.4988 | 0.5425 | 0.5360 | 0.1998 | 0.2228 |
| 0.3138 | 86.0 | 602 | 1.5916 | 0.5425 | 0.6124 | 2.4990 | 0.5425 | 0.5362 | 0.2079 | 0.2227 |
| 0.3138 | 87.0 | 609 | 1.5906 | 0.54 | 0.6120 | 2.4990 | 0.54 | 0.5345 | 0.2037 | 0.2227 |
| 0.3138 | 88.0 | 616 | 1.5908 | 0.54 | 0.6120 | 2.4989 | 0.54 | 0.5345 | 0.2091 | 0.2230 |
| 0.3138 | 89.0 | 623 | 1.5909 | 0.54 | 0.6120 | 2.4995 | 0.54 | 0.5344 | 0.2113 | 0.2228 |
| 0.3138 | 90.0 | 630 | 1.5906 | 0.54 | 0.6119 | 2.4996 | 0.54 | 0.5345 | 0.1969 | 0.2228 |
| 0.3138 | 91.0 | 637 | 1.5911 | 0.5425 | 0.6121 | 2.4999 | 0.5425 | 0.5360 | 0.1954 | 0.2226 |
| 0.3138 | 92.0 | 644 | 1.5909 | 0.54 | 0.6121 | 2.4994 | 0.54 | 0.5344 | 0.1928 | 0.2228 |
| 0.3138 | 93.0 | 651 | 1.5907 | 0.5425 | 0.6121 | 2.4999 | 0.5425 | 0.5360 | 0.2034 | 0.2225 |
| 0.3138 | 94.0 | 658 | 1.5910 | 0.5425 | 0.6122 | 2.4996 | 0.5425 | 0.5360 | 0.1974 | 0.2227 |
| 0.3138 | 95.0 | 665 | 1.5909 | 0.5375 | 0.6121 | 2.4995 | 0.5375 | 0.5319 | 0.1990 | 0.2230 |
| 0.3138 | 96.0 | 672 | 1.5907 | 0.5375 | 0.6120 | 2.4997 | 0.5375 | 0.5318 | 0.1980 | 0.2229 |
| 0.3138 | 97.0 | 679 | 1.5907 | 0.54 | 0.6120 | 2.4998 | 0.54 | 0.5344 | 0.1900 | 0.2228 |
| 0.3138 | 98.0 | 686 | 1.5907 | 0.5425 | 0.6120 | 2.4999 | 0.5425 | 0.5362 | 0.1899 | 0.2226 |
| 0.3138 | 99.0 | 693 | 1.5908 | 0.54 | 0.6121 | 2.4999 | 0.54 | 0.5334 | 0.1936 | 0.2228 |
| 0.3138 | 100.0 | 700 | 1.5908 | 0.54 | 0.6121 | 2.4999 | 0.54 | 0.5334 | 0.1895 | 0.2228 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
YojitShinde/Reinforce-CartPole-v1 | YojitShinde | 2023-07-12T14:06:04Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-07-12T14:05:54Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
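A minimal sketch of the REINFORCE policy-gradient update this agent family uses; the implementation details below are assumptions, and the linked course unit has the reference version:
```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted return G_t for each step of one episode
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    # Normalizing returns is a common variance-reduction trick
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Minimize the negative of sum(log pi(a_t|s_t) * G_t)
    return -(torch.stack(log_probs) * returns).sum()
```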
|
fireday/ppo-LunarLander-v2 | fireday | 2023-07-12T14:05:26Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-12T14:05:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 291.16 +/- 20.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename on the Hub is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the zip filename is assumed to mirror the repo name
checkpoint = load_from_hub("fireday/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
malditoguisante/ppo-LunarLander-v2 | malditoguisante | 2023-07-12T14:04:14Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-12T14:03:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.05 +/- 16.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename on the Hub is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the zip filename is assumed to mirror the repo name
checkpoint = load_from_hub("malditoguisante/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
aalloott/ppo-LunarLander-v2 | aalloott | 2023-07-12T14:01:04Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-12T14:00:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.52 +/- 23.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename on the Hub is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the zip filename is assumed to mirror the repo name
checkpoint = load_from_hub("aalloott/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
alexbrandsen/bert-base-german-cased-archaeo-NER | alexbrandsen | 2023-07-12T13:59:45Z | 121 | 0 | transformers | ["transformers", "pytorch", "safetensors", "bert", "token-classification", "Archaeology", "Named Entity Recognition", "NER", "de", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-07-12T13:22:47Z |
---
license: cc0-1.0
language:
- de
tags:
- Archaeology
- Named Entity Recognition
- NER
metrics:
- f1
---
# bert-base-german-cased-archaeo-NER
A German BERT model for Named Entity Recognition (NER) in the archaeology domain.
This is the bert-base-german-cased-archaeo model fine-tuned for NER, targeting the following entity types (a usage sketch follows the list):
- Time periods (PER)
- Places (LOC)
- Artefacts (ART)
- Contexts (CON)
- Materials (MAT)
- Species (SPE)
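A minimal usage sketch with the `transformers` pipeline; the example sentence is illustrative only:
```python
from transformers import pipeline

# aggregation_strategy="simple" groups subword tokens into full entity spans
ner = pipeline(
    "token-classification",
    model="alexbrandsen/bert-base-german-cased-archaeo-NER",
    aggregation_strategy="simple",
)
print(ner("Im Grab wurden Scherben aus der Bronzezeit gefunden."))
```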
|
MUNDOJU/ppo-LunarLander-v2 | MUNDOJU | 2023-07-12T13:55:46Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-12T13:55:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.13 +/- 13.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename on the Hub is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the zip filename is assumed to mirror the repo name
checkpoint = load_from_hub("MUNDOJU/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Mateo2008/ppo-LunarLander-v2 | Mateo2008 | 2023-07-12T13:54:02Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-12T13:53:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.21 +/- 22.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename on the Hub is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the zip filename is assumed to mirror the repo name
checkpoint = load_from_hub("Mateo2008/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
PisoF/ppo-LunarLander-v2 | PisoF | 2023-07-12T13:53:40Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-12T13:53:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.80 +/- 20.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename on the Hub is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the zip filename is assumed to mirror the repo name
checkpoint = load_from_hub("PisoF/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Samuel1234/ppo-LunarLander-v2 | Samuel1234 | 2023-07-12T13:52:00Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-11T18:25:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.25 +/- 19.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename on the Hub is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the zip filename is assumed to mirror the repo name
checkpoint = load_from_hub("Samuel1234/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Aceituna0813/ppo-LunarLander-v2 | Aceituna0813 | 2023-07-12T13:47:32Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-11T18:22:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.22 +/- 34.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename on the Hub is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the zip filename is assumed to mirror the repo name
checkpoint = load_from_hub("Aceituna0813/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7 | jordyvl | 2023-07-12T13:24:05Z | 163 | 0 | transformers | ["transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-07-12T12:43:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set (the Brier metric is sketched after this list):
- Loss: 1.5923
- Accuracy: 0.57
- Brier Loss: 0.5750
- Nll: 2.3088
- F1 Micro: 0.57
- F1 Macro: 0.5661
- Ece: 0.1722
- Aurc: 0.2058
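For reference, the Brier loss reported above is typically the mean squared error between the predicted probability vector and the one-hot label; a minimal sketch follows, where the per-class sum convention is an assumption:
```python
import numpy as np

def brier_loss(probs, labels, num_classes):
    # probs: (N, C) predicted class probabilities; labels: (N,) integer labels
    onehot = np.eye(num_classes)[labels]
    # Mean over samples of the squared error summed across classes
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))
```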
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 4.3150 | 0.0425 | 1.0714 | 7.3695 | 0.0425 | 0.0339 | 0.2866 | 0.9549 |
| No log | 2.0 | 14 | 3.3516 | 0.1 | 0.9507 | 5.8558 | 0.1000 | 0.0866 | 0.1647 | 0.8831 |
| No log | 3.0 | 21 | 2.9890 | 0.225 | 0.8838 | 5.3580 | 0.225 | 0.1805 | 0.1576 | 0.6316 |
| No log | 4.0 | 28 | 2.5376 | 0.29 | 0.7946 | 3.5543 | 0.29 | 0.2749 | 0.1832 | 0.4807 |
| No log | 5.0 | 35 | 2.2193 | 0.3875 | 0.7186 | 3.0794 | 0.3875 | 0.3677 | 0.1577 | 0.3531 |
| No log | 6.0 | 42 | 2.0818 | 0.43 | 0.6905 | 2.9853 | 0.4300 | 0.4165 | 0.1668 | 0.3056 |
| No log | 7.0 | 49 | 2.1032 | 0.45 | 0.7019 | 3.1044 | 0.45 | 0.4208 | 0.2121 | 0.2997 |
| No log | 8.0 | 56 | 2.0360 | 0.455 | 0.6994 | 2.8491 | 0.455 | 0.4492 | 0.2131 | 0.3026 |
| No log | 9.0 | 63 | 2.1719 | 0.475 | 0.7092 | 2.7831 | 0.4750 | 0.4549 | 0.2332 | 0.2870 |
| No log | 10.0 | 70 | 2.1820 | 0.4525 | 0.7393 | 2.8185 | 0.4525 | 0.4318 | 0.2813 | 0.2994 |
| No log | 11.0 | 77 | 2.2659 | 0.4475 | 0.7485 | 2.8020 | 0.4475 | 0.4227 | 0.2677 | 0.3046 |
| No log | 12.0 | 84 | 2.1798 | 0.4575 | 0.7325 | 2.6772 | 0.4575 | 0.4555 | 0.2738 | 0.3081 |
| No log | 13.0 | 91 | 2.3031 | 0.465 | 0.7431 | 2.8956 | 0.465 | 0.4390 | 0.2771 | 0.2945 |
| No log | 14.0 | 98 | 2.0867 | 0.49 | 0.7048 | 2.5312 | 0.49 | 0.4823 | 0.2528 | 0.2921 |
| No log | 15.0 | 105 | 2.1671 | 0.49 | 0.7218 | 2.7779 | 0.49 | 0.4749 | 0.2396 | 0.2877 |
| No log | 16.0 | 112 | 2.0091 | 0.485 | 0.6857 | 2.7234 | 0.485 | 0.4608 | 0.2493 | 0.2577 |
| No log | 17.0 | 119 | 1.9494 | 0.515 | 0.6714 | 2.4841 | 0.515 | 0.5072 | 0.2380 | 0.2614 |
| No log | 18.0 | 126 | 1.9132 | 0.505 | 0.6665 | 2.4777 | 0.505 | 0.4945 | 0.2206 | 0.2622 |
| No log | 19.0 | 133 | 2.0539 | 0.505 | 0.6776 | 2.7885 | 0.505 | 0.4986 | 0.2209 | 0.2724 |
| No log | 20.0 | 140 | 1.9533 | 0.5125 | 0.6666 | 2.7287 | 0.5125 | 0.5044 | 0.2385 | 0.2645 |
| No log | 21.0 | 147 | 1.9038 | 0.525 | 0.6365 | 2.8345 | 0.525 | 0.5099 | 0.2021 | 0.2290 |
| No log | 22.0 | 154 | 1.8525 | 0.5075 | 0.6448 | 2.6337 | 0.5075 | 0.4958 | 0.2083 | 0.2494 |
| No log | 23.0 | 161 | 1.7880 | 0.51 | 0.6386 | 2.4856 | 0.51 | 0.5078 | 0.2186 | 0.2478 |
| No log | 24.0 | 168 | 1.8363 | 0.505 | 0.6456 | 2.5075 | 0.505 | 0.4966 | 0.1962 | 0.2399 |
| No log | 25.0 | 175 | 1.9655 | 0.4725 | 0.6864 | 2.6331 | 0.4725 | 0.4608 | 0.2291 | 0.2669 |
| No log | 26.0 | 182 | 1.8660 | 0.5175 | 0.6547 | 2.5404 | 0.5175 | 0.5076 | 0.2252 | 0.2489 |
| No log | 27.0 | 189 | 1.8693 | 0.525 | 0.6446 | 2.6230 | 0.525 | 0.5145 | 0.2047 | 0.2540 |
| No log | 28.0 | 196 | 1.8113 | 0.51 | 0.6407 | 2.4380 | 0.51 | 0.4978 | 0.2030 | 0.2454 |
| No log | 29.0 | 203 | 1.8313 | 0.53 | 0.6445 | 2.4777 | 0.53 | 0.5284 | 0.2071 | 0.2575 |
| No log | 30.0 | 210 | 1.7310 | 0.5425 | 0.6197 | 2.4559 | 0.5425 | 0.5384 | 0.1869 | 0.2367 |
| No log | 31.0 | 217 | 1.8023 | 0.5325 | 0.6351 | 2.5026 | 0.5325 | 0.5216 | 0.2081 | 0.2496 |
| No log | 32.0 | 224 | 1.7652 | 0.5325 | 0.6186 | 2.4794 | 0.5325 | 0.5145 | 0.1715 | 0.2338 |
| No log | 33.0 | 231 | 1.7583 | 0.525 | 0.6363 | 2.4889 | 0.525 | 0.5275 | 0.1984 | 0.2463 |
| No log | 34.0 | 238 | 1.7552 | 0.5475 | 0.6164 | 2.4291 | 0.5475 | 0.5305 | 0.2026 | 0.2377 |
| No log | 35.0 | 245 | 1.6839 | 0.5375 | 0.6085 | 2.5915 | 0.5375 | 0.5253 | 0.1828 | 0.2214 |
| No log | 36.0 | 252 | 1.7480 | 0.5425 | 0.6104 | 2.5809 | 0.5425 | 0.5366 | 0.1716 | 0.2232 |
| No log | 37.0 | 259 | 1.7534 | 0.525 | 0.6225 | 2.3614 | 0.525 | 0.5183 | 0.1930 | 0.2249 |
| No log | 38.0 | 266 | 1.7484 | 0.5425 | 0.6125 | 2.5714 | 0.5425 | 0.5282 | 0.1792 | 0.2272 |
| No log | 39.0 | 273 | 1.7073 | 0.55 | 0.6172 | 2.4200 | 0.55 | 0.5370 | 0.1902 | 0.2314 |
| No log | 40.0 | 280 | 1.7303 | 0.55 | 0.6134 | 2.4829 | 0.55 | 0.5394 | 0.1916 | 0.2324 |
| No log | 41.0 | 287 | 1.6684 | 0.54 | 0.6060 | 2.4632 | 0.54 | 0.5350 | 0.2028 | 0.2251 |
| No log | 42.0 | 294 | 1.7171 | 0.5375 | 0.6055 | 2.4705 | 0.5375 | 0.5213 | 0.1776 | 0.2262 |
| No log | 43.0 | 301 | 1.6493 | 0.545 | 0.5991 | 2.5207 | 0.545 | 0.5412 | 0.1779 | 0.2214 |
| No log | 44.0 | 308 | 1.6548 | 0.5625 | 0.5920 | 2.4810 | 0.5625 | 0.5568 | 0.1892 | 0.2182 |
| No log | 45.0 | 315 | 1.6392 | 0.565 | 0.5943 | 2.3771 | 0.565 | 0.5586 | 0.2165 | 0.2162 |
| No log | 46.0 | 322 | 1.6923 | 0.5225 | 0.6159 | 2.3661 | 0.5225 | 0.5158 | 0.1775 | 0.2400 |
| No log | 47.0 | 329 | 1.6266 | 0.5525 | 0.5827 | 2.4385 | 0.5525 | 0.5468 | 0.1845 | 0.2100 |
| No log | 48.0 | 336 | 1.6804 | 0.55 | 0.6019 | 2.3884 | 0.55 | 0.5481 | 0.1895 | 0.2291 |
| No log | 49.0 | 343 | 1.6202 | 0.5725 | 0.5847 | 2.4882 | 0.5725 | 0.5596 | 0.1642 | 0.2125 |
| No log | 50.0 | 350 | 1.6222 | 0.54 | 0.5882 | 2.4144 | 0.54 | 0.5311 | 0.1830 | 0.2226 |
| No log | 51.0 | 357 | 1.6119 | 0.5775 | 0.5794 | 2.4063 | 0.5775 | 0.5731 | 0.1647 | 0.2019 |
| No log | 52.0 | 364 | 1.5958 | 0.57 | 0.5757 | 2.3342 | 0.57 | 0.5642 | 0.1778 | 0.2094 |
| No log | 53.0 | 371 | 1.6206 | 0.545 | 0.5913 | 2.3884 | 0.545 | 0.5365 | 0.1799 | 0.2187 |
| No log | 54.0 | 378 | 1.5982 | 0.5675 | 0.5745 | 2.4276 | 0.5675 | 0.5640 | 0.1746 | 0.2050 |
| No log | 55.0 | 385 | 1.6258 | 0.5525 | 0.5856 | 2.4005 | 0.5525 | 0.5373 | 0.1890 | 0.2124 |
| No log | 56.0 | 392 | 1.5763 | 0.57 | 0.5744 | 2.4477 | 0.57 | 0.5729 | 0.1651 | 0.2081 |
| No log | 57.0 | 399 | 1.6249 | 0.5525 | 0.5861 | 2.3791 | 0.5525 | 0.5432 | 0.1531 | 0.2114 |
| No log | 58.0 | 406 | 1.6240 | 0.5775 | 0.5791 | 2.4540 | 0.5775 | 0.5730 | 0.1582 | 0.2054 |
| No log | 59.0 | 413 | 1.6149 | 0.545 | 0.5851 | 2.3134 | 0.545 | 0.5395 | 0.1870 | 0.2137 |
| No log | 60.0 | 420 | 1.6163 | 0.5775 | 0.5792 | 2.3778 | 0.5775 | 0.5708 | 0.1762 | 0.2076 |
| No log | 61.0 | 427 | 1.6132 | 0.5575 | 0.5868 | 2.3759 | 0.5575 | 0.5530 | 0.1842 | 0.2159 |
| No log | 62.0 | 434 | 1.5940 | 0.5725 | 0.5756 | 2.3394 | 0.5725 | 0.5731 | 0.2102 | 0.2054 |
| No log | 63.0 | 441 | 1.6167 | 0.56 | 0.5841 | 2.4117 | 0.56 | 0.5541 | 0.1806 | 0.2160 |
| No log | 64.0 | 448 | 1.5988 | 0.57 | 0.5775 | 2.3388 | 0.57 | 0.5667 | 0.1680 | 0.2064 |
| No log | 65.0 | 455 | 1.5893 | 0.5725 | 0.5752 | 2.4281 | 0.5725 | 0.5695 | 0.1624 | 0.2050 |
| No log | 66.0 | 462 | 1.5975 | 0.5725 | 0.5737 | 2.3760 | 0.5725 | 0.5662 | 0.1733 | 0.2026 |
| No log | 67.0 | 469 | 1.5903 | 0.57 | 0.5772 | 2.2921 | 0.57 | 0.5675 | 0.1888 | 0.2112 |
| No log | 68.0 | 476 | 1.5878 | 0.575 | 0.5730 | 2.3676 | 0.575 | 0.5706 | 0.1683 | 0.2039 |
| No log | 69.0 | 483 | 1.5950 | 0.57 | 0.5775 | 2.3006 | 0.57 | 0.5641 | 0.1639 | 0.2076 |
| No log | 70.0 | 490 | 1.5916 | 0.58 | 0.5728 | 2.3424 | 0.58 | 0.5769 | 0.1714 | 0.2026 |
| No log | 71.0 | 497 | 1.5960 | 0.5675 | 0.5784 | 2.3057 | 0.5675 | 0.5624 | 0.1600 | 0.2073 |
| 0.3705 | 72.0 | 504 | 1.5907 | 0.575 | 0.5755 | 2.3322 | 0.575 | 0.5723 | 0.1578 | 0.2066 |
| 0.3705 | 73.0 | 511 | 1.5918 | 0.5675 | 0.5762 | 2.3182 | 0.5675 | 0.5605 | 0.1942 | 0.2071 |
| 0.3705 | 74.0 | 518 | 1.5894 | 0.585 | 0.5747 | 2.3335 | 0.585 | 0.5818 | 0.1739 | 0.2035 |
| 0.3705 | 75.0 | 525 | 1.5878 | 0.565 | 0.5750 | 2.3019 | 0.565 | 0.5607 | 0.1649 | 0.2060 |
| 0.3705 | 76.0 | 532 | 1.5923 | 0.575 | 0.5742 | 2.3376 | 0.575 | 0.5699 | 0.1779 | 0.2048 |
| 0.3705 | 77.0 | 539 | 1.5891 | 0.565 | 0.5760 | 2.2978 | 0.565 | 0.5616 | 0.1691 | 0.2066 |
| 0.3705 | 78.0 | 546 | 1.5896 | 0.575 | 0.5738 | 2.3748 | 0.575 | 0.5703 | 0.1733 | 0.2048 |
| 0.3705 | 79.0 | 553 | 1.5901 | 0.5675 | 0.5757 | 2.3039 | 0.5675 | 0.5634 | 0.1710 | 0.2064 |
| 0.3705 | 80.0 | 560 | 1.5906 | 0.57 | 0.5746 | 2.3125 | 0.57 | 0.5657 | 0.1692 | 0.2054 |
| 0.3705 | 81.0 | 567 | 1.5907 | 0.57 | 0.5751 | 2.3097 | 0.57 | 0.5659 | 0.1600 | 0.2047 |
| 0.3705 | 82.0 | 574 | 1.5902 | 0.57 | 0.5746 | 2.3072 | 0.57 | 0.5657 | 0.1797 | 0.2055 |
| 0.3705 | 83.0 | 581 | 1.5906 | 0.5725 | 0.5746 | 2.3145 | 0.5725 | 0.5681 | 0.1547 | 0.2050 |
| 0.3705 | 84.0 | 588 | 1.5909 | 0.5725 | 0.5750 | 2.3057 | 0.5725 | 0.5684 | 0.1746 | 0.2055 |
| 0.3705 | 85.0 | 595 | 1.5906 | 0.57 | 0.5746 | 2.3098 | 0.57 | 0.5661 | 0.1721 | 0.2054 |
| 0.3705 | 86.0 | 602 | 1.5916 | 0.57 | 0.5749 | 2.3093 | 0.57 | 0.5661 | 0.1659 | 0.2058 |
| 0.3705 | 87.0 | 609 | 1.5913 | 0.57 | 0.5748 | 2.3084 | 0.57 | 0.5661 | 0.1631 | 0.2058 |
| 0.3705 | 88.0 | 616 | 1.5918 | 0.57 | 0.5749 | 2.3082 | 0.57 | 0.5661 | 0.1652 | 0.2058 |
| 0.3705 | 89.0 | 623 | 1.5919 | 0.57 | 0.5750 | 2.3084 | 0.57 | 0.5661 | 0.1658 | 0.2059 |
| 0.3705 | 90.0 | 630 | 1.5918 | 0.5725 | 0.5749 | 2.3087 | 0.5725 | 0.5685 | 0.1650 | 0.2056 |
| 0.3705 | 91.0 | 637 | 1.5921 | 0.57 | 0.5750 | 2.3076 | 0.57 | 0.5661 | 0.1549 | 0.2059 |
| 0.3705 | 92.0 | 644 | 1.5920 | 0.57 | 0.5750 | 2.3079 | 0.57 | 0.5661 | 0.1581 | 0.2058 |
| 0.3705 | 93.0 | 651 | 1.5917 | 0.57 | 0.5749 | 2.3080 | 0.57 | 0.5661 | 0.1680 | 0.2057 |
| 0.3705 | 94.0 | 658 | 1.5923 | 0.57 | 0.5750 | 2.3083 | 0.57 | 0.5661 | 0.1643 | 0.2058 |
| 0.3705 | 95.0 | 665 | 1.5924 | 0.57 | 0.5751 | 2.3085 | 0.57 | 0.5661 | 0.1543 | 0.2059 |
| 0.3705 | 96.0 | 672 | 1.5922 | 0.57 | 0.5750 | 2.3085 | 0.57 | 0.5661 | 0.1530 | 0.2058 |
| 0.3705 | 97.0 | 679 | 1.5923 | 0.57 | 0.5750 | 2.3088 | 0.57 | 0.5661 | 0.1688 | 0.2058 |
| 0.3705 | 98.0 | 686 | 1.5923 | 0.57 | 0.5749 | 2.3089 | 0.57 | 0.5661 | 0.1733 | 0.2058 |
| 0.3705 | 99.0 | 693 | 1.5923 | 0.57 | 0.5750 | 2.3088 | 0.57 | 0.5661 | 0.1735 | 0.2058 |
| 0.3705 | 100.0 | 700 | 1.5923 | 0.57 | 0.5750 | 2.3088 | 0.57 | 0.5661 | 0.1722 | 0.2058 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|