| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| ayanban011/vit-base_tobacco_lr1e-5_wr_0.05_wd_0.1 | ayanban011 | 2023-07-13T14:14:51Z | 165 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-07-13T10:54:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_lr1e-5_wr_0.05_wd_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_lr1e-5_wr_0.05_wd_0.1
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9592
- Accuracy: 0.775
- Brier Loss: 0.3981
- NLL: 1.5416
- F1 Micro: 0.775
- F1 Macro: 0.7418
- ECE: 0.2227
- AURC: 0.1082
## Model description
More information needed
## Intended uses & limitations
More information needed
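Pending a fuller write-up, here is a minimal inference sketch, assuming the checkpoint bundles its image processor (the input file name is a placeholder):

```python
from transformers import pipeline

# Minimal usage sketch; "document.png" is a placeholder input image
classifier = pipeline(
    "image-classification",
    model="ayanban011/vit-base_tobacco_lr1e-5_wr_0.05_wd_0.1",
)
print(classifier("document.png"))
```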
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 0.7440 | 0.815 | 0.3076 | 1.1842 | 0.815 | 0.7942 | 0.2216 | 0.0733 |
| No log | 2.0 | 25 | 0.7436 | 0.82 | 0.3075 | 1.1869 | 0.82 | 0.8049 | 0.2132 | 0.0741 |
| No log | 2.96 | 37 | 0.7454 | 0.81 | 0.3085 | 1.1880 | 0.81 | 0.7914 | 0.2312 | 0.0755 |
| No log | 4.0 | 50 | 0.7439 | 0.815 | 0.3077 | 1.1846 | 0.815 | 0.7926 | 0.2369 | 0.0760 |
| No log | 4.96 | 62 | 0.7370 | 0.82 | 0.3040 | 1.1982 | 0.82 | 0.8028 | 0.2374 | 0.0745 |
| No log | 6.0 | 75 | 0.7507 | 0.82 | 0.3112 | 1.1980 | 0.82 | 0.8005 | 0.2513 | 0.0809 |
| No log | 6.96 | 87 | 0.7370 | 0.805 | 0.3060 | 1.1778 | 0.805 | 0.7841 | 0.2522 | 0.0746 |
| No log | 8.0 | 100 | 0.7437 | 0.81 | 0.3076 | 1.1846 | 0.81 | 0.7877 | 0.2301 | 0.0804 |
| No log | 8.96 | 112 | 0.7311 | 0.81 | 0.3031 | 1.1975 | 0.81 | 0.7920 | 0.2084 | 0.0753 |
| No log | 10.0 | 125 | 0.7305 | 0.8 | 0.3020 | 1.1785 | 0.8000 | 0.7792 | 0.2131 | 0.0777 |
| No log | 10.96 | 137 | 0.7478 | 0.805 | 0.3119 | 1.3979 | 0.805 | 0.7860 | 0.2133 | 0.0827 |
| No log | 12.0 | 150 | 0.7469 | 0.805 | 0.3082 | 1.3337 | 0.805 | 0.7844 | 0.2213 | 0.0843 |
| No log | 12.96 | 162 | 0.7545 | 0.805 | 0.3114 | 1.4280 | 0.805 | 0.7893 | 0.2092 | 0.0935 |
| No log | 14.0 | 175 | 0.7283 | 0.795 | 0.3012 | 1.1856 | 0.795 | 0.7739 | 0.2182 | 0.0806 |
| No log | 14.96 | 187 | 0.7219 | 0.815 | 0.2972 | 1.2792 | 0.815 | 0.8043 | 0.2024 | 0.0734 |
| No log | 16.0 | 200 | 0.7284 | 0.805 | 0.3001 | 1.2528 | 0.805 | 0.7899 | 0.2068 | 0.0858 |
| No log | 16.96 | 212 | 0.7191 | 0.805 | 0.2981 | 1.3067 | 0.805 | 0.7919 | 0.2062 | 0.0809 |
| No log | 18.0 | 225 | 0.7221 | 0.8 | 0.3011 | 1.1747 | 0.8000 | 0.7792 | 0.2091 | 0.0803 |
| No log | 18.96 | 237 | 0.7253 | 0.81 | 0.2995 | 1.3143 | 0.81 | 0.7955 | 0.2136 | 0.0889 |
| No log | 20.0 | 250 | 0.7186 | 0.8 | 0.2981 | 1.1839 | 0.8000 | 0.7819 | 0.1899 | 0.0812 |
| No log | 20.96 | 262 | 0.7247 | 0.805 | 0.3012 | 1.2501 | 0.805 | 0.7925 | 0.2214 | 0.0891 |
| No log | 22.0 | 275 | 0.7317 | 0.805 | 0.3058 | 1.3767 | 0.805 | 0.7853 | 0.2141 | 0.0893 |
| No log | 22.96 | 287 | 0.7250 | 0.81 | 0.3031 | 1.3683 | 0.81 | 0.7907 | 0.1886 | 0.0838 |
| No log | 24.0 | 300 | 0.7137 | 0.805 | 0.2983 | 1.3088 | 0.805 | 0.7851 | 0.1799 | 0.0782 |
| No log | 24.96 | 312 | 0.7334 | 0.81 | 0.3070 | 1.4296 | 0.81 | 0.7909 | 0.1903 | 0.0898 |
| No log | 26.0 | 325 | 0.7284 | 0.81 | 0.3035 | 1.2467 | 0.81 | 0.7984 | 0.2152 | 0.0916 |
| No log | 26.96 | 337 | 0.7242 | 0.805 | 0.3020 | 1.3077 | 0.805 | 0.7862 | 0.2071 | 0.0891 |
| No log | 28.0 | 350 | 0.7285 | 0.81 | 0.3028 | 1.3756 | 0.81 | 0.7910 | 0.2158 | 0.0915 |
| No log | 28.96 | 362 | 0.7253 | 0.8 | 0.3016 | 1.3714 | 0.8000 | 0.7716 | 0.2057 | 0.0894 |
| No log | 30.0 | 375 | 0.7321 | 0.8 | 0.3068 | 1.3688 | 0.8000 | 0.7736 | 0.1943 | 0.0885 |
| No log | 30.96 | 387 | 0.7294 | 0.8 | 0.3047 | 1.3713 | 0.8000 | 0.7746 | 0.2138 | 0.0900 |
| No log | 32.0 | 400 | 0.7296 | 0.81 | 0.3054 | 1.3749 | 0.81 | 0.7921 | 0.2074 | 0.0910 |
| No log | 32.96 | 412 | 0.7311 | 0.805 | 0.3061 | 1.3704 | 0.805 | 0.7811 | 0.1984 | 0.0920 |
| No log | 34.0 | 425 | 0.7291 | 0.805 | 0.3049 | 1.3686 | 0.805 | 0.7811 | 0.2126 | 0.0916 |
| No log | 34.96 | 437 | 0.7301 | 0.795 | 0.3048 | 1.3712 | 0.795 | 0.7654 | 0.1917 | 0.0904 |
| No log | 36.0 | 450 | 0.7318 | 0.81 | 0.3072 | 1.3695 | 0.81 | 0.7844 | 0.1976 | 0.0900 |
| No log | 36.96 | 462 | 0.7403 | 0.795 | 0.3102 | 1.3712 | 0.795 | 0.7656 | 0.2039 | 0.0934 |
| No log | 38.0 | 475 | 0.7376 | 0.795 | 0.3095 | 1.3653 | 0.795 | 0.7654 | 0.1982 | 0.0920 |
| No log | 38.96 | 487 | 0.7326 | 0.805 | 0.3049 | 1.3815 | 0.805 | 0.7744 | 0.1820 | 0.0948 |
| 0.1331 | 40.0 | 500 | 0.7268 | 0.8 | 0.3038 | 1.3702 | 0.8000 | 0.7704 | 0.2051 | 0.0899 |
| 0.1331 | 40.96 | 512 | 0.7371 | 0.8 | 0.3074 | 1.3824 | 0.8000 | 0.7684 | 0.1946 | 0.0939 |
| 0.1331 | 42.0 | 525 | 0.7374 | 0.81 | 0.3107 | 1.3600 | 0.81 | 0.7844 | 0.2109 | 0.0910 |
| 0.1331 | 42.96 | 537 | 0.7366 | 0.8 | 0.3071 | 1.4434 | 0.8000 | 0.7776 | 0.2042 | 0.0935 |
| 0.1331 | 44.0 | 550 | 0.7362 | 0.805 | 0.3083 | 1.3721 | 0.805 | 0.7829 | 0.1782 | 0.0929 |
| 0.1331 | 44.96 | 562 | 0.7389 | 0.8 | 0.3110 | 1.3695 | 0.8000 | 0.7704 | 0.1966 | 0.0917 |
| 0.1331 | 46.0 | 575 | 0.7426 | 0.79 | 0.3108 | 1.5068 | 0.79 | 0.7644 | 0.1938 | 0.0968 |
| 0.1331 | 46.96 | 587 | 0.7395 | 0.8 | 0.3096 | 1.3760 | 0.8000 | 0.7704 | 0.1951 | 0.0927 |
| 0.1331 | 48.0 | 600 | 0.7540 | 0.805 | 0.3185 | 1.4936 | 0.805 | 0.7821 | 0.1958 | 0.0979 |
| 0.1331 | 48.96 | 612 | 0.7413 | 0.805 | 0.3116 | 1.4368 | 0.805 | 0.7829 | 0.1835 | 0.0955 |
| 0.1331 | 50.0 | 625 | 0.7543 | 0.805 | 0.3167 | 1.4402 | 0.805 | 0.7831 | 0.2143 | 0.0974 |
| 0.1331 | 50.96 | 637 | 0.7378 | 0.805 | 0.3087 | 1.3850 | 0.805 | 0.7829 | 0.1886 | 0.0935 |
| 0.1331 | 52.0 | 650 | 0.7545 | 0.795 | 0.3175 | 1.3873 | 0.795 | 0.7656 | 0.2007 | 0.0957 |
| 0.1331 | 52.96 | 662 | 0.7464 | 0.8 | 0.3140 | 1.3734 | 0.8000 | 0.7707 | 0.1872 | 0.0938 |
| 0.1331 | 54.0 | 675 | 0.7439 | 0.8 | 0.3120 | 1.3765 | 0.8000 | 0.7704 | 0.2036 | 0.0942 |
| 0.1331 | 54.96 | 687 | 0.7506 | 0.8 | 0.3150 | 1.3788 | 0.8000 | 0.7707 | 0.1788 | 0.0959 |
| 0.1331 | 56.0 | 700 | 0.7511 | 0.805 | 0.3158 | 1.4378 | 0.805 | 0.7829 | 0.2054 | 0.0955 |
| 0.1331 | 56.96 | 712 | 0.7587 | 0.805 | 0.3196 | 1.4494 | 0.805 | 0.7831 | 0.1844 | 0.0972 |
| 0.1331 | 58.0 | 725 | 0.7505 | 0.8 | 0.3154 | 1.3759 | 0.8000 | 0.7704 | 0.1913 | 0.0953 |
| 0.1331 | 58.96 | 737 | 0.7553 | 0.79 | 0.3167 | 1.4457 | 0.79 | 0.7549 | 0.1977 | 0.0959 |
| 0.1331 | 60.0 | 750 | 0.7543 | 0.8 | 0.3175 | 1.3807 | 0.8000 | 0.7707 | 0.1963 | 0.0953 |
| 0.1331 | 60.96 | 762 | 0.7592 | 0.795 | 0.3200 | 1.3759 | 0.795 | 0.7681 | 0.1986 | 0.0961 |
| 0.1331 | 62.0 | 775 | 0.7557 | 0.795 | 0.3185 | 1.3785 | 0.795 | 0.7634 | 0.1971 | 0.0948 |
| 0.1331 | 62.96 | 787 | 0.7591 | 0.79 | 0.3200 | 1.4466 | 0.79 | 0.7613 | 0.2033 | 0.0963 |
| 0.1331 | 64.0 | 800 | 0.7624 | 0.795 | 0.3210 | 1.4423 | 0.795 | 0.7621 | 0.2030 | 0.0962 |
| 0.1331 | 64.96 | 812 | 0.7674 | 0.79 | 0.3240 | 1.4454 | 0.79 | 0.7596 | 0.1973 | 0.0969 |
| 0.1331 | 66.0 | 825 | 0.7645 | 0.79 | 0.3224 | 1.4497 | 0.79 | 0.7611 | 0.1999 | 0.0964 |
| 0.1331 | 66.96 | 837 | 0.7652 | 0.795 | 0.3234 | 1.4418 | 0.795 | 0.7668 | 0.1819 | 0.0968 |
| 0.1331 | 68.0 | 850 | 0.7695 | 0.795 | 0.3250 | 1.4969 | 0.795 | 0.7606 | 0.1914 | 0.0979 |
| 0.1331 | 68.96 | 862 | 0.7708 | 0.785 | 0.3258 | 1.4482 | 0.785 | 0.7516 | 0.1954 | 0.0976 |
| 0.1331 | 70.0 | 875 | 0.7691 | 0.795 | 0.3249 | 1.4960 | 0.795 | 0.7673 | 0.1895 | 0.0976 |
| 0.1331 | 70.96 | 887 | 0.7741 | 0.785 | 0.3272 | 1.5043 | 0.785 | 0.7519 | 0.1898 | 0.0982 |
| 0.1331 | 72.0 | 900 | 0.7788 | 0.79 | 0.3293 | 1.5094 | 0.79 | 0.7611 | 0.1738 | 0.0995 |
| 0.1331 | 72.96 | 912 | 0.7837 | 0.785 | 0.3329 | 1.5306 | 0.785 | 0.7577 | 0.2002 | 0.1004 |
| 0.1331 | 74.0 | 925 | 0.7755 | 0.785 | 0.3280 | 1.4985 | 0.785 | 0.7514 | 0.1906 | 0.0981 |
| 0.1331 | 74.96 | 937 | 0.7797 | 0.785 | 0.3308 | 1.4611 | 0.785 | 0.7580 | 0.1925 | 0.0994 |
| 0.1331 | 76.0 | 950 | 0.7744 | 0.785 | 0.3273 | 1.4441 | 0.785 | 0.7519 | 0.1929 | 0.0976 |
| 0.1331 | 76.96 | 962 | 0.7766 | 0.785 | 0.3295 | 1.4420 | 0.785 | 0.7516 | 0.1899 | 0.0972 |
| 0.1331 | 78.0 | 975 | 0.7888 | 0.785 | 0.3339 | 1.4991 | 0.785 | 0.7573 | 0.1879 | 0.0998 |
| 0.1331 | 78.96 | 987 | 0.7765 | 0.795 | 0.3292 | 1.4915 | 0.795 | 0.7663 | 0.1750 | 0.0948 |
| 0.071 | 80.0 | 1000 | 0.7821 | 0.785 | 0.3303 | 1.4990 | 0.785 | 0.7519 | 0.1940 | 0.0986 |
| 0.071 | 80.96 | 1012 | 0.7860 | 0.79 | 0.3330 | 1.4977 | 0.79 | 0.7644 | 0.1698 | 0.0976 |
| 0.071 | 82.0 | 1025 | 0.7882 | 0.78 | 0.3342 | 1.5243 | 0.78 | 0.7482 | 0.1930 | 0.1006 |
| 0.071 | 82.96 | 1037 | 0.7879 | 0.78 | 0.3333 | 1.5037 | 0.78 | 0.7491 | 0.2055 | 0.0995 |
| 0.071 | 84.0 | 1050 | 0.7842 | 0.78 | 0.3326 | 1.4959 | 0.78 | 0.7488 | 0.1945 | 0.0985 |
| 0.071 | 84.96 | 1062 | 0.7866 | 0.78 | 0.3338 | 1.4961 | 0.78 | 0.7488 | 0.1877 | 0.0982 |
| 0.071 | 86.0 | 1075 | 0.7931 | 0.785 | 0.3369 | 1.5006 | 0.785 | 0.7573 | 0.1898 | 0.1003 |
| 0.071 | 86.96 | 1087 | 0.7937 | 0.78 | 0.3360 | 1.5043 | 0.78 | 0.7488 | 0.1828 | 0.0999 |
| 0.071 | 88.0 | 1100 | 0.7948 | 0.78 | 0.3374 | 1.5034 | 0.78 | 0.7488 | 0.1893 | 0.0999 |
| 0.071 | 88.96 | 1112 | 0.7962 | 0.78 | 0.3372 | 1.5078 | 0.78 | 0.7494 | 0.1943 | 0.1011 |
| 0.071 | 90.0 | 1125 | 0.7956 | 0.785 | 0.3377 | 1.5039 | 0.785 | 0.7516 | 0.1918 | 0.0999 |
| 0.071 | 90.96 | 1137 | 0.7996 | 0.78 | 0.3382 | 1.5060 | 0.78 | 0.7491 | 0.1982 | 0.1013 |
| 0.071 | 92.0 | 1150 | 0.7980 | 0.78 | 0.3381 | 1.5023 | 0.78 | 0.7488 | 0.1902 | 0.1004 |
| 0.071 | 92.96 | 1162 | 0.8015 | 0.78 | 0.3396 | 1.5029 | 0.78 | 0.7488 | 0.1978 | 0.1007 |
| 0.071 | 94.0 | 1175 | 0.8044 | 0.78 | 0.3411 | 1.5047 | 0.78 | 0.7488 | 0.1929 | 0.1012 |
| 0.071 | 94.96 | 1187 | 0.7977 | 0.78 | 0.3392 | 1.4989 | 0.78 | 0.7488 | 0.1866 | 0.0989 |
| 0.071 | 96.0 | 1200 | 0.8071 | 0.78 | 0.3425 | 1.5021 | 0.78 | 0.7488 | 0.1941 | 0.1018 |
| 0.071 | 96.96 | 1212 | 0.8033 | 0.78 | 0.3406 | 1.4967 | 0.78 | 0.7488 | 0.1913 | 0.1000 |
| 0.071 | 98.0 | 1225 | 0.8148 | 0.775 | 0.3466 | 1.4555 | 0.775 | 0.7462 | 0.1828 | 0.1036 |
| 0.071 | 98.96 | 1237 | 0.8062 | 0.78 | 0.3417 | 1.5007 | 0.78 | 0.7488 | 0.1949 | 0.1004 |
| 0.071 | 100.0 | 1250 | 0.8123 | 0.77 | 0.3456 | 1.5069 | 0.7700 | 0.7424 | 0.1964 | 0.1020 |
| 0.071 | 100.96 | 1262 | 0.8117 | 0.78 | 0.3452 | 1.5048 | 0.78 | 0.7488 | 0.2081 | 0.1020 |
| 0.071 | 102.0 | 1275 | 0.8125 | 0.77 | 0.3454 | 1.5066 | 0.7700 | 0.7424 | 0.2040 | 0.1022 |
| 0.071 | 102.96 | 1287 | 0.8134 | 0.775 | 0.3458 | 1.5048 | 0.775 | 0.7450 | 0.1977 | 0.1013 |
| 0.071 | 104.0 | 1300 | 0.8152 | 0.78 | 0.3461 | 1.5027 | 0.78 | 0.7488 | 0.2044 | 0.1014 |
| 0.071 | 104.96 | 1312 | 0.8185 | 0.78 | 0.3478 | 1.5057 | 0.78 | 0.7488 | 0.1900 | 0.1022 |
| 0.071 | 106.0 | 1325 | 0.8191 | 0.78 | 0.3480 | 1.5053 | 0.78 | 0.7488 | 0.2084 | 0.1026 |
| 0.071 | 106.96 | 1337 | 0.8207 | 0.77 | 0.3497 | 1.5095 | 0.7700 | 0.7424 | 0.1984 | 0.1025 |
| 0.071 | 108.0 | 1350 | 0.8221 | 0.77 | 0.3487 | 1.5095 | 0.7700 | 0.7424 | 0.1871 | 0.1031 |
| 0.071 | 108.96 | 1362 | 0.8229 | 0.765 | 0.3501 | 1.4607 | 0.765 | 0.7331 | 0.1920 | 0.1028 |
| 0.071 | 110.0 | 1375 | 0.8232 | 0.78 | 0.3498 | 1.5044 | 0.78 | 0.7488 | 0.1995 | 0.1023 |
| 0.071 | 110.96 | 1387 | 0.8279 | 0.785 | 0.3513 | 1.5060 | 0.785 | 0.7526 | 0.2073 | 0.1033 |
| 0.071 | 112.0 | 1400 | 0.8246 | 0.775 | 0.3505 | 1.5038 | 0.775 | 0.7450 | 0.1927 | 0.1018 |
| 0.071 | 112.96 | 1412 | 0.8308 | 0.765 | 0.3537 | 1.5095 | 0.765 | 0.7331 | 0.1931 | 0.1035 |
| 0.071 | 114.0 | 1425 | 0.8277 | 0.775 | 0.3513 | 1.5058 | 0.775 | 0.7395 | 0.1977 | 0.1022 |
| 0.071 | 114.96 | 1437 | 0.8302 | 0.76 | 0.3531 | 1.4583 | 0.76 | 0.7296 | 0.2112 | 0.1028 |
| 0.071 | 116.0 | 1450 | 0.8328 | 0.765 | 0.3535 | 1.5125 | 0.765 | 0.7331 | 0.2008 | 0.1037 |
| 0.071 | 116.96 | 1462 | 0.8309 | 0.76 | 0.3533 | 1.4542 | 0.76 | 0.7296 | 0.2037 | 0.1029 |
| 0.071 | 118.0 | 1475 | 0.8378 | 0.765 | 0.3558 | 1.5162 | 0.765 | 0.7323 | 0.2040 | 0.1055 |
| 0.071 | 118.96 | 1487 | 0.8341 | 0.76 | 0.3547 | 1.5076 | 0.76 | 0.7296 | 0.1942 | 0.1032 |
| 0.0462 | 120.0 | 1500 | 0.8367 | 0.76 | 0.3557 | 1.5134 | 0.76 | 0.7296 | 0.1987 | 0.1034 |
| 0.0462 | 120.96 | 1512 | 0.8369 | 0.76 | 0.3553 | 1.5081 | 0.76 | 0.7296 | 0.2121 | 0.1036 |
| 0.0462 | 122.0 | 1525 | 0.8385 | 0.77 | 0.3560 | 1.5076 | 0.7700 | 0.7357 | 0.1944 | 0.1034 |
| 0.0462 | 122.96 | 1537 | 0.8415 | 0.76 | 0.3577 | 1.5127 | 0.76 | 0.7296 | 0.2080 | 0.1040 |
| 0.0462 | 124.0 | 1550 | 0.8418 | 0.765 | 0.3571 | 1.5123 | 0.765 | 0.7333 | 0.1905 | 0.1043 |
| 0.0462 | 124.96 | 1562 | 0.8431 | 0.76 | 0.3581 | 1.5124 | 0.76 | 0.7296 | 0.2029 | 0.1043 |
| 0.0462 | 126.0 | 1575 | 0.8461 | 0.765 | 0.3595 | 1.5115 | 0.765 | 0.7331 | 0.1861 | 0.1044 |
| 0.0462 | 126.96 | 1587 | 0.8446 | 0.76 | 0.3586 | 1.5117 | 0.76 | 0.7296 | 0.1962 | 0.1043 |
| 0.0462 | 128.0 | 1600 | 0.8448 | 0.765 | 0.3585 | 1.5106 | 0.765 | 0.7333 | 0.1899 | 0.1048 |
| 0.0462 | 128.96 | 1612 | 0.8503 | 0.765 | 0.3611 | 1.5156 | 0.765 | 0.7323 | 0.1865 | 0.1050 |
| 0.0462 | 130.0 | 1625 | 0.8473 | 0.765 | 0.3597 | 1.5082 | 0.765 | 0.7333 | 0.1992 | 0.1040 |
| 0.0462 | 130.96 | 1637 | 0.8530 | 0.76 | 0.3617 | 1.5178 | 0.76 | 0.7296 | 0.2008 | 0.1053 |
| 0.0462 | 132.0 | 1650 | 0.8499 | 0.765 | 0.3608 | 1.5105 | 0.765 | 0.7321 | 0.1910 | 0.1035 |
| 0.0462 | 132.96 | 1662 | 0.8529 | 0.765 | 0.3612 | 1.5095 | 0.765 | 0.7333 | 0.1943 | 0.1043 |
| 0.0462 | 134.0 | 1675 | 0.8547 | 0.765 | 0.3635 | 1.5095 | 0.765 | 0.7321 | 0.2002 | 0.1032 |
| 0.0462 | 134.96 | 1687 | 0.8572 | 0.765 | 0.3638 | 1.5159 | 0.765 | 0.7333 | 0.1979 | 0.1056 |
| 0.0462 | 136.0 | 1700 | 0.8582 | 0.765 | 0.3642 | 1.5165 | 0.765 | 0.7333 | 0.2026 | 0.1057 |
| 0.0462 | 136.96 | 1712 | 0.8581 | 0.76 | 0.3639 | 1.5118 | 0.76 | 0.7296 | 0.1965 | 0.1052 |
| 0.0462 | 138.0 | 1725 | 0.8570 | 0.77 | 0.3629 | 1.5094 | 0.7700 | 0.7358 | 0.1870 | 0.1029 |
| 0.0462 | 138.96 | 1737 | 0.8611 | 0.76 | 0.3650 | 1.5129 | 0.76 | 0.7296 | 0.1919 | 0.1040 |
| 0.0462 | 140.0 | 1750 | 0.8618 | 0.76 | 0.3659 | 1.5131 | 0.76 | 0.7296 | 0.1981 | 0.1042 |
| 0.0462 | 140.96 | 1762 | 0.8605 | 0.765 | 0.3652 | 1.5115 | 0.765 | 0.7333 | 0.1875 | 0.1048 |
| 0.0462 | 142.0 | 1775 | 0.8647 | 0.76 | 0.3666 | 1.5157 | 0.76 | 0.7296 | 0.2002 | 0.1052 |
| 0.0462 | 142.96 | 1787 | 0.8618 | 0.76 | 0.3654 | 1.5116 | 0.76 | 0.7296 | 0.2006 | 0.1045 |
| 0.0462 | 144.0 | 1800 | 0.8672 | 0.765 | 0.3672 | 1.5160 | 0.765 | 0.7333 | 0.1979 | 0.1053 |
| 0.0462 | 144.96 | 1812 | 0.8625 | 0.77 | 0.3648 | 1.5080 | 0.7700 | 0.7358 | 0.1975 | 0.1026 |
| 0.0462 | 146.0 | 1825 | 0.8695 | 0.765 | 0.3679 | 1.5169 | 0.765 | 0.7323 | 0.1973 | 0.1051 |
| 0.0462 | 146.96 | 1837 | 0.8696 | 0.76 | 0.3685 | 1.5132 | 0.76 | 0.7296 | 0.1936 | 0.1037 |
| 0.0462 | 148.0 | 1850 | 0.8678 | 0.765 | 0.3671 | 1.5110 | 0.765 | 0.7333 | 0.2008 | 0.1040 |
| 0.0462 | 148.96 | 1862 | 0.8713 | 0.765 | 0.3690 | 1.5152 | 0.765 | 0.7333 | 0.1983 | 0.1050 |
| 0.0462 | 150.0 | 1875 | 0.8716 | 0.765 | 0.3687 | 1.5163 | 0.765 | 0.7323 | 0.2029 | 0.1051 |
| 0.0462 | 150.96 | 1887 | 0.8724 | 0.77 | 0.3691 | 1.5113 | 0.7700 | 0.7358 | 0.1997 | 0.1037 |
| 0.0462 | 152.0 | 1900 | 0.8729 | 0.765 | 0.3695 | 1.5134 | 0.765 | 0.7333 | 0.1966 | 0.1050 |
| 0.0462 | 152.96 | 1912 | 0.8760 | 0.765 | 0.3706 | 1.5131 | 0.765 | 0.7333 | 0.2046 | 0.1040 |
| 0.0462 | 154.0 | 1925 | 0.8761 | 0.765 | 0.3707 | 1.5138 | 0.765 | 0.7333 | 0.1896 | 0.1037 |
| 0.0462 | 154.96 | 1937 | 0.8778 | 0.765 | 0.3711 | 1.5138 | 0.765 | 0.7333 | 0.2012 | 0.1046 |
| 0.0462 | 156.0 | 1950 | 0.8768 | 0.765 | 0.3712 | 1.5125 | 0.765 | 0.7333 | 0.1891 | 0.1041 |
| 0.0462 | 156.96 | 1962 | 0.8816 | 0.77 | 0.3732 | 1.5205 | 0.7700 | 0.7360 | 0.1993 | 0.1067 |
| 0.0462 | 158.0 | 1975 | 0.8793 | 0.765 | 0.3718 | 1.5157 | 0.765 | 0.7333 | 0.2025 | 0.1049 |
| 0.0462 | 158.96 | 1987 | 0.8788 | 0.77 | 0.3713 | 1.5126 | 0.7700 | 0.7358 | 0.2044 | 0.1039 |
| 0.0335 | 160.0 | 2000 | 0.8851 | 0.77 | 0.3739 | 1.5193 | 0.7700 | 0.7360 | 0.2042 | 0.1069 |
| 0.0335 | 160.96 | 2012 | 0.8872 | 0.77 | 0.3748 | 1.5200 | 0.7700 | 0.7360 | 0.2009 | 0.1057 |
| 0.0335 | 162.0 | 2025 | 0.8827 | 0.765 | 0.3731 | 1.5144 | 0.765 | 0.7333 | 0.1897 | 0.1050 |
| 0.0335 | 162.96 | 2037 | 0.8821 | 0.765 | 0.3724 | 1.5129 | 0.765 | 0.7333 | 0.1971 | 0.1042 |
| 0.0335 | 164.0 | 2050 | 0.8919 | 0.77 | 0.3770 | 1.5229 | 0.7700 | 0.7360 | 0.2119 | 0.1061 |
| 0.0335 | 164.96 | 2062 | 0.8907 | 0.765 | 0.3764 | 1.5240 | 0.765 | 0.7323 | 0.2125 | 0.1069 |
| 0.0335 | 166.0 | 2075 | 0.8857 | 0.765 | 0.3743 | 1.5127 | 0.765 | 0.7333 | 0.1906 | 0.1044 |
| 0.0335 | 166.96 | 2087 | 0.8928 | 0.77 | 0.3771 | 1.5253 | 0.7700 | 0.7360 | 0.2062 | 0.1062 |
| 0.0335 | 168.0 | 2100 | 0.8895 | 0.77 | 0.3750 | 1.5179 | 0.7700 | 0.7360 | 0.2062 | 0.1054 |
| 0.0335 | 168.96 | 2112 | 0.8904 | 0.77 | 0.3754 | 1.5178 | 0.7700 | 0.7360 | 0.2048 | 0.1055 |
| 0.0335 | 170.0 | 2125 | 0.8919 | 0.765 | 0.3766 | 1.5137 | 0.765 | 0.7333 | 0.2170 | 0.1044 |
| 0.0335 | 170.96 | 2137 | 0.8949 | 0.77 | 0.3779 | 1.5203 | 0.7700 | 0.7360 | 0.2042 | 0.1069 |
| 0.0335 | 172.0 | 2150 | 0.8949 | 0.77 | 0.3779 | 1.5204 | 0.7700 | 0.7360 | 0.2078 | 0.1069 |
| 0.0335 | 172.96 | 2162 | 0.8986 | 0.765 | 0.3794 | 1.5241 | 0.765 | 0.7310 | 0.2079 | 0.1072 |
| 0.0335 | 174.0 | 2175 | 0.8978 | 0.76 | 0.3787 | 1.5201 | 0.76 | 0.7272 | 0.2108 | 0.1056 |
| 0.0335 | 174.96 | 2187 | 0.8990 | 0.77 | 0.3786 | 1.5198 | 0.7700 | 0.7360 | 0.2032 | 0.1053 |
| 0.0335 | 176.0 | 2200 | 0.9003 | 0.77 | 0.3794 | 1.5206 | 0.7700 | 0.7360 | 0.1996 | 0.1060 |
| 0.0335 | 176.96 | 2212 | 0.9000 | 0.77 | 0.3797 | 1.5196 | 0.7700 | 0.7360 | 0.2116 | 0.1063 |
| 0.0335 | 178.0 | 2225 | 0.9000 | 0.765 | 0.3794 | 1.5178 | 0.765 | 0.7333 | 0.1875 | 0.1055 |
| 0.0335 | 178.96 | 2237 | 0.9034 | 0.77 | 0.3804 | 1.5218 | 0.7700 | 0.7360 | 0.1964 | 0.1068 |
| 0.0335 | 180.0 | 2250 | 0.9020 | 0.77 | 0.3802 | 1.5198 | 0.7700 | 0.7360 | 0.2058 | 0.1063 |
| 0.0335 | 180.96 | 2262 | 0.9037 | 0.77 | 0.3808 | 1.5192 | 0.7700 | 0.7360 | 0.1976 | 0.1063 |
| 0.0335 | 182.0 | 2275 | 0.9059 | 0.77 | 0.3812 | 1.5227 | 0.7700 | 0.7360 | 0.1962 | 0.1067 |
| 0.0335 | 182.96 | 2287 | 0.9063 | 0.77 | 0.3818 | 1.5206 | 0.7700 | 0.7360 | 0.2000 | 0.1065 |
| 0.0335 | 184.0 | 2300 | 0.9058 | 0.77 | 0.3814 | 1.5196 | 0.7700 | 0.7360 | 0.1926 | 0.1061 |
| 0.0335 | 184.96 | 2312 | 0.9082 | 0.77 | 0.3821 | 1.5211 | 0.7700 | 0.7360 | 0.2001 | 0.1067 |
| 0.0335 | 186.0 | 2325 | 0.9083 | 0.77 | 0.3824 | 1.5204 | 0.7700 | 0.7360 | 0.2062 | 0.1057 |
| 0.0335 | 186.96 | 2337 | 0.9090 | 0.77 | 0.3824 | 1.5220 | 0.7700 | 0.7360 | 0.2027 | 0.1063 |
| 0.0335 | 188.0 | 2350 | 0.9106 | 0.77 | 0.3828 | 1.5213 | 0.7700 | 0.7360 | 0.1968 | 0.1068 |
| 0.0335 | 188.96 | 2362 | 0.9116 | 0.77 | 0.3829 | 1.5238 | 0.7700 | 0.7360 | 0.2029 | 0.1071 |
| 0.0335 | 190.0 | 2375 | 0.9120 | 0.77 | 0.3835 | 1.5225 | 0.7700 | 0.7360 | 0.1953 | 0.1064 |
| 0.0335 | 190.96 | 2387 | 0.9123 | 0.77 | 0.3835 | 1.5227 | 0.7700 | 0.7360 | 0.2080 | 0.1069 |
| 0.0335 | 192.0 | 2400 | 0.9131 | 0.775 | 0.3838 | 1.5222 | 0.775 | 0.7418 | 0.2039 | 0.1061 |
| 0.0335 | 192.96 | 2412 | 0.9144 | 0.765 | 0.3841 | 1.5200 | 0.765 | 0.7333 | 0.2163 | 0.1060 |
| 0.0335 | 194.0 | 2425 | 0.9138 | 0.77 | 0.3839 | 1.5200 | 0.7700 | 0.7360 | 0.2092 | 0.1057 |
| 0.0335 | 194.96 | 2437 | 0.9164 | 0.775 | 0.3850 | 1.5249 | 0.775 | 0.7418 | 0.2188 | 0.1065 |
| 0.0335 | 196.0 | 2450 | 0.9185 | 0.77 | 0.3861 | 1.5257 | 0.7700 | 0.7360 | 0.2087 | 0.1067 |
| 0.0335 | 196.96 | 2462 | 0.9207 | 0.77 | 0.3868 | 1.5286 | 0.7700 | 0.7360 | 0.2063 | 0.1074 |
| 0.0335 | 198.0 | 2475 | 0.9191 | 0.77 | 0.3858 | 1.5254 | 0.7700 | 0.7360 | 0.2129 | 0.1068 |
| 0.0335 | 198.96 | 2487 | 0.9195 | 0.77 | 0.3861 | 1.5240 | 0.7700 | 0.7360 | 0.2059 | 0.1066 |
| 0.0264 | 200.0 | 2500 | 0.9205 | 0.77 | 0.3864 | 1.5246 | 0.7700 | 0.7360 | 0.2081 | 0.1069 |
| 0.0264 | 200.96 | 2512 | 0.9214 | 0.77 | 0.3865 | 1.5235 | 0.7700 | 0.7360 | 0.2018 | 0.1066 |
| 0.0264 | 202.0 | 2525 | 0.9216 | 0.775 | 0.3867 | 1.5253 | 0.775 | 0.7418 | 0.2156 | 0.1068 |
| 0.0264 | 202.96 | 2537 | 0.9218 | 0.775 | 0.3870 | 1.5225 | 0.775 | 0.7418 | 0.2108 | 0.1064 |
| 0.0264 | 204.0 | 2550 | 0.9241 | 0.775 | 0.3871 | 1.4893 | 0.775 | 0.7418 | 0.2087 | 0.1071 |
| 0.0264 | 204.96 | 2562 | 0.9270 | 0.77 | 0.3889 | 1.5244 | 0.7700 | 0.7360 | 0.2024 | 0.1071 |
| 0.0264 | 206.0 | 2575 | 0.9260 | 0.775 | 0.3885 | 1.5262 | 0.775 | 0.7418 | 0.2116 | 0.1069 |
| 0.0264 | 206.96 | 2587 | 0.9259 | 0.775 | 0.3883 | 1.5269 | 0.775 | 0.7418 | 0.2089 | 0.1065 |
| 0.0264 | 208.0 | 2600 | 0.9254 | 0.77 | 0.3875 | 1.5247 | 0.7700 | 0.7360 | 0.2060 | 0.1069 |
| 0.0264 | 208.96 | 2612 | 0.9285 | 0.775 | 0.3889 | 1.5281 | 0.775 | 0.7418 | 0.2115 | 0.1074 |
| 0.0264 | 210.0 | 2625 | 0.9277 | 0.775 | 0.3886 | 1.5254 | 0.775 | 0.7418 | 0.2114 | 0.1069 |
| 0.0264 | 210.96 | 2637 | 0.9304 | 0.775 | 0.3897 | 1.5278 | 0.775 | 0.7418 | 0.2095 | 0.1071 |
| 0.0264 | 212.0 | 2650 | 0.9288 | 0.77 | 0.3886 | 1.5270 | 0.7700 | 0.7360 | 0.2068 | 0.1070 |
| 0.0264 | 212.96 | 2662 | 0.9310 | 0.775 | 0.3896 | 1.5316 | 0.775 | 0.7418 | 0.2135 | 0.1071 |
| 0.0264 | 214.0 | 2675 | 0.9311 | 0.775 | 0.3899 | 1.5263 | 0.775 | 0.7418 | 0.2187 | 0.1070 |
| 0.0264 | 214.96 | 2687 | 0.9315 | 0.775 | 0.3899 | 1.5256 | 0.775 | 0.7418 | 0.2123 | 0.1068 |
| 0.0264 | 216.0 | 2700 | 0.9315 | 0.77 | 0.3896 | 1.5258 | 0.7700 | 0.7360 | 0.2070 | 0.1071 |
| 0.0264 | 216.96 | 2712 | 0.9334 | 0.775 | 0.3905 | 1.5291 | 0.775 | 0.7418 | 0.2088 | 0.1071 |
| 0.0264 | 218.0 | 2725 | 0.9342 | 0.775 | 0.3908 | 1.5283 | 0.775 | 0.7418 | 0.2146 | 0.1072 |
| 0.0264 | 218.96 | 2737 | 0.9337 | 0.775 | 0.3903 | 1.5282 | 0.775 | 0.7418 | 0.2110 | 0.1070 |
| 0.0264 | 220.0 | 2750 | 0.9357 | 0.775 | 0.3913 | 1.5284 | 0.775 | 0.7418 | 0.2149 | 0.1073 |
| 0.0264 | 220.96 | 2762 | 0.9367 | 0.775 | 0.3918 | 1.5299 | 0.775 | 0.7418 | 0.2088 | 0.1072 |
| 0.0264 | 222.0 | 2775 | 0.9371 | 0.775 | 0.3916 | 1.5294 | 0.775 | 0.7418 | 0.2141 | 0.1075 |
| 0.0264 | 222.96 | 2787 | 0.9359 | 0.775 | 0.3910 | 1.5271 | 0.775 | 0.7418 | 0.2126 | 0.1067 |
| 0.0264 | 224.0 | 2800 | 0.9374 | 0.775 | 0.3918 | 1.5298 | 0.775 | 0.7418 | 0.2084 | 0.1072 |
| 0.0264 | 224.96 | 2812 | 0.9378 | 0.775 | 0.3914 | 1.5296 | 0.775 | 0.7418 | 0.2073 | 0.1072 |
| 0.0264 | 226.0 | 2825 | 0.9377 | 0.775 | 0.3916 | 1.5274 | 0.775 | 0.7418 | 0.2075 | 0.1066 |
| 0.0264 | 226.96 | 2837 | 0.9412 | 0.775 | 0.3932 | 1.5310 | 0.775 | 0.7418 | 0.2096 | 0.1077 |
| 0.0264 | 228.0 | 2850 | 0.9402 | 0.775 | 0.3923 | 1.5329 | 0.775 | 0.7418 | 0.2161 | 0.1076 |
| 0.0264 | 228.96 | 2862 | 0.9420 | 0.775 | 0.3932 | 1.5301 | 0.775 | 0.7418 | 0.2078 | 0.1074 |
| 0.0264 | 230.0 | 2875 | 0.9412 | 0.775 | 0.3925 | 1.5315 | 0.775 | 0.7418 | 0.2078 | 0.1076 |
| 0.0264 | 230.96 | 2887 | 0.9422 | 0.775 | 0.3930 | 1.5340 | 0.775 | 0.7418 | 0.2179 | 0.1077 |
| 0.0264 | 232.0 | 2900 | 0.9431 | 0.775 | 0.3933 | 1.5336 | 0.775 | 0.7418 | 0.2158 | 0.1081 |
| 0.0264 | 232.96 | 2912 | 0.9428 | 0.775 | 0.3931 | 1.5304 | 0.775 | 0.7418 | 0.2086 | 0.1075 |
| 0.0264 | 234.0 | 2925 | 0.9434 | 0.775 | 0.3935 | 1.5325 | 0.775 | 0.7418 | 0.2152 | 0.1074 |
| 0.0264 | 234.96 | 2937 | 0.9431 | 0.775 | 0.3933 | 1.5286 | 0.775 | 0.7418 | 0.2081 | 0.1070 |
| 0.0264 | 236.0 | 2950 | 0.9438 | 0.775 | 0.3935 | 1.5307 | 0.775 | 0.7418 | 0.2077 | 0.1073 |
| 0.0264 | 236.96 | 2962 | 0.9452 | 0.775 | 0.3940 | 1.5329 | 0.775 | 0.7418 | 0.2217 | 0.1074 |
| 0.0264 | 238.0 | 2975 | 0.9453 | 0.775 | 0.3939 | 1.5328 | 0.775 | 0.7418 | 0.2129 | 0.1076 |
| 0.0264 | 238.96 | 2987 | 0.9451 | 0.775 | 0.3937 | 1.5308 | 0.775 | 0.7418 | 0.2133 | 0.1073 |
| 0.0223 | 240.0 | 3000 | 0.9470 | 0.775 | 0.3947 | 1.5333 | 0.775 | 0.7418 | 0.2220 | 0.1077 |
| 0.0223 | 240.96 | 3012 | 0.9461 | 0.775 | 0.3942 | 1.5329 | 0.775 | 0.7418 | 0.2127 | 0.1072 |
| 0.0223 | 242.0 | 3025 | 0.9477 | 0.775 | 0.3949 | 1.5310 | 0.775 | 0.7418 | 0.2133 | 0.1074 |
| 0.0223 | 242.96 | 3037 | 0.9480 | 0.775 | 0.3949 | 1.5331 | 0.775 | 0.7418 | 0.2165 | 0.1073 |
| 0.0223 | 244.0 | 3050 | 0.9499 | 0.775 | 0.3955 | 1.5384 | 0.775 | 0.7418 | 0.2226 | 0.1080 |
| 0.0223 | 244.96 | 3062 | 0.9476 | 0.775 | 0.3946 | 1.5322 | 0.775 | 0.7418 | 0.2128 | 0.1069 |
| 0.0223 | 246.0 | 3075 | 0.9490 | 0.775 | 0.3953 | 1.5298 | 0.775 | 0.7418 | 0.2137 | 0.1071 |
| 0.0223 | 246.96 | 3087 | 0.9496 | 0.775 | 0.3953 | 1.5315 | 0.775 | 0.7418 | 0.2133 | 0.1071 |
| 0.0223 | 248.0 | 3100 | 0.9500 | 0.775 | 0.3955 | 1.5335 | 0.775 | 0.7418 | 0.2131 | 0.1072 |
| 0.0223 | 248.96 | 3112 | 0.9503 | 0.775 | 0.3956 | 1.5323 | 0.775 | 0.7418 | 0.2164 | 0.1072 |
| 0.0223 | 250.0 | 3125 | 0.9505 | 0.775 | 0.3955 | 1.5338 | 0.775 | 0.7418 | 0.2128 | 0.1071 |
| 0.0223 | 250.96 | 3137 | 0.9510 | 0.775 | 0.3957 | 1.5372 | 0.775 | 0.7418 | 0.2266 | 0.1072 |
| 0.0223 | 252.0 | 3150 | 0.9517 | 0.775 | 0.3960 | 1.5363 | 0.775 | 0.7418 | 0.2222 | 0.1073 |
| 0.0223 | 252.96 | 3162 | 0.9526 | 0.775 | 0.3961 | 1.5372 | 0.775 | 0.7418 | 0.2227 | 0.1080 |
| 0.0223 | 254.0 | 3175 | 0.9527 | 0.77 | 0.3963 | 1.5340 | 0.7700 | 0.7368 | 0.2174 | 0.1081 |
| 0.0223 | 254.96 | 3187 | 0.9527 | 0.775 | 0.3962 | 1.5389 | 0.775 | 0.7418 | 0.2222 | 0.1074 |
| 0.0223 | 256.0 | 3200 | 0.9528 | 0.775 | 0.3962 | 1.5347 | 0.775 | 0.7418 | 0.2258 | 0.1073 |
| 0.0223 | 256.96 | 3212 | 0.9545 | 0.775 | 0.3969 | 1.5401 | 0.775 | 0.7418 | 0.2226 | 0.1083 |
| 0.0223 | 258.0 | 3225 | 0.9540 | 0.775 | 0.3966 | 1.5369 | 0.775 | 0.7418 | 0.2224 | 0.1074 |
| 0.0223 | 258.96 | 3237 | 0.9547 | 0.775 | 0.3969 | 1.5370 | 0.775 | 0.7418 | 0.2228 | 0.1082 |
| 0.0223 | 260.0 | 3250 | 0.9549 | 0.775 | 0.3969 | 1.5381 | 0.775 | 0.7418 | 0.2226 | 0.1075 |
| 0.0223 | 260.96 | 3262 | 0.9545 | 0.775 | 0.3968 | 1.5345 | 0.775 | 0.7418 | 0.2134 | 0.1072 |
| 0.0223 | 262.0 | 3275 | 0.9550 | 0.775 | 0.3970 | 1.5362 | 0.775 | 0.7418 | 0.2145 | 0.1079 |
| 0.0223 | 262.96 | 3287 | 0.9558 | 0.775 | 0.3971 | 1.5392 | 0.775 | 0.7418 | 0.2227 | 0.1076 |
| 0.0223 | 264.0 | 3300 | 0.9557 | 0.775 | 0.3970 | 1.5383 | 0.775 | 0.7418 | 0.2226 | 0.1074 |
| 0.0223 | 264.96 | 3312 | 0.9561 | 0.775 | 0.3973 | 1.5393 | 0.775 | 0.7418 | 0.2224 | 0.1080 |
| 0.0223 | 266.0 | 3325 | 0.9563 | 0.775 | 0.3972 | 1.5387 | 0.775 | 0.7418 | 0.2224 | 0.1073 |
| 0.0223 | 266.96 | 3337 | 0.9568 | 0.775 | 0.3974 | 1.5407 | 0.775 | 0.7418 | 0.2225 | 0.1082 |
| 0.0223 | 268.0 | 3350 | 0.9567 | 0.775 | 0.3973 | 1.5373 | 0.775 | 0.7418 | 0.2259 | 0.1080 |
| 0.0223 | 268.96 | 3362 | 0.9566 | 0.775 | 0.3973 | 1.5371 | 0.775 | 0.7418 | 0.2225 | 0.1080 |
| 0.0223 | 270.0 | 3375 | 0.9574 | 0.775 | 0.3976 | 1.5403 | 0.775 | 0.7418 | 0.2227 | 0.1075 |
| 0.0223 | 270.96 | 3387 | 0.9568 | 0.775 | 0.3974 | 1.5363 | 0.775 | 0.7418 | 0.2225 | 0.1072 |
| 0.0223 | 272.0 | 3400 | 0.9580 | 0.775 | 0.3978 | 1.5465 | 0.775 | 0.7418 | 0.2241 | 0.1081 |
| 0.0223 | 272.96 | 3412 | 0.9577 | 0.775 | 0.3977 | 1.5383 | 0.775 | 0.7418 | 0.2228 | 0.1074 |
| 0.0223 | 274.0 | 3425 | 0.9577 | 0.775 | 0.3976 | 1.5409 | 0.775 | 0.7418 | 0.2225 | 0.1080 |
| 0.0223 | 274.96 | 3437 | 0.9582 | 0.775 | 0.3978 | 1.5409 | 0.775 | 0.7418 | 0.2226 | 0.1075 |
| 0.0223 | 276.0 | 3450 | 0.9581 | 0.775 | 0.3978 | 1.5412 | 0.775 | 0.7418 | 0.2225 | 0.1082 |
| 0.0223 | 276.96 | 3462 | 0.9582 | 0.775 | 0.3978 | 1.5367 | 0.775 | 0.7418 | 0.2220 | 0.1073 |
| 0.0223 | 278.0 | 3475 | 0.9587 | 0.775 | 0.3980 | 1.5422 | 0.775 | 0.7418 | 0.2244 | 0.1082 |
| 0.0223 | 278.96 | 3487 | 0.9588 | 0.775 | 0.3980 | 1.5478 | 0.775 | 0.7418 | 0.2242 | 0.1082 |
| 0.0202 | 280.0 | 3500 | 0.9586 | 0.775 | 0.3980 | 1.5381 | 0.775 | 0.7418 | 0.2219 | 0.1081 |
| 0.0202 | 280.96 | 3512 | 0.9592 | 0.775 | 0.3981 | 1.5474 | 0.775 | 0.7418 | 0.2243 | 0.1082 |
| 0.0202 | 282.0 | 3525 | 0.9588 | 0.775 | 0.3980 | 1.5396 | 0.775 | 0.7418 | 0.2227 | 0.1080 |
| 0.0202 | 282.96 | 3537 | 0.9589 | 0.775 | 0.3980 | 1.5401 | 0.775 | 0.7418 | 0.2218 | 0.1074 |
| 0.0202 | 284.0 | 3550 | 0.9593 | 0.775 | 0.3982 | 1.5441 | 0.775 | 0.7418 | 0.2243 | 0.1083 |
| 0.0202 | 284.96 | 3562 | 0.9591 | 0.775 | 0.3981 | 1.5412 | 0.775 | 0.7418 | 0.2227 | 0.1082 |
| 0.0202 | 286.0 | 3575 | 0.9592 | 0.775 | 0.3981 | 1.5417 | 0.775 | 0.7418 | 0.2227 | 0.1082 |
| 0.0202 | 286.96 | 3587 | 0.9592 | 0.775 | 0.3981 | 1.5416 | 0.775 | 0.7418 | 0.2227 | 0.1082 |
| 0.0202 | 288.0 | 3600 | 0.9592 | 0.775 | 0.3981 | 1.5416 | 0.775 | 0.7418 | 0.2227 | 0.1082 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
| mort1k/dqn-SpaceInvadersNoFrameskip-v4 | mort1k | 2023-07-13T14:09:53Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-13T14:09:10Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 762.50 +/- 250.23
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mort1k -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mort1k -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mort1k
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
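The trained agent can also be loaded directly with SB3, outside the Zoo scripts. A minimal sketch, assuming the checkpoint follows the Zoo's usual `<algo>-<env>.zip` naming convention:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed checkpoint filename, per the RL Zoo naming convention
checkpoint = load_from_hub(
    "mort1k/dqn-SpaceInvadersNoFrameskip-v4",
    "dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```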
|
| jordiclive/scaled-llama-7b-lora-16k-rp2 | jordiclive | 2023-07-13T14:05:35Z | 10 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "custom_code", "dataset:togethercomputer/RedPajama-Data-1T-Sample", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-12T12:39:35Z |
---
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
---
# Linear Scaled RoPE LLaMA LoRA 16k
```python
import torch
from transformers import LlamaTokenizerFast, AutoModelForCausalLM
model_name = "jordiclive/scaled-llama-7b-lora-16k-rp2"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
tokenizer = LlamaTokenizerFast.from_pretrained(model_name)
tokenizer.model_max_length = 16384
tokenizer.pad_token = tokenizer.eos_token
model.max_sequence_length = tokenizer.model_max_length
```
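As a quick smoke test, a minimal generation sketch continuing from the block above (assumes a CUDA device; the prompt is a placeholder):

```python
# Placeholder prompt for a quick smoke test
model = model.to("cuda")
inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```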
- Base model `huggyllama/llama-7b`, trained on packed 16k-token sequences of the RedPajama dataset for 1 epoch.
- Merged model. If you require the LoRA parameters/config, they are in the `adapter` folder.
|
| onlywone/layoutlm-funsd | onlywone | 2023-07-13T13:58:09Z | 77 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "layoutlm", "token-classification", "generated_from_trainer", "dataset:funsd", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-07-13T13:48:57Z |
---
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6940
- Answer: {'precision': 0.721978021978022, 'recall': 0.8121137206427689, 'f1': 0.7643979057591623, 'number': 809}
- Header: {'precision': 0.2662337662337662, 'recall': 0.3445378151260504, 'f1': 0.30036630036630035, 'number': 119}
- Question: {'precision': 0.7816091954022989, 'recall': 0.8300469483568075, 'f1': 0.8051001821493625, 'number': 1065}
- Overall Precision: 0.7207
- Overall Recall: 0.7938
- Overall F1: 0.7555
- Overall Accuracy: 0.8073
## Model description
More information needed
## Intended uses & limitations
More information needed
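In the meantime, a minimal loading sketch; note that LayoutLM additionally expects word bounding boxes (`bbox`) alongside token ids at inference time:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Loads the fine-tuned checkpoint; real inference also needs bbox inputs
tokenizer = AutoTokenizer.from_pretrained("onlywone/layoutlm-funsd")
model = AutoModelForTokenClassification.from_pretrained("onlywone/layoutlm-funsd")
```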
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.755 | 1.0 | 10 | 1.5815 | {'precision': 0.026919242273180457, 'recall': 0.03337453646477132, 'f1': 0.02980132450331126, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.20780487804878048, 'recall': 0.2, 'f1': 0.20382775119617225, 'number': 1065} | 0.1183 | 0.1204 | 0.1194 | 0.3885 |
| 1.4375 | 2.0 | 20 | 1.2088 | {'precision': 0.28227848101265823, 'recall': 0.27564894932014833, 'f1': 0.2789243277048155, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4782964782964783, 'recall': 0.5483568075117371, 'f1': 0.5109361329833771, 'number': 1065} | 0.4013 | 0.4049 | 0.4031 | 0.6223 |
| 1.0595 | 3.0 | 30 | 0.9379 | {'precision': 0.503954802259887, 'recall': 0.5512978986402967, 'f1': 0.526564344746163, 'number': 809} | {'precision': 0.0425531914893617, 'recall': 0.01680672268907563, 'f1': 0.024096385542168672, 'number': 119} | {'precision': 0.6126205083260298, 'recall': 0.6563380281690141, 'f1': 0.6337262012692656, 'number': 1065} | 0.5533 | 0.5755 | 0.5642 | 0.7194 |
| 0.8139 | 4.0 | 40 | 0.7735 | {'precision': 0.6280041797283177, 'recall': 0.7428924598269468, 'f1': 0.680634201585504, 'number': 809} | {'precision': 0.13432835820895522, 'recall': 0.07563025210084033, 'f1': 0.09677419354838708, 'number': 119} | {'precision': 0.6600688468158348, 'recall': 0.72018779342723, 'f1': 0.688819039066008, 'number': 1065} | 0.6299 | 0.6909 | 0.6590 | 0.7636 |
| 0.664 | 5.0 | 50 | 0.7245 | {'precision': 0.6519453207150369, 'recall': 0.7663782447466008, 'f1': 0.7045454545454546, 'number': 809} | {'precision': 0.24719101123595505, 'recall': 0.18487394957983194, 'f1': 0.21153846153846156, 'number': 119} | {'precision': 0.7090909090909091, 'recall': 0.7690140845070422, 'f1': 0.7378378378378379, 'number': 1065} | 0.6656 | 0.7331 | 0.6977 | 0.7757 |
| 0.5505 | 6.0 | 60 | 0.6956 | {'precision': 0.6834061135371179, 'recall': 0.7737948084054388, 'f1': 0.7257971014492753, 'number': 809} | {'precision': 0.28205128205128205, 'recall': 0.18487394957983194, 'f1': 0.2233502538071066, 'number': 119} | {'precision': 0.723421926910299, 'recall': 0.8178403755868544, 'f1': 0.7677390921110622, 'number': 1065} | 0.6911 | 0.7622 | 0.7249 | 0.7888 |
| 0.4759 | 7.0 | 70 | 0.6712 | {'precision': 0.6844396082698585, 'recall': 0.7775030902348579, 'f1': 0.7280092592592592, 'number': 809} | {'precision': 0.2727272727272727, 'recall': 0.2773109243697479, 'f1': 0.27499999999999997, 'number': 119} | {'precision': 0.7472527472527473, 'recall': 0.8300469483568075, 'f1': 0.786476868327402, 'number': 1065} | 0.6955 | 0.7757 | 0.7334 | 0.7975 |
| 0.4276 | 8.0 | 80 | 0.6765 | {'precision': 0.6889375684556407, 'recall': 0.7775030902348579, 'f1': 0.7305458768873403, 'number': 809} | {'precision': 0.28205128205128205, 'recall': 0.2773109243697479, 'f1': 0.2796610169491525, 'number': 119} | {'precision': 0.7527333894028595, 'recall': 0.8403755868544601, 'f1': 0.7941437444543035, 'number': 1065} | 0.7017 | 0.7812 | 0.7393 | 0.8021 |
| 0.3788 | 9.0 | 90 | 0.6653 | {'precision': 0.7081930415263749, 'recall': 0.7799752781211372, 'f1': 0.7423529411764707, 'number': 809} | {'precision': 0.2647058823529412, 'recall': 0.3025210084033613, 'f1': 0.2823529411764706, 'number': 119} | {'precision': 0.7667238421955404, 'recall': 0.8394366197183099, 'f1': 0.8014343343792021, 'number': 1065} | 0.7118 | 0.7832 | 0.7458 | 0.8049 |
| 0.3466 | 10.0 | 100 | 0.6838 | {'precision': 0.7005464480874317, 'recall': 0.792336217552534, 'f1': 0.7436194895591649, 'number': 809} | {'precision': 0.2706766917293233, 'recall': 0.3025210084033613, 'f1': 0.28571428571428564, 'number': 119} | {'precision': 0.7728055077452668, 'recall': 0.8431924882629108, 'f1': 0.8064660978895375, 'number': 1065} | 0.7127 | 0.7903 | 0.7495 | 0.8047 |
| 0.3142 | 11.0 | 110 | 0.6795 | {'precision': 0.6997816593886463, 'recall': 0.792336217552534, 'f1': 0.7431884057971013, 'number': 809} | {'precision': 0.2857142857142857, 'recall': 0.3025210084033613, 'f1': 0.2938775510204082, 'number': 119} | {'precision': 0.7994628469113697, 'recall': 0.8384976525821596, 'f1': 0.8185151237396883, 'number': 1065} | 0.7272 | 0.7878 | 0.7563 | 0.8067 |
| 0.2978 | 12.0 | 120 | 0.6922 | {'precision': 0.6927194860813705, 'recall': 0.799752781211372, 'f1': 0.7423981640849111, 'number': 809} | {'precision': 0.2585034013605442, 'recall': 0.31932773109243695, 'f1': 0.2857142857142857, 'number': 119} | {'precision': 0.7768090671316478, 'recall': 0.8366197183098592, 'f1': 0.8056057866184448, 'number': 1065} | 0.7074 | 0.7908 | 0.7467 | 0.8026 |
| 0.2824 | 13.0 | 130 | 0.6960 | {'precision': 0.7184357541899441, 'recall': 0.7948084054388134, 'f1': 0.754694835680751, 'number': 809} | {'precision': 0.2611464968152866, 'recall': 0.3445378151260504, 'f1': 0.2971014492753623, 'number': 119} | {'precision': 0.7757255936675461, 'recall': 0.828169014084507, 'f1': 0.8010899182561309, 'number': 1065} | 0.7154 | 0.7858 | 0.7489 | 0.8045 |
| 0.2696 | 14.0 | 140 | 0.6917 | {'precision': 0.7164667393675027, 'recall': 0.8121137206427689, 'f1': 0.7612977983777521, 'number': 809} | {'precision': 0.2708333333333333, 'recall': 0.3277310924369748, 'f1': 0.2965779467680608, 'number': 119} | {'precision': 0.7833775419982316, 'recall': 0.831924882629108, 'f1': 0.8069216757741348, 'number': 1065} | 0.7217 | 0.7938 | 0.7560 | 0.8067 |
| 0.2674 | 15.0 | 150 | 0.6940 | {'precision': 0.721978021978022, 'recall': 0.8121137206427689, 'f1': 0.7643979057591623, 'number': 809} | {'precision': 0.2662337662337662, 'recall': 0.3445378151260504, 'f1': 0.30036630036630035, 'number': 119} | {'precision': 0.7816091954022989, 'recall': 0.8300469483568075, 'f1': 0.8051001821493625, 'number': 1065} | 0.7207 | 0.7938 | 0.7555 | 0.8073 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
| peft-internal-testing/tiny_OPTForSequenceClassification-lora | peft-internal-testing | 2023-07-13T13:48:21Z | 25,195 | 0 | peft | ["peft", "region:us"] | null | 2023-07-13T13:48:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
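A minimal adapter-loading sketch, assuming the standard PEFT workflow of attaching the adapter to its recorded base model:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification

adapter_id = "peft-internal-testing/tiny_OPTForSequenceClassification-lora"
config = PeftConfig.from_pretrained(adapter_id)
# Attach the LoRA adapter to the base model recorded in its config
base = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```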
|
| Yntec/DucHaiten-Retro-Diffusers | Yntec | 2023-07-13T13:39:06Z | 1,798 | 4 | diffusers | ["diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Retro", "DucHaiten", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-07-13T13:02:56Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Retro
- DucHaiten
---
# DucHaiten Retro
I don't know about you, but in my opinion this is the best retro model DucHaiten has ever created. It's sad to see it sitting at 0 downloads on Hugging Face, so here's a Diffusers version you can use with Hugging Face's pipeline!
If you like their content, support them at:
https://linktr.ee/Duc_Haiten
Original page:
https://civitai.com/models/103966?modelVersionId=111392
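A minimal text-to-image sketch with Diffusers (assumes a CUDA device; the prompt is only an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/DucHaiten-Retro-Diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
# Example prompt only; tune to taste
image = pipe("retro 1950s diner, vintage illustration").images[0]
image.save("retro.png")
```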
|
| orya16215/ppo-LunarLander-v2 | orya16215 | 2023-07-13T13:27:31Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-13T13:27:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.86 +/- 15.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust if the repository uses a different name
checkpoint = load_from_hub("orya16215/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
| aga3134/my_awesome_eli5_clm-model | aga3134 | 2023-07-13T13:13:34Z | 202 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-13T04:54:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7230
## Model description
More information needed
## Intended uses & limitations
More information needed
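In the meantime, a minimal generation sketch (the prompt is a placeholder):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="aga3134/my_awesome_eli5_clm-model")
# Placeholder ELI5-style prompt
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=50))
```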
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8799 | 1.0 | 1132 | 3.7450 |
| 3.7747 | 2.0 | 2264 | 3.7267 |
| 3.7347 | 3.0 | 3396 | 3.7230 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
| T-Systems-onsite/cross-en-pt-roberta-sentence-transformer | T-Systems-onsite | 2023-07-13T13:09:58Z | 20 | 0 | transformers | ["transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "pt", "license:mit", "endpoints_compatible", "region:us"] | feature-extraction | 2022-03-02T23:29:05Z |
---
language:
- en
- pt
license: mit
tags:
- sentence_embedding
---
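A hedged usage sketch for the bilingual sentence embeddings, assuming the checkpoint loads with `sentence-transformers`, as T-Systems' sibling cross-language models do:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("T-Systems-onsite/cross-en-pt-roberta-sentence-transformer")
# Example English/Portuguese sentence pair
embeddings = model.encode(["This is an example sentence.", "Esta é uma frase de exemplo."])
print(embeddings.shape)
```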
|
| peft-internal-testing/tiny_OPTForQuestionAnswering-lora | peft-internal-testing | 2023-07-13T13:09:34Z | 25,146 | 0 | peft | ["peft", "region:us"] | null | 2023-07-13T13:09:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
| zohaib99k/QnA_model_training | zohaib99k | 2023-07-13T13:04:41Z | 121 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-11T04:12:35Z |
---
license: other
---
LLaMA-13B converted to work with Transformers/HuggingFace. This is under a special license, please see the LICENSE file for details.
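A minimal loading sketch for the converted checkpoint, assuming it follows the standard `transformers` LLaMA layout:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("zohaib99k/QnA_model_training")
model = LlamaForCausalLM.from_pretrained("zohaib99k/QnA_model_training")
```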
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
| Number of parameters | dimension | n heads | n layers | Learn rate | Batch size | n tokens |
|---|---|---|---|---|---|---|
| 7B | 4096 | 32 | 32 | 3.0E-04 | 4M | 1T |
| 13B | 5120 | 40 | 40 | 3.0E-04 | 4M | 1T |
| 33B | 6656 | 52 | 60 | 1.5E-04 | 4M | 1.4T |
| 65B | 8192 | 64 | 80 | 1.5E-04 | 4M | 1.4T |

*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
| Number of parameters | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | COPA |
|---|---|---|---|---|---|---|---|---|---|
| 7B | 76.5 | 79.8 | 48.9 | 76.1 | 70.1 | 76.7 | 47.6 | 57.2 | 93 |
| 13B | 78.1 | 80.1 | 50.4 | 79.2 | 73 | 78.1 | 52.7 | 56.4 | 94 |
| 33B | 83.1 | 82.3 | 50.4 | 82.8 | 76 | 81.4 | 57.8 | 58.6 | 92 |
| 65B | 85.3 | 82.8 | 52.3 | 84.2 | 77 | 81.5 | 56 | 60.2 | 94 |

*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
| mistdmar/sd-class-butterflies-32 | mistdmar | 2023-07-13T12:51:55Z | 30 | 0 | diffusers | ["diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us"] | unconditional-image-generation | 2023-07-13T12:51:34Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('mistdmar/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
| Tanor/BERTovoSENTPOS0 | Tanor | 2023-07-13T12:45:49Z | 3 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-07-04T11:29:42Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: BERTovoSENTPOS0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTovoSENTPOS0
This model is a fine-tuned version of [Tanor/BERTicovoSENTPOS0](https://huggingface.co/Tanor/BERTicovoSENTPOS0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0338
- F1: 0.4211
## Model description
More information needed
## Intended uses & limitations
More information needed
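Until then, a minimal inference sketch (the example input is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Tanor/BERTovoSENTPOS0")
print(classifier("Ovo je odlična knjiga."))  # placeholder example input
```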
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 52 | 0.0228 | 0.0 |
| No log | 2.0 | 105 | 0.0180 | 0.4286 |
| No log | 2.99 | 157 | 0.0180 | 0.5714 |
| No log | 4.0 | 210 | 0.0230 | 0.6 |
| No log | 4.99 | 262 | 0.0338 | 0.4211 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
EquinoxElahin/q-FrozenLake-v1-4x4-noSlippery
|
EquinoxElahin
| 2023-07-13T12:42:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-27T14:50:32Z |
# ANAIS
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://gitlab-interne.dev.klee.lan.net/datateam/anais.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://gitlab-interne.dev.klee.lan.net/datateam/anais/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
|
LsTam/lora-qacr
|
LsTam
| 2023-07-13T12:42:12Z | 0 | 0 | null |
[
"llama",
"fr",
"license:other",
"region:us"
] | null | 2023-07-10T08:53:59Z |
---
widget:
- text: "Jens Peter Hansen kommer fra Danmark"
language:
- fr
tags:
- llama
license: other
base_model:
- decapoda-research/llama-7b-hf
---
# Model Card: Llama-7b with LoRA Fine-tuning on QACR data
## Model Overview
- **Model Name**: Llama-7b
- **Model Architecture**: Transformer-based Language Model
- **Fine-tuning Method**: LoRA
- **Training Datasets**:
- Educational Question Generation Dataset (described in the dataset chart)
- Alpaca GPT-4 french dataset (chat instruction task)
- Dolly_fr dataset (chat instruction task)
## Model Details
- **Base Model**: decapoda-research/llama-7b-hf
- **Fine-tuning Approach**: LoRA (Low-Rank Adaptation), which freezes the pretrained base weights and trains small low-rank update matrices, making task-specific fine-tuning lightweight.
- **Training Objective**: The model is trained to generate relevant and useful questions based on educational texts and to handle chat instruction tasks from the Alpaca GPT-4 and Dolly datasets.
- **Training Procedure**: The base Llama-7b model is first pretrained on a large corpus to learn general language patterns and representations. It is then fine-tuned using a combination of the aforementioned datasets to specialize in educational question generation and chat instruction tasks.
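A minimal sketch of loading the adapter with PEFT (illustrative; it assumes the adapter weights in this repository follow the standard PEFT layout, and the prompt is a placeholder):
```python
# Illustrative loading sketch; prompt and generation settings are assumptions.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = PeftModel.from_pretrained(base, "LsTam/lora-qacr")  # attach the LoRA weights

prompt = "Génère une question à partir du texte suivant : ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```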
## Intended Use
- **Primary Task**: Question generation for educational purposes and chat instruction tasks.
- **Potential Use Cases**:
- Automated question generation for educational platforms and tutoring systems.
- Chat-based instruction and assistance in various domains.
|
atiiisham988/whisper-small-dv
|
atiiisham988
| 2023-07-13T12:41:14Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-13T11:13:03Z |
---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - atiiisham
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.509754146816427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - atiiisham
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Wer Ortho: 62.8665
- Wer: 13.5098
## Model description
More information needed
## Intended uses & limitations
More information needed
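Pending details from the authors, a minimal transcription sketch (illustrative; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="atiiisham988/whisper-small-dv")
print(asr("sample.wav")["text"])  # transcribe a Dhivehi audio clip
```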
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1243 | 1.63 | 500 | 0.1709 | 62.8665 | 13.5098 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dsfsi/nr-en-m2m100-gov
|
dsfsi
| 2023-07-13T12:40:43Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"m2m100",
"translation",
"africanlp",
"african",
"ndebele",
"nr",
"en",
"arxiv:2303.03750",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-22T10:22:19Z |
---
license: cc-by-4.0
language:
- nr
- en
pipeline_tag: text2text-generation
tags:
- m2m100
- translation
- africanlp
- african
- ndebele
---
# [nr-en] South Ndebele to English Translation Model based on M2M100 and The South African Gov-ZA multilingual corpus
Model created from South Ndebele to English aligned sentences from [The South African Gov-ZA multilingual corpus](https://github.com/dsfsi/gov-za-multilingual)
The dataset contains cabinet statements from the South African government, maintained by the Government Communication and Information System (GCIS). Data was scraped from the government's website: https://www.gov.za/cabinet-statements
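A minimal translation sketch, assuming the fine-tuned checkpoint keeps M2M100's tokenizer API; the `"nr"` source-language code is an assumption, so check the repo's tokenizer config:
```python
# Illustrative nr -> en translation sketch.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("dsfsi/nr-en-m2m100-gov")
model = M2M100ForConditionalGeneration.from_pretrained("dsfsi/nr-en-m2m100-gov")

tokenizer.src_lang = "nr"  # assumption: the fine-tuned vocab exposes this code
inputs = tokenizer("<South Ndebele sentence here>", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```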
## Authors
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Matimba Shingange
- Richard Lastrucci
- Isheanesu Joseph Dzingirai
- Jenalea Rajab
## BibTeX entry and citation info
```
@inproceedings{lastrucci-etal-2023-preparing,
title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.rail-1.3",
pages = "18--25"
}
```
[Paper - Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/abs/2303.03750)
|
RushTurtle/crnn_vgg16_bn_20230713-111621
|
RushTurtle
| 2023-07-13T12:33:27Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T12:33:19Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
"arch": "crnn_vgg16_bn",
"train_path": "/tmp/dataset/train3_1100/",
"val_path": "/tmp/dataset/val3_1100/",
"train_samples": 1000,
"val_samples": 20,
"font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
"min_chars": 1,
"max_chars": 12,
"name": null,
"epochs": 600,
"batch_size": 64,
"device": 0,
"input_size": 32,
"lr": 0.001,
"weight_decay": 0,
"workers": 16,
"resume": null,
"vocab": "french",
"test_only": false,
"show_samples": false,
"wb": false,
"push_to_hub": true,
"pretrained": false,
"sched": "cosine",
"amp": false,
"find_lr": false
}
```
|
nmkd/stable-diffusion-1.5-onnx-fp16
|
nmkd
| 2023-07-13T12:32:40Z | 0 | 2 | null |
[
"onnx",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-13T12:07:44Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
import torch
from diffusers import StableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases, and examples in JAX, follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion).
### Original GitHub Repository
1. Download the weights [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt)
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
      ```bibtex
      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }
      ```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the shape check after this list)
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
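A quick shape check of the f=8 mapping described above (an illustrative sketch using the public v1-5 VAE, not training code):
```python
# Verify the autoencoder's 8x spatial downsampling on a dummy batch.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
image = torch.randn(1, 3, 512, 512)                  # 512x512 RGB batch (channels-first)
latents = vae.encode(image).latent_dist.sample()
print(latents.shape)                                 # torch.Size([1, 4, 64, 64]): H/8 x W/8 x 4
```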
We currently provide five checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5): Resumed from `stable-diffusion-v1-2`. 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
We estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region listed below were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
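A back-of-the-envelope check of that figure, with assumed per-GPU power draw and grid intensity:
```python
hours = 150_000               # reported GPU-hours
power_kw = 0.25               # assumption: ~250 W per A100 PCIe 40GB
kg_co2_per_kwh = 0.3          # assumption: rough US-east grid carbon intensity
print(hours * power_kw * kg_co2_per_kwh)  # 11250.0 kg CO2 eq., matching the card
```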
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
Ne01ynx/GXA-temp
|
Ne01ynx
| 2023-07-13T12:31:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T12:25:06Z |
<p><strong><font size="5">Information</font></strong></p>
GPT4-X-Alpaca 30B quantized to 4-bit with GPTQ, for use in Oobabooga's Text Generation Webui and KoboldAI.
<p>There are two quantized versions: one uses the <i>--true-sequential</i> and <i>--act-order</i> optimizations, the other uses <i>--true-sequential</i> and <i>--groupsize 128</i>.</p>
This was made using Chansung's GPT4-Alpaca Lora: https://huggingface.co/chansung/gpt4-alpaca-lora-30b
<p><strong>Training Parameters</strong></p>
<ul><li>num_epochs=10</li><li>cutoff_len=512</li><li>group_by_length</li><li>lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'</li><li>lora_r=16</li><li>micro_batch_size=8</li></ul>
<p><strong><font size="5">Benchmarks</font></strong></p>
<p><strong><font size="4">--true-sequential --act-order</font></strong></p>
<strong>Wikitext2</strong>: 4.481280326843262
<strong>Ptb-New</strong>: 8.539161682128906
<strong>C4-New</strong>: 6.451964855194092
<strong>Note</strong>: This version does not use <i>--groupsize 128</i>, so its perplexity evaluations are slightly higher. However, it allows fitting the whole model at full context using only 24GB VRAM.
<p><strong><font size="4">--true-sequential --groupsize 128</font></strong></p>
<strong>Wikitext2</strong>: 4.285132884979248
<strong>Ptb-New</strong>: 8.34856128692627
<strong>C4-New</strong>: 6.292652130126953
<strong>Note</strong>: This version uses <i>--groupsize 128</i>, resulting in better evaluations. However, it consumes more VRAM.
|
phatjk/bloomz-lora-vi-QA-NLLB-viquad_ver2
|
phatjk
| 2023-07-13T12:24:58Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T12:24:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
jordyvl/vit-tiny_tobacco3482_dualsimkd_
|
jordyvl
| 2023-07-13T12:19:30Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T10:55:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_tobacco3482_dualsimkd_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_tobacco3482_dualsimkd_
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1401
- Accuracy: 0.385
- Brier Loss: 0.8709
- Nll: 8.8462
- F1 Micro: 0.3850
- F1 Macro: 0.1979
- Ece: 0.3606
- Aurc: 0.3874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 100 | 0.5117 | 0.04 | 0.9009 | 19.1664 | 0.04 | 0.0077 | 0.1344 | 0.9445 |
| No log | 2.0 | 200 | 0.3168 | 0.05 | 0.8997 | 15.0313 | 0.0500 | 0.0095 | 0.1344 | 0.8364 |
| No log | 3.0 | 300 | 0.2703 | 0.18 | 0.8978 | 9.6860 | 0.18 | 0.0305 | 0.2180 | 0.7731 |
| No log | 4.0 | 400 | 0.2266 | 0.18 | 0.8952 | 12.0957 | 0.18 | 0.0305 | 0.2223 | 0.7993 |
| 1.1219 | 5.0 | 500 | 0.1687 | 0.18 | 0.8951 | 12.7136 | 0.18 | 0.0305 | 0.2215 | 0.7713 |
| 1.1219 | 6.0 | 600 | 0.1331 | 0.165 | 0.8956 | 12.6737 | 0.165 | 0.0284 | 0.2044 | 0.7829 |
| 1.1219 | 7.0 | 700 | 0.1139 | 0.18 | 0.8960 | 12.6380 | 0.18 | 0.0305 | 0.2283 | 0.7875 |
| 1.1219 | 8.0 | 800 | 0.1143 | 0.18 | 0.8963 | 12.6385 | 0.18 | 0.0306 | 0.2183 | 0.7703 |
| 1.1219 | 9.0 | 900 | 0.1246 | 0.18 | 0.8966 | 12.5389 | 0.18 | 0.0305 | 0.2223 | 0.7726 |
| 0.0694 | 10.0 | 1000 | 0.1262 | 0.18 | 0.8961 | 12.6316 | 0.18 | 0.0305 | 0.2271 | 0.7894 |
| 0.0694 | 11.0 | 1100 | 0.1186 | 0.155 | 0.8961 | 12.6309 | 0.155 | 0.0268 | 0.2169 | 0.6418 |
| 0.0694 | 12.0 | 1200 | 0.1290 | 0.18 | 0.8960 | 12.6360 | 0.18 | 0.0305 | 0.2272 | 0.8014 |
| 0.0694 | 13.0 | 1300 | 0.1202 | 0.18 | 0.8959 | 12.6644 | 0.18 | 0.0305 | 0.2274 | 0.7910 |
| 0.0694 | 14.0 | 1400 | 0.1341 | 0.18 | 0.8960 | 12.6667 | 0.18 | 0.0305 | 0.2273 | 0.7916 |
| 0.0505 | 15.0 | 1500 | 0.1234 | 0.18 | 0.8961 | 12.6653 | 0.18 | 0.0305 | 0.2261 | 0.7819 |
| 0.0505 | 16.0 | 1600 | 0.1375 | 0.18 | 0.8960 | 12.6951 | 0.18 | 0.0305 | 0.2283 | 0.7929 |
| 0.0505 | 17.0 | 1700 | 0.1249 | 0.18 | 0.8959 | 12.7041 | 0.18 | 0.0305 | 0.2262 | 0.7820 |
| 0.0505 | 18.0 | 1800 | 0.1263 | 0.18 | 0.8964 | 12.6096 | 0.18 | 0.0305 | 0.2228 | 0.7900 |
| 0.0505 | 19.0 | 1900 | 0.1243 | 0.18 | 0.8961 | 12.6667 | 0.18 | 0.0305 | 0.2229 | 0.7896 |
| 0.0483 | 20.0 | 2000 | 0.1246 | 0.18 | 0.8960 | 12.6285 | 0.18 | 0.0305 | 0.2172 | 0.7913 |
| 0.0483 | 21.0 | 2100 | 0.1218 | 0.18 | 0.8961 | 12.6375 | 0.18 | 0.0305 | 0.2250 | 0.8003 |
| 0.0483 | 22.0 | 2200 | 0.1228 | 0.18 | 0.8964 | 12.5765 | 0.18 | 0.0305 | 0.2258 | 0.7938 |
| 0.0483 | 23.0 | 2300 | 0.1270 | 0.18 | 0.8963 | 12.6332 | 0.18 | 0.0305 | 0.2239 | 0.8055 |
| 0.0483 | 24.0 | 2400 | 0.1303 | 0.18 | 0.8963 | 12.5914 | 0.18 | 0.0305 | 0.2270 | 0.8006 |
| 0.0484 | 25.0 | 2500 | 0.1234 | 0.18 | 0.8960 | 12.6429 | 0.18 | 0.0305 | 0.2208 | 0.7990 |
| 0.0484 | 26.0 | 2600 | 0.1313 | 0.18 | 0.8965 | 12.5721 | 0.18 | 0.0305 | 0.2205 | 0.8069 |
| 0.0484 | 27.0 | 2700 | 0.1314 | 0.18 | 0.8963 | 12.5982 | 0.18 | 0.0305 | 0.2247 | 0.8110 |
| 0.0484 | 28.0 | 2800 | 0.1326 | 0.18 | 0.8962 | 12.6539 | 0.18 | 0.0305 | 0.2143 | 0.8083 |
| 0.0484 | 29.0 | 2900 | 0.1337 | 0.18 | 0.8964 | 12.5814 | 0.18 | 0.0305 | 0.2225 | 0.8106 |
| 0.0473 | 30.0 | 3000 | 0.1369 | 0.18 | 0.8962 | 12.6021 | 0.18 | 0.0305 | 0.2258 | 0.8095 |
| 0.0473 | 31.0 | 3100 | 0.1295 | 0.18 | 0.8958 | 12.6587 | 0.18 | 0.0305 | 0.2273 | 0.8104 |
| 0.0473 | 32.0 | 3200 | 0.1343 | 0.18 | 0.8959 | 12.6740 | 0.18 | 0.0305 | 0.2220 | 0.8119 |
| 0.0473 | 33.0 | 3300 | 0.1359 | 0.18 | 0.8960 | 12.6790 | 0.18 | 0.0305 | 0.2273 | 0.8134 |
| 0.0473 | 34.0 | 3400 | 0.1367 | 0.18 | 0.8961 | 12.6336 | 0.18 | 0.0305 | 0.2228 | 0.8159 |
| 0.0476 | 35.0 | 3500 | 0.1378 | 0.18 | 0.8963 | 12.6119 | 0.18 | 0.0305 | 0.2270 | 0.8172 |
| 0.0476 | 36.0 | 3600 | 0.1286 | 0.18 | 0.8961 | 12.6340 | 0.18 | 0.0305 | 0.2218 | 0.8148 |
| 0.0476 | 37.0 | 3700 | 0.1333 | 0.18 | 0.8960 | 12.6328 | 0.18 | 0.0305 | 0.2207 | 0.8164 |
| 0.0476 | 38.0 | 3800 | 0.1328 | 0.18 | 0.8963 | 12.6294 | 0.18 | 0.0305 | 0.2196 | 0.8180 |
| 0.0476 | 39.0 | 3900 | 0.1344 | 0.18 | 0.8961 | 12.6417 | 0.18 | 0.0305 | 0.2207 | 0.8209 |
| 0.0474 | 40.0 | 4000 | 0.1362 | 0.18 | 0.8959 | 12.6775 | 0.18 | 0.0305 | 0.2187 | 0.8198 |
| 0.0474 | 41.0 | 4100 | 0.1340 | 0.18 | 0.8961 | 12.6746 | 0.18 | 0.0305 | 0.2249 | 0.8215 |
| 0.0474 | 42.0 | 4200 | 0.1308 | 0.18 | 0.8958 | 12.6621 | 0.18 | 0.0305 | 0.2208 | 0.8215 |
| 0.0474 | 43.0 | 4300 | 0.1372 | 0.18 | 0.8960 | 12.6133 | 0.18 | 0.0305 | 0.2249 | 0.8204 |
| 0.0474 | 44.0 | 4400 | 0.1436 | 0.18 | 0.8963 | 12.6014 | 0.18 | 0.0305 | 0.2280 | 0.8201 |
| 0.0472 | 45.0 | 4500 | 0.1374 | 0.18 | 0.8960 | 12.6316 | 0.18 | 0.0305 | 0.2228 | 0.8193 |
| 0.0472 | 46.0 | 4600 | 0.1261 | 0.18 | 0.8957 | 12.6840 | 0.18 | 0.0305 | 0.2251 | 0.8220 |
| 0.0472 | 47.0 | 4700 | 0.1340 | 0.18 | 0.8956 | 12.6704 | 0.18 | 0.0305 | 0.2251 | 0.8221 |
| 0.0472 | 48.0 | 4800 | 0.1320 | 0.18 | 0.8959 | 12.6111 | 0.18 | 0.0305 | 0.2227 | 0.8203 |
| 0.0472 | 49.0 | 4900 | 0.1336 | 0.18 | 0.8956 | 12.6838 | 0.18 | 0.0305 | 0.2294 | 0.8209 |
| 0.0474 | 50.0 | 5000 | 0.1342 | 0.18 | 0.8959 | 12.3426 | 0.18 | 0.0305 | 0.2292 | 0.8218 |
| 0.0474 | 51.0 | 5100 | 0.1362 | 0.18 | 0.8957 | 12.3611 | 0.18 | 0.0305 | 0.2261 | 0.8224 |
| 0.0474 | 52.0 | 5200 | 0.1368 | 0.18 | 0.8958 | 11.5617 | 0.18 | 0.0305 | 0.2205 | 0.8222 |
| 0.0474 | 53.0 | 5300 | 0.1391 | 0.18 | 0.8955 | 11.5519 | 0.18 | 0.0305 | 0.2312 | 0.8225 |
| 0.0474 | 54.0 | 5400 | 0.1366 | 0.18 | 0.8947 | 12.2068 | 0.18 | 0.0305 | 0.2231 | 0.8231 |
| 0.047 | 55.0 | 5500 | 0.1355 | 0.19 | 0.8943 | 11.5922 | 0.19 | 0.0641 | 0.2299 | 0.8248 |
| 0.047 | 56.0 | 5600 | 0.1386 | 0.17 | 0.8930 | 11.8204 | 0.17 | 0.0705 | 0.2240 | 0.5968 |
| 0.047 | 57.0 | 5700 | 0.1364 | 0.33 | 0.8936 | 11.0092 | 0.33 | 0.1878 | 0.3195 | 0.4381 |
| 0.047 | 58.0 | 5800 | 0.1368 | 0.27 | 0.8923 | 11.0463 | 0.27 | 0.1541 | 0.2874 | 0.5187 |
| 0.047 | 59.0 | 5900 | 0.1328 | 0.325 | 0.8915 | 10.5269 | 0.325 | 0.1702 | 0.3247 | 0.4469 |
| 0.0469 | 60.0 | 6000 | 0.1402 | 0.235 | 0.8945 | 9.2940 | 0.235 | 0.1141 | 0.2558 | 0.6612 |
| 0.0469 | 61.0 | 6100 | 0.1387 | 0.345 | 0.8913 | 9.2678 | 0.345 | 0.1657 | 0.3422 | 0.4100 |
| 0.0469 | 62.0 | 6200 | 0.1386 | 0.31 | 0.8891 | 10.1100 | 0.31 | 0.1637 | 0.3134 | 0.4609 |
| 0.0469 | 63.0 | 6300 | 0.1379 | 0.34 | 0.8892 | 9.1965 | 0.34 | 0.1582 | 0.3388 | 0.4344 |
| 0.0469 | 64.0 | 6400 | 0.1375 | 0.335 | 0.8876 | 9.2252 | 0.335 | 0.1624 | 0.3356 | 0.4239 |
| 0.0469 | 65.0 | 6500 | 0.1357 | 0.345 | 0.8868 | 9.1887 | 0.345 | 0.1659 | 0.3361 | 0.4061 |
| 0.0469 | 66.0 | 6600 | 0.1394 | 0.345 | 0.8850 | 9.1819 | 0.345 | 0.1641 | 0.3398 | 0.4265 |
| 0.0469 | 67.0 | 6700 | 0.1410 | 0.34 | 0.8850 | 9.1158 | 0.34 | 0.1590 | 0.3328 | 0.4302 |
| 0.0469 | 68.0 | 6800 | 0.1387 | 0.295 | 0.8814 | 9.2693 | 0.295 | 0.1374 | 0.3039 | 0.4572 |
| 0.0469 | 69.0 | 6900 | 0.1385 | 0.335 | 0.8814 | 9.1526 | 0.335 | 0.1668 | 0.3324 | 0.4205 |
| 0.0463 | 70.0 | 7000 | 0.1392 | 0.34 | 0.8814 | 9.1159 | 0.34 | 0.1546 | 0.3405 | 0.4263 |
| 0.0463 | 71.0 | 7100 | 0.1418 | 0.35 | 0.8820 | 9.1363 | 0.35 | 0.1692 | 0.3436 | 0.4019 |
| 0.0463 | 72.0 | 7200 | 0.1379 | 0.35 | 0.8791 | 9.0483 | 0.35 | 0.1726 | 0.3402 | 0.4226 |
| 0.0463 | 73.0 | 7300 | 0.1405 | 0.33 | 0.8760 | 9.3563 | 0.33 | 0.1731 | 0.3207 | 0.4307 |
| 0.0463 | 74.0 | 7400 | 0.1401 | 0.31 | 0.8769 | 9.4413 | 0.31 | 0.1676 | 0.3099 | 0.4383 |
| 0.0458 | 75.0 | 7500 | 0.1393 | 0.38 | 0.8778 | 9.0788 | 0.38 | 0.1985 | 0.3518 | 0.3976 |
| 0.0458 | 76.0 | 7600 | 0.1384 | 0.39 | 0.8779 | 9.0233 | 0.39 | 0.2027 | 0.3673 | 0.4144 |
| 0.0458 | 77.0 | 7700 | 0.1403 | 0.365 | 0.8818 | 9.1567 | 0.3650 | 0.1953 | 0.3518 | 0.4181 |
| 0.0458 | 78.0 | 7800 | 0.1400 | 0.27 | 0.8725 | 11.0592 | 0.27 | 0.1627 | 0.2896 | 0.4809 |
| 0.0458 | 79.0 | 7900 | 0.1402 | 0.375 | 0.8739 | 9.1158 | 0.375 | 0.1961 | 0.3540 | 0.3929 |
| 0.0455 | 80.0 | 8000 | 0.1401 | 0.315 | 0.8722 | 9.9114 | 0.315 | 0.1771 | 0.3220 | 0.4443 |
| 0.0455 | 81.0 | 8100 | 0.1378 | 0.39 | 0.8761 | 9.0128 | 0.39 | 0.2048 | 0.3642 | 0.4020 |
| 0.0455 | 82.0 | 8200 | 0.1401 | 0.38 | 0.8729 | 9.1624 | 0.38 | 0.2006 | 0.3612 | 0.3924 |
| 0.0455 | 83.0 | 8300 | 0.1391 | 0.38 | 0.8742 | 8.8982 | 0.38 | 0.2048 | 0.3561 | 0.3991 |
| 0.0455 | 84.0 | 8400 | 0.1381 | 0.375 | 0.8734 | 9.0598 | 0.375 | 0.1901 | 0.3567 | 0.4010 |
| 0.0453 | 85.0 | 8500 | 0.1398 | 0.39 | 0.8718 | 9.1407 | 0.39 | 0.2057 | 0.3693 | 0.3892 |
| 0.0453 | 86.0 | 8600 | 0.1389 | 0.37 | 0.8721 | 9.3494 | 0.37 | 0.2006 | 0.3505 | 0.3914 |
| 0.0453 | 87.0 | 8700 | 0.1390 | 0.395 | 0.8743 | 8.7444 | 0.395 | 0.2113 | 0.3724 | 0.3854 |
| 0.0453 | 88.0 | 8800 | 0.1404 | 0.395 | 0.8739 | 8.7654 | 0.395 | 0.2134 | 0.3657 | 0.3925 |
| 0.0453 | 89.0 | 8900 | 0.1409 | 0.385 | 0.8726 | 8.7763 | 0.3850 | 0.2032 | 0.3643 | 0.3963 |
| 0.0451 | 90.0 | 9000 | 0.1403 | 0.39 | 0.8717 | 8.8363 | 0.39 | 0.2055 | 0.3668 | 0.3926 |
| 0.0451 | 91.0 | 9100 | 0.1388 | 0.39 | 0.8719 | 9.2985 | 0.39 | 0.2099 | 0.3662 | 0.3847 |
| 0.0451 | 92.0 | 9200 | 0.1397 | 0.385 | 0.8702 | 9.4449 | 0.3850 | 0.2050 | 0.3535 | 0.3877 |
| 0.0451 | 93.0 | 9300 | 0.1403 | 0.385 | 0.8709 | 8.9790 | 0.3850 | 0.1989 | 0.3473 | 0.3887 |
| 0.0451 | 94.0 | 9400 | 0.1400 | 0.39 | 0.8705 | 9.1647 | 0.39 | 0.2053 | 0.3569 | 0.3865 |
| 0.045 | 95.0 | 9500 | 0.1404 | 0.395 | 0.8712 | 9.1707 | 0.395 | 0.2087 | 0.3688 | 0.3815 |
| 0.045 | 96.0 | 9600 | 0.1404 | 0.385 | 0.8711 | 8.6711 | 0.3850 | 0.1980 | 0.3566 | 0.3867 |
| 0.045 | 97.0 | 9700 | 0.1399 | 0.39 | 0.8706 | 9.1288 | 0.39 | 0.2035 | 0.3610 | 0.3845 |
| 0.045 | 98.0 | 9800 | 0.1400 | 0.385 | 0.8708 | 9.1302 | 0.3850 | 0.1982 | 0.3538 | 0.3870 |
| 0.045 | 99.0 | 9900 | 0.1398 | 0.39 | 0.8712 | 8.8257 | 0.39 | 0.2002 | 0.3660 | 0.3825 |
| 0.0449 | 100.0 | 10000 | 0.1401 | 0.385 | 0.8709 | 8.8462 | 0.3850 | 0.1979 | 0.3606 | 0.3874 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
grace-pro/xlmr-base-finetuned-hausa-2e-4
|
grace-pro
| 2023-07-13T12:14:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T10:57:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlmr-base-finetuned-hausa-2e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-base-finetuned-hausa-2e-4
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Precision: 0.1719
- Recall: 0.0235
- F1: 0.0414
- Accuracy: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2716 | 1.0 | 1312 | 0.2690 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2744 | 2.0 | 2624 | 0.2697 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2735 | 3.0 | 3936 | 0.2693 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2739 | 4.0 | 5248 | 0.2697 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2709 | 5.0 | 6560 | 0.2708 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yassmine/plbart-finetuned-unitTest-1000
|
yassmine
| 2023-07-13T12:04:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"plbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-13T09:49:08Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: plbart-finetuned-unitTest-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-finetuned-unitTest-1000
This model is a fine-tuned version of [uclanlp/plbart-base](https://huggingface.co/uclanlp/plbart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0000
- Bleu: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 92 | 0.9023 | 0.0000 |
| No log | 2.0 | 184 | 0.8401 | 0.0000 |
| No log | 3.0 | 276 | 0.8096 | 0.0000 |
| No log | 4.0 | 368 | 0.7942 | 0.0000 |
| No log | 5.0 | 460 | 0.7848 | 0.0000 |
| 0.943 | 6.0 | 552 | 0.7818 | 0.0000 |
| 0.943 | 7.0 | 644 | 0.7911 | 0.0000 |
| 0.943 | 8.0 | 736 | 0.7874 | 0.0000 |
| 0.943 | 9.0 | 828 | 0.7970 | 0.0000 |
| 0.943 | 10.0 | 920 | 0.8062 | 0.0000 |
| 0.5025 | 11.0 | 1012 | 0.8085 | 0.0000 |
| 0.5025 | 12.0 | 1104 | 0.8179 | 0.0000 |
| 0.5025 | 13.0 | 1196 | 0.8360 | 0.0000 |
| 0.5025 | 14.0 | 1288 | 0.8385 | 0.0000 |
| 0.5025 | 15.0 | 1380 | 0.8470 | 0.0000 |
| 0.5025 | 16.0 | 1472 | 0.8556 | 0.0000 |
| 0.3309 | 17.0 | 1564 | 0.8619 | 0.0000 |
| 0.3309 | 18.0 | 1656 | 0.8701 | 0.0000 |
| 0.3309 | 19.0 | 1748 | 0.8827 | 0.0000 |
| 0.3309 | 20.0 | 1840 | 0.8871 | 0.0000 |
| 0.3309 | 21.0 | 1932 | 0.8970 | 0.0000 |
| 0.2266 | 22.0 | 2024 | 0.8984 | 0.0000 |
| 0.2266 | 23.0 | 2116 | 0.9051 | 0.0000 |
| 0.2266 | 24.0 | 2208 | 0.9188 | 0.0000 |
| 0.2266 | 25.0 | 2300 | 0.9205 | 0.0000 |
| 0.2266 | 26.0 | 2392 | 0.9278 | 0.0000 |
| 0.2266 | 27.0 | 2484 | 0.9333 | 0.0000 |
| 0.1639 | 28.0 | 2576 | 0.9456 | 0.0000 |
| 0.1639 | 29.0 | 2668 | 0.9454 | 0.0000 |
| 0.1639 | 30.0 | 2760 | 0.9522 | 0.0000 |
| 0.1639 | 31.0 | 2852 | 0.9513 | 0.0000 |
| 0.1639 | 32.0 | 2944 | 0.9554 | 0.0000 |
| 0.1251 | 33.0 | 3036 | 0.9661 | 0.0000 |
| 0.1251 | 34.0 | 3128 | 0.9698 | 0.0000 |
| 0.1251 | 35.0 | 3220 | 0.9750 | 0.0000 |
| 0.1251 | 36.0 | 3312 | 0.9722 | 0.0000 |
| 0.1251 | 37.0 | 3404 | 0.9780 | 0.0000 |
| 0.1251 | 38.0 | 3496 | 0.9789 | 0.0000 |
| 0.1019 | 39.0 | 3588 | 0.9825 | 0.0000 |
| 0.1019 | 40.0 | 3680 | 0.9913 | 0.0000 |
| 0.1019 | 41.0 | 3772 | 0.9906 | 0.0000 |
| 0.1019 | 42.0 | 3864 | 0.9922 | 0.0000 |
| 0.1019 | 43.0 | 3956 | 0.9937 | 0.0000 |
| 0.0863 | 44.0 | 4048 | 0.9981 | 0.0000 |
| 0.0863 | 45.0 | 4140 | 0.9979 | 0.0000 |
| 0.0863 | 46.0 | 4232 | 0.9984 | 0.0000 |
| 0.0863 | 47.0 | 4324 | 0.9970 | 0.0000 |
| 0.0863 | 48.0 | 4416 | 1.0003 | 0.0000 |
| 0.0783 | 49.0 | 4508 | 0.9993 | 0.0000 |
| 0.0783 | 50.0 | 4600 | 1.0000 | 0.0000 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2
|
vnktrmnb
| 2023-07-13T11:56:45Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T09:50:00Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3530
- Train End Logits Accuracy: 0.6339
- Train Start Logits Accuracy: 0.6471
- Validation Loss: 0.9662
- Validation End Logits Accuracy: 0.7197
- Validation Start Logits Accuracy: 0.7298
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11957, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.3530 | 0.6339 | 0.6471 | 0.9662 | 0.7197 | 0.7298 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AllenQ/model_archive
|
AllenQ
| 2023-07-13T11:53:36Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-13T11:30:15Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-AllenQ/model_archive
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: car

|
sayali45/falcon-7b
|
sayali45
| 2023-07-13T11:49:26Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T05:13:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Fixedbot/Taxis-v3
|
Fixedbot
| 2023-07-13T11:36:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T11:18:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxis-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Fixedbot/Taxis-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
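An illustrative greedy rollout with the loaded Q-table (the `"qtable"` key and the Gymnasium-style step API follow the Deep RL course conventions and are assumptions):
```python
# Roll out the greedy policy for one episode.
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))   # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```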
|
IbrahemVX2000/kandiskyai2-1
|
IbrahemVX2000
| 2023-07-13T11:29:14Z | 0 | 0 | null |
[
"text-to-image",
"kandinsky",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2023-07-13T11:27:16Z |
---
license: apache-2.0
prior: kandinsky-community/kandinsky-2-1-prior
tags:
- text-to-image
- kandinsky
---
# Kandinsky 2.1
Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas.
It uses the CLIP model as a text and image encoder, and a diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov)
## Usage
Kandinsky 2.1 is available in diffusers!
```bash
pip install diffusers transformers accelerate
```
### Text to image
```python
from diffusers import DiffusionPipeline
import torch
pipe_prior = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16)
pipe_prior.to("cuda")
t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
t2i_pipe.to("cuda")
prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"
image_embeds, negative_image_embeds = pipe_prior(prompt, negative_prompt, guidance_scale=1.0).to_tuple()
image = t2i_pipe(prompt, negative_prompt=negative_prompt, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
image.save("cheeseburger_monster.png")
```

### Text Guided Image-to-Image Generation
```python
from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline
import torch
from PIL import Image
import requests
from io import BytesIO
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image = original_image.resize((768, 512))
# create prior
pipe_prior = KandinskyPriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
# create img2img pipeline
pipe = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.to("cuda")
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
image_embeds, negative_image_embeds = pipe_prior(prompt, negative_prompt).to_tuple()
out = pipe(
prompt,
image=original_image,
image_embeds=image_embeds,
negative_image_embeds=negative_image_embeds,
height=768,
width=768,
strength=0.3,
)
out.images[0].save("fantasy_land.png")
```

### Interpolate
```python
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
from diffusers.utils import load_image
import PIL
import torch
pipe_prior = KandinskyPriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
img1 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
img2 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg"
)
# add all the conditions we want to interpolate, can be either text or image
images_texts = ["a cat", img1, img2]
# specify the weights for each condition in images_texts
weights = [0.3, 0.3, 0.4]
# We can leave the prompt empty
prompt = ""
prior_out = pipe_prior.interpolate(images_texts, weights)
pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(prompt, **prior_out, height=768, width=768).images[0]
image.save("starry_cat.png")
```

## Model Architecture
### Overview
Kandinsky 2.1 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a unet diffusion model, and a decoder.
The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation.
<p float="left">
<img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/>
</p>
Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [mCLIP model](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14). The trained image prior model is then used to generate mCLIP image embeddings for input text prompts. Both the input text prompts and its mCLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image.
### Details
The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution).
The main Text2Image diffusion model was trained on 170M text-image pairs from the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) (an important condition was the presence of images with a resolution of at least 768x768). We used 170M pairs because we kept the UNet diffusion block from Kandinsky 2.0, which allowed us to avoid training it from scratch. Further, at the fine-tuning stage, a separately collected dataset of 2M very high-quality, high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others) from open sources was used.
### Evaluation
We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset in zero-shot mode. The table below presents FID metric values for generative models on COCO_30k.
| | FID (30k)|
|:------|----:|
| eDiff-I (2022) | 6.95 |
| Imagen (2022) | 7.27 |
| Kandinsky 2.1 (2023) | 8.21|
| Stable Diffusion 2.1 (2022) | 8.59 |
| GigaGAN, 512x512 (2023) | 9.09 |
| DALL-E 2 (2022) | 10.39 |
| GLIDE (2022) | 12.24 |
| Kandinsky 1.0 (2022) | 15.40 |
| DALL-E (2021) | 17.89 |
| Kandinsky 2.0 (2022) | 20.00 |
| GLIGEN (2022) | 21.04 |
For more information, please refer to the upcoming technical report.
## BibTex
If you find this repository useful in your research, please cite:
```bibtex
@misc{kandinsky21,
      title = {kandinsky 2.1},
      author = {Arseniy Shakhmatov and Anton Razzhigaev and Aleksandr Nikolich and Vladimir Arkhipkin and Igor Pavlov and Andrey Kuznetsov and Denis Dimitrov},
      year = {2023},
      howpublished = {},
}
```
|
offlinehq/autotrain-slovenian-swear-words-74310139575
|
offlinehq
| 2023-07-13T11:28:35Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:offlinehq/autotrain-data-slovenian-swear-words",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T11:22:57Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- offlinehq/autotrain-data-slovenian-swear-words
co2_eq_emissions:
emissions: 3.733207533466129
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 74310139575
- CO2 Emissions (in grams): 3.7332
## Validation Metrics
- Loss: 0.575
- Accuracy: 0.702
- Precision: 0.682
- Recall: 0.708
- AUC: 0.764
- F1: 0.695
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/offlinehq/autotrain-slovenian-swear-words-74310139575
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("offlinehq/autotrain-slovenian-swear-words-74310139575", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("offlinehq/autotrain-slovenian-swear-words-74310139575", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
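The outputs are raw logits; a minimal post-processing sketch to map them to a predicted label (this step is an illustration, not part of the AutoTrain-generated card; label names come from the model's own config):
```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)
pred_id = probs.argmax(dim=-1).item()
print(model.config.id2label[pred_id], probs[0, pred_id].item())
```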
|
Fixedbot/q-FrozenLake-v1-4x4-noSlippery
|
Fixedbot
| 2023-07-13T11:13:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T11:08:04Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Fixedbot/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
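Once loaded, the Q-table can be used to act greedily in the environment. A minimal rollout sketch, assuming the course's `load_from_hub` helper is in scope and that the pickled dict exposes `"qtable"` and `"env_id"` entries (gymnasium API):
```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```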
|
PraveenJesu/openai-whisper-medium-murf
|
PraveenJesu
| 2023-07-13T11:13:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T11:13:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
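For reference, a sketch of the equivalent `transformers.BitsAndBytesConfig`; the field names map one-to-one onto the values listed above (this is an illustration, not the original training code):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```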
### Framework versions
- PEFT 0.4.0.dev0
|
RushTurtle/crnn_vgg16_bn_20230713-111233
|
RushTurtle
| 2023-07-13T11:13:02Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T11:12:55Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
"arch": "crnn_vgg16_bn",
"train_path": "/tmp/dataset/train3_1100/",
"val_path": "/tmp/dataset/val3_1100/",
"train_samples": 1000,
"val_samples": 20,
"font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
"min_chars": 1,
"max_chars": 12,
"name": null,
"epochs": 3,
"batch_size": 64,
"device": 0,
"input_size": 32,
"lr": 0.001,
"weight_decay": 0,
"workers": 16,
"resume": null,
"vocab": "french",
"test_only": false,
"show_samples": false,
"wb": false,
"push_to_hub": true,
"pretrained": false,
"sched": "cosine",
"amp": false,
"find_lr": false
}
```
|
FreedomIntelligence/HuatuoGPT-13b-delta
|
FreedomIntelligence
| 2023-07-13T11:07:20Z | 24 | 18 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T06:09:35Z |
---
license: apache-2.0
---
Please see our [HuatuoGPT](https://github.com/FreedomIntelligence/HuatuoGPT) project: https://github.com/FreedomIntelligence/HuatuoGPT.
|
mkobos/joules-lorretta-jersey-blouse
|
mkobos
| 2023-07-13T11:06:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:emilianJR/HRA_hyperrealism_art",
"base_model:adapter:emilianJR/HRA_hyperrealism_art",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-13T11:06:11Z |
---
license: creativeml-openrail-m
base_model: emilianJR/HRA_hyperrealism_art
instance_prompt: Joules Lorretta Jersey Blouse
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - joules-lorretta-jersey-blouse
These are LoRA adaptation weights for [emilianJR/HRA_hyperrealism_art](https://huggingface.co/emilianJR/HRA_hyperrealism_art). The weights were trained on the instance prompt "Joules Lorretta Jersey Blouse" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
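A minimal inference sketch, assuming a `diffusers` version where the pipeline-level `load_lora_weights` accepts a Hub repo id (the prompt is the instance prompt above; all other settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "emilianJR/HRA_hyperrealism_art", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mkobos/joules-lorretta-jersey-blouse")  # apply the LoRA adapter

image = pipe("Joules Lorretta Jersey Blouse", num_inference_steps=30).images[0]
image.save("joules_blouse.png")
```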
|
BlueSunflower/gpt2-medium-chess
|
BlueSunflower
| 2023-07-13T10:51:47Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-30T14:13:46Z |
# Model description
GPT-2 medium finetuned on 8 million chess games (short algebraic notation)
Data: Chess DB + sample from lichess + sample from CCRL
Example context: "1-0 2700 1350 1.e4 e5 2.Nf3 Nc6" (white score-black score white_elo black_elo moves)
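For illustration, a minimal generation sketch with the `transformers` text-generation pipeline (greedy decoding and the token budget are assumptions, not part of the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="BlueSunflower/gpt2-medium-chess")

# context format: "<result> <white_elo> <black_elo> <moves in short algebraic notation>"
context = "1-0 2700 1350 1.e4 e5 2.Nf3 Nc6"
out = generator(context, max_new_tokens=8, do_sample=False)
print(out[0]["generated_text"])  # the continuation should contain the next move(s)
```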
# Model results
- ELO (measured against Stockfish) ~ 1340
- % legal moves 98.5%
- checkmates in one move (from BigBench benchmark) - 46.5%
---
license: agpl-3.0
---
|
Virch/q-FrozenLake-v1-4x4-noSlippery
|
Virch
| 2023-07-13T10:51:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T10:43:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Virch/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jpandeinge/DialoGPT-medium-Oshiwambo-Bot
|
jpandeinge
| 2023-07-13T10:48:52Z | 154 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T06:12:35Z |
---
pipeline_tag: conversational
---
|
Shishir1807/Indication_Training-1
|
Shishir1807
| 2023-07-13T10:42:46Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-13T10:40:21Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/Indication_Training-1",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/Indication_Training-1",
use_fast=True,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/Indication_Training-1",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline yourself from the loaded model and tokenizer, handling the preprocessing steps explicitly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/Indication_Training-1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2560)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2560, out_features=7680, bias=True)
(dense): Linear(in_features=2560, out_features=2560, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True)
(dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2560, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=Shishir1807/Indication_Training-1 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
waliaMuskaan011/model5
|
waliaMuskaan011
| 2023-07-13T10:40:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T16:33:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- tasks: automatic speech recognition
### Framework versions
- PEFT 0.4.0.dev0
|
arunboss/triage_R5_model
|
arunboss
| 2023-07-13T10:17:28Z | 218 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T03:02:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: triage_R5_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# triage_R5_model
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0123
- Accuracy: 0.6837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
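For reference, a sketch of the equivalent `transformers.TrainingArguments`; the `output_dir` is a placeholder, and unlisted arguments keep their defaults (which match the Adam betas/epsilon above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="triage_R5_model",      # placeholder: the actual output dir is not given in the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,     # 32 * 4 = total train batch size of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=12,
)
```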
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0452 | 1.0 | 159 | 1.9622 | 0.3814 |
| 1.7034 | 2.0 | 319 | 1.5695 | 0.4923 |
| 1.441 | 3.0 | 479 | 1.4427 | 0.5433 |
| 1.2908 | 4.0 | 639 | 1.2970 | 0.5895 |
| 1.2294 | 5.0 | 798 | 1.2293 | 0.6071 |
| 1.1097 | 6.0 | 958 | 1.1892 | 0.6300 |
| 1.0342 | 7.0 | 1118 | 1.1048 | 0.6546 |
| 0.9644 | 8.0 | 1278 | 1.0731 | 0.6678 |
| 0.8534 | 9.0 | 1437 | 1.0367 | 0.6766 |
| 0.8037 | 10.0 | 1597 | 1.0211 | 0.6802 |
| 0.7765 | 11.0 | 1757 | 1.0073 | 0.6885 |
| 0.7658 | 11.94 | 1908 | 1.0123 | 0.6837 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Devops-hestabit/Othehalf-1.3b-onnx
|
Devops-hestabit
| 2023-07-13T10:08:08Z | 4 | 0 |
transformers
|
[
"transformers",
"onnx",
"gpt_neox",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T09:25:46Z |
---
license: creativeml-openrail-m
---
|
ZoeVN/segformer-scene-parse-150-lora-50-epoch
|
ZoeVN
| 2023-07-13T10:02:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T10:02:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
fnlp/moss-rlhf-reward-model-7B-en
|
fnlp
| 2023-07-13T09:54:07Z | 0 | 9 | null |
[
"llm",
"reward model",
"moss",
"rlhf",
"zh",
"arxiv:2307.04964",
"license:agpl-3.0",
"region:us"
] | null | 2023-07-13T03:12:42Z |
---
license: agpl-3.0
language:
- zh
tags:
- llm
- reward model
- moss
- rlhf
---
# MOSS-RLHF
### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]*
## 🌟 News
### 👉 Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>
### 👉 Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>
## 🧾 Open-source List
- [x] Open source code for RL training in large language models.
- [x] A 7B Chinese reward model based on openChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...
## 🌠 Introduction
The challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, pose a significant barrier to the development of technical alignment and the safe deployment of LLMs. The stable training of RLHF remains a puzzle.
In this technical report, we aim to help researchers train their models stably with human feedback.
Contributions are summarized as follows:
1) We release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training;
3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
## 🔩 Requirements & Setup
This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the **conda** virtual environment to run the code.
#### Step 1: Create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```
#### Step 2: Install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```
#### Step 3: Install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```
## ✨ Start training your own model!
You can run the code in a few steps.
### Step 1: Recover Reward model weights
We cannot directly release the full weights of the reward model because of protocol restrictions.
You can merge the diff weights with the original Llama-7B to recover the reward model we used.
We have uploaded the diff models; thanks to tatsu-lab, you can recover the reward model by following these steps:
```bash
# 1) Download the weight diff into your local machine. The weight diff is located at:
# For English:
# TODO
# For Chinese:
# https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main
# 2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
# TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```
### Step 2: Select your own SFT model.
Because of some limitations, we cannot release the **Chinese** SFT model currently.
You can use your own SFT model, or a strong base model instead of our SFT model.
### Step 3: Start training
Run the command below.
```bash
# For Chinese:
# You need to use your own sft model currently.
bash run_zh.sh
# For English:
# We have uploaded the SFT model and reward model to Hugging Face.
bash run_en.sh
```
## Citation
```bibtex
@article{zheng2023secrets,
title={Secrets of RLHF in Large Language Models Part I: PPO},
author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
year={2023},
eprint={2307.04964},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
gaioNL/roberta-base_ag_news
|
gaioNL
| 2023-07-13T09:49:21Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T04:49:51Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- ag_news
model-index:
- name: roberta-base_ag_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_ag_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4306 | 1.0 | 15000 | 1.3696 |
| 1.0725 | 2.0 | 30000 | 0.9407 |
| 0.8715 | 3.0 | 45000 | 0.7991 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pigliketoeat/distilgpt2-finetuned-wikitext2
|
pigliketoeat
| 2023-07-13T09:45:58Z | 200 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T08:51:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dada325/Taxi-v3-qLearning-test
|
dada325
| 2023-07-13T09:34:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T09:34:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-qLearning-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="dada325/Taxi-v3-qLearning-test", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Fixedbot/ppo-Huggy
|
Fixedbot
| 2023-07-13T09:33:07Z | 23 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-13T09:32:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Fixedbot/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KyriaAnnwyn/vit-large-artifacts
|
KyriaAnnwyn
| 2023-07-13T09:26:30Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-07T12:11:49Z |
---
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-large-artifacts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-artifacts
This model is a fine-tuned version of [kakaobrain/vit-large-patch16-512](https://huggingface.co/kakaobrain/vit-large-patch16-512) on the KyriaAnnwyn/artifacts_ds dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5995
- Accuracy: 0.6705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7001 | 0.01 | 100 | 0.6414 | 0.6559 |
| 0.6288 | 0.01 | 200 | 0.6666 | 0.6559 |
| 0.7237 | 0.02 | 300 | 0.7087 | 0.6559 |
| 0.8741 | 0.03 | 400 | 0.6739 | 0.6257 |
| 0.6093 | 0.04 | 500 | 0.6462 | 0.6559 |
| 0.5801 | 0.04 | 600 | 0.6822 | 0.6559 |
| 0.594 | 0.05 | 700 | 1.9948 | 0.6395 |
| 0.7724 | 0.06 | 800 | 0.6566 | 0.6553 |
| 0.6976 | 0.07 | 900 | 0.6774 | 0.6325 |
| 0.6583 | 0.07 | 1000 | 0.7175 | 0.3517 |
| 0.6779 | 0.08 | 1100 | 0.7012 | 0.6559 |
| 0.6478 | 0.09 | 1200 | 0.6336 | 0.6559 |
| 0.7405 | 0.1 | 1300 | 0.6577 | 0.6559 |
| 0.7362 | 0.1 | 1400 | 0.6630 | 0.6142 |
| 0.535 | 0.11 | 1500 | 0.7445 | 0.6559 |
| 0.7338 | 0.12 | 1600 | 0.7046 | 0.4718 |
| 0.6519 | 0.13 | 1700 | 0.6601 | 0.6426 |
| 0.5969 | 0.13 | 1800 | 0.6518 | 0.6559 |
| 0.5992 | 0.14 | 1900 | 0.6544 | 0.6559 |
| 0.5762 | 0.15 | 2000 | 0.6608 | 0.6559 |
| 0.6483 | 0.16 | 2100 | 0.6436 | 0.6331 |
| 0.7594 | 0.16 | 2200 | 0.7562 | 0.5213 |
| 0.6423 | 0.17 | 2300 | 0.6326 | 0.6433 |
| 0.7006 | 0.18 | 2400 | 0.6669 | 0.6108 |
| 0.833 | 0.19 | 2500 | 0.7043 | 0.6559 |
| 0.6133 | 0.19 | 2600 | 0.6356 | 0.6532 |
| 0.5285 | 0.2 | 2700 | 0.6619 | 0.6606 |
| 0.7209 | 0.21 | 2800 | 0.7306 | 0.4196 |
| 0.682 | 0.22 | 2900 | 0.6400 | 0.6539 |
| 0.7148 | 0.22 | 3000 | 0.6421 | 0.6559 |
| 0.6288 | 0.23 | 3100 | 0.7416 | 0.6559 |
| 0.666 | 0.24 | 3200 | 0.6368 | 0.6293 |
| 0.772 | 0.25 | 3300 | 0.6973 | 0.4985 |
| 0.6778 | 0.25 | 3400 | 0.6288 | 0.6604 |
| 0.5939 | 0.26 | 3500 | 0.6566 | 0.6559 |
| 0.6246 | 0.27 | 3600 | 0.6347 | 0.6618 |
| 0.649 | 0.28 | 3700 | 0.6353 | 0.6277 |
| 0.7122 | 0.28 | 3800 | 0.6407 | 0.6559 |
| 0.6292 | 0.29 | 3900 | 0.6776 | 0.6560 |
| 0.6079 | 0.3 | 4000 | 0.6220 | 0.6609 |
| 0.6971 | 0.31 | 4100 | 0.6258 | 0.6394 |
| 0.7131 | 0.31 | 4200 | 0.7202 | 0.6556 |
| 0.5346 | 0.32 | 4300 | 0.6394 | 0.6571 |
| 0.5801 | 0.33 | 4400 | 0.6960 | 0.6664 |
| 0.6806 | 0.34 | 4500 | 0.6339 | 0.6348 |
| 0.6245 | 0.34 | 4600 | 0.6226 | 0.6477 |
| 0.6905 | 0.35 | 4700 | 0.6203 | 0.6533 |
| 0.741 | 0.36 | 4800 | 0.6464 | 0.6680 |
| 0.5712 | 0.37 | 4900 | 0.6162 | 0.6640 |
| 0.5566 | 0.37 | 5000 | 0.6182 | 0.6507 |
| 0.6443 | 0.38 | 5100 | 0.6457 | 0.6664 |
| 0.6107 | 0.39 | 5200 | 0.6092 | 0.6617 |
| 0.5824 | 0.4 | 5300 | 0.6383 | 0.6571 |
| 0.4775 | 0.4 | 5400 | 0.6606 | 0.6621 |
| 0.7114 | 0.41 | 5500 | 0.6179 | 0.6619 |
| 0.7701 | 0.42 | 5600 | 0.7982 | 0.4217 |
| 0.6974 | 0.42 | 5700 | 0.6223 | 0.6540 |
| 0.6669 | 0.43 | 5800 | 0.6249 | 0.6559 |
| 0.6982 | 0.44 | 5900 | 0.6287 | 0.6564 |
| 0.5811 | 0.45 | 6000 | 0.6104 | 0.6506 |
| 0.4347 | 0.45 | 6100 | 1.0475 | 0.6559 |
| 0.5885 | 0.46 | 6200 | 0.6125 | 0.6552 |
| 0.6867 | 0.47 | 6300 | 0.6435 | 0.6468 |
| 0.6088 | 0.48 | 6400 | 0.6047 | 0.6623 |
| 0.8194 | 0.48 | 6500 | 0.6972 | 0.6589 |
| 0.8182 | 0.49 | 6600 | 0.6053 | 0.6644 |
| 0.6104 | 0.5 | 6700 | 0.7375 | 0.6571 |
| 0.5552 | 0.51 | 6800 | 0.6231 | 0.6402 |
| 0.6451 | 0.51 | 6900 | 0.6452 | 0.6561 |
| 0.7849 | 0.52 | 7000 | 0.6177 | 0.6612 |
| 0.64 | 0.53 | 7100 | 0.6307 | 0.6234 |
| 0.6393 | 0.54 | 7200 | 0.6130 | 0.6554 |
| 0.8326 | 0.54 | 7300 | 0.7210 | 0.6421 |
| 0.6579 | 0.55 | 7400 | 0.6227 | 0.6544 |
| 0.5195 | 0.56 | 7500 | 0.6619 | 0.6557 |
| 0.6197 | 0.57 | 7600 | 0.6354 | 0.6498 |
| 0.8507 | 0.57 | 7700 | 0.6820 | 0.6550 |
| 0.7163 | 0.58 | 7800 | 0.6720 | 0.5328 |
| 0.6896 | 0.59 | 7900 | 0.6530 | 0.6386 |
| 0.62 | 0.6 | 8000 | 0.6296 | 0.6559 |
| 0.8254 | 0.6 | 8100 | 0.6752 | 0.6200 |
| 0.7653 | 0.61 | 8200 | 0.7118 | 0.6558 |
| 0.7742 | 0.62 | 8300 | 0.6262 | 0.6497 |
| 0.6861 | 0.63 | 8400 | 0.6799 | 0.5566 |
| 0.5652 | 0.63 | 8500 | 0.6708 | 0.6559 |
| 0.7486 | 0.64 | 8600 | 0.6319 | 0.6559 |
| 0.6204 | 0.65 | 8700 | 0.6407 | 0.6530 |
| 0.673 | 0.66 | 8800 | 0.7154 | 0.4672 |
| 0.7272 | 0.66 | 8900 | 0.6323 | 0.6528 |
| 0.7364 | 0.67 | 9000 | 0.6436 | 0.6188 |
| 0.71 | 0.68 | 9100 | 0.6507 | 0.5924 |
| 0.6767 | 0.69 | 9200 | 0.6347 | 0.6575 |
| 0.7046 | 0.69 | 9300 | 0.6723 | 0.6127 |
| 0.7486 | 0.7 | 9400 | 0.6328 | 0.6485 |
| 0.7646 | 0.71 | 9500 | 0.6244 | 0.6550 |
| 0.5971 | 0.72 | 9600 | 0.6610 | 0.6558 |
| 0.6195 | 0.72 | 9700 | 0.6219 | 0.6515 |
| 0.6891 | 0.73 | 9800 | 0.6300 | 0.6619 |
| 0.6829 | 0.74 | 9900 | 0.6312 | 0.6568 |
| 0.4786 | 0.75 | 10000 | 0.7160 | 0.6573 |
| 0.6093 | 0.75 | 10100 | 0.6245 | 0.6503 |
| 0.672 | 0.76 | 10200 | 0.6248 | 0.6577 |
| 0.6734 | 0.77 | 10300 | 0.6541 | 0.6600 |
| 0.7826 | 0.78 | 10400 | 0.6413 | 0.6559 |
| 0.6851 | 0.78 | 10500 | 0.6478 | 0.6006 |
| 0.6776 | 0.79 | 10600 | 0.6453 | 0.6175 |
| 0.7322 | 0.8 | 10700 | 0.6188 | 0.6353 |
| 0.5144 | 0.81 | 10800 | 0.6762 | 0.6571 |
| 0.6977 | 0.81 | 10900 | 0.6559 | 0.6544 |
| 0.5681 | 0.82 | 11000 | 0.7225 | 0.6559 |
| 0.6449 | 0.83 | 11100 | 0.6372 | 0.6576 |
| 0.6067 | 0.83 | 11200 | 0.6207 | 0.6391 |
| 0.5921 | 0.84 | 11300 | 0.6178 | 0.6538 |
| 0.5373 | 0.85 | 11400 | 0.7370 | 0.6559 |
| 0.6926 | 0.86 | 11500 | 0.6346 | 0.6372 |
| 0.6634 | 0.86 | 11600 | 0.6274 | 0.6489 |
| 0.61 | 0.87 | 11700 | 0.6309 | 0.6427 |
| 0.6214 | 0.88 | 11800 | 0.6273 | 0.6480 |
| 0.6202 | 0.89 | 11900 | 0.6255 | 0.6559 |
| 0.6153 | 0.89 | 12000 | 0.6348 | 0.6459 |
| 0.7062 | 0.9 | 12100 | 0.6283 | 0.6512 |
| 0.6977 | 0.91 | 12200 | 0.6159 | 0.6515 |
| 0.6041 | 0.92 | 12300 | 0.6251 | 0.6504 |
| 0.6609 | 0.92 | 12400 | 0.6633 | 0.5870 |
| 0.7565 | 0.93 | 12500 | 0.6200 | 0.6562 |
| 0.6133 | 0.94 | 12600 | 0.6193 | 0.6527 |
| 0.7066 | 0.95 | 12700 | 0.6279 | 0.6180 |
| 0.5706 | 0.95 | 12800 | 0.6128 | 0.6575 |
| 0.6992 | 0.96 | 12900 | 0.6334 | 0.6449 |
| 0.6834 | 0.97 | 13000 | 0.6258 | 0.6591 |
| 0.6069 | 0.98 | 13100 | 0.6290 | 0.6620 |
| 0.743 | 0.98 | 13200 | 0.6110 | 0.6562 |
| 0.5226 | 0.99 | 13300 | 0.6165 | 0.6557 |
| 0.7359 | 1.0 | 13400 | 0.6207 | 0.6376 |
| 0.5812 | 1.01 | 13500 | 0.6192 | 0.6559 |
| 0.666 | 1.01 | 13600 | 0.6347 | 0.6602 |
| 0.5489 | 1.02 | 13700 | 0.6107 | 0.6459 |
| 0.701 | 1.03 | 13800 | 0.6172 | 0.6518 |
| 0.4873 | 1.04 | 13900 | 0.6786 | 0.6559 |
| 0.5807 | 1.04 | 14000 | 0.6636 | 0.6433 |
| 0.6824 | 1.05 | 14100 | 0.6176 | 0.6315 |
| 0.6012 | 1.06 | 14200 | 0.6097 | 0.6617 |
| 0.4865 | 1.07 | 14300 | 0.6103 | 0.6623 |
| 0.5612 | 1.07 | 14400 | 0.6947 | 0.6559 |
| 0.5968 | 1.08 | 14500 | 0.6559 | 0.5981 |
| 0.5657 | 1.09 | 14600 | 0.6076 | 0.6509 |
| 0.4778 | 1.1 | 14700 | 0.6808 | 0.6535 |
| 0.6047 | 1.1 | 14800 | 0.6131 | 0.6480 |
| 0.5999 | 1.11 | 14900 | 0.6120 | 0.6559 |
| 0.5852 | 1.12 | 15000 | 0.6356 | 0.6553 |
| 0.7033 | 1.13 | 15100 | 0.6578 | 0.6647 |
| 0.5925 | 1.13 | 15200 | 0.6153 | 0.6633 |
| 0.5959 | 1.14 | 15300 | 0.6306 | 0.6211 |
| 0.5929 | 1.15 | 15400 | 0.6246 | 0.6655 |
| 0.5621 | 1.16 | 15500 | 0.6126 | 0.6424 |
| 0.5508 | 1.16 | 15600 | 0.6844 | 0.6559 |
| 0.6276 | 1.17 | 15700 | 0.6066 | 0.6531 |
| 1.0359 | 1.18 | 15800 | 0.6271 | 0.6617 |
| 0.6191 | 1.19 | 15900 | 0.6166 | 0.6480 |
| 0.7095 | 1.19 | 16000 | 0.6228 | 0.6462 |
| 0.6567 | 1.2 | 16100 | 0.6066 | 0.6653 |
| 0.5653 | 1.21 | 16200 | 0.6022 | 0.6605 |
| 0.6894 | 1.21 | 16300 | 0.6216 | 0.6568 |
| 0.608 | 1.22 | 16400 | 0.6041 | 0.6559 |
| 0.665 | 1.23 | 16500 | 0.6111 | 0.6564 |
| 0.6753 | 1.24 | 16600 | 0.6138 | 0.6581 |
| 0.6213 | 1.24 | 16700 | 0.6121 | 0.6380 |
| 0.6983 | 1.25 | 16800 | 0.6166 | 0.6661 |
| 0.8521 | 1.26 | 16900 | 0.6202 | 0.6461 |
| 0.4927 | 1.27 | 17000 | 0.6313 | 0.6547 |
| 0.6414 | 1.27 | 17100 | 0.6011 | 0.6667 |
| 0.539 | 1.28 | 17200 | 0.6451 | 0.6664 |
| 0.5118 | 1.29 | 17300 | 0.6243 | 0.6641 |
| 0.7512 | 1.3 | 17400 | 0.6257 | 0.6586 |
| 0.5943 | 1.3 | 17500 | 0.6186 | 0.6423 |
| 0.5861 | 1.31 | 17600 | 0.6435 | 0.6638 |
| 0.7065 | 1.32 | 17700 | 0.6197 | 0.6279 |
| 0.5973 | 1.33 | 17800 | 0.6081 | 0.6535 |
| 0.5997 | 1.33 | 17900 | 0.6053 | 0.6608 |
| 0.7091 | 1.34 | 18000 | 0.6013 | 0.6644 |
| 0.691 | 1.35 | 18100 | 0.6103 | 0.6654 |
| 0.5559 | 1.36 | 18200 | 0.6110 | 0.6658 |
| 0.6309 | 1.36 | 18300 | 0.6067 | 0.6664 |
| 0.6262 | 1.37 | 18400 | 0.6027 | 0.6616 |
| 0.5551 | 1.38 | 18500 | 0.6106 | 0.6671 |
| 0.6703 | 1.39 | 18600 | 0.6043 | 0.6576 |
| 0.6849 | 1.39 | 18700 | 0.6018 | 0.6616 |
| 0.6136 | 1.4 | 18800 | 0.6324 | 0.6629 |
| 0.7075 | 1.41 | 18900 | 0.6057 | 0.6561 |
| 0.6036 | 1.42 | 19000 | 0.6081 | 0.6559 |
| 0.6549 | 1.42 | 19100 | 0.6352 | 0.6655 |
| 0.5168 | 1.43 | 19200 | 0.6042 | 0.6632 |
| 0.5864 | 1.44 | 19300 | 0.6111 | 0.6639 |
| 0.5961 | 1.45 | 19400 | 0.6003 | 0.6644 |
| 0.6077 | 1.45 | 19500 | 0.6125 | 0.6566 |
| 0.6215 | 1.46 | 19600 | 0.6128 | 0.6582 |
| 0.4005 | 1.47 | 19700 | 0.6348 | 0.6642 |
| 0.5689 | 1.48 | 19800 | 0.6355 | 0.6647 |
| 0.6026 | 1.48 | 19900 | 0.6127 | 0.6444 |
| 0.4982 | 1.49 | 20000 | 0.6034 | 0.6654 |
| 0.6189 | 1.5 | 20100 | 0.6202 | 0.6609 |
| 0.5502 | 1.51 | 20200 | 0.6044 | 0.6621 |
| 0.5924 | 1.51 | 20300 | 0.6107 | 0.6445 |
| 0.744 | 1.52 | 20400 | 0.6164 | 0.6559 |
| 0.5582 | 1.53 | 20500 | 0.6166 | 0.6559 |
| 0.6994 | 1.54 | 20600 | 0.6109 | 0.6664 |
| 0.5396 | 1.54 | 20700 | 0.6189 | 0.6670 |
| 0.7232 | 1.55 | 20800 | 0.6104 | 0.6610 |
| 0.9802 | 1.56 | 20900 | 0.6232 | 0.6642 |
| 0.6487 | 1.57 | 21000 | 0.6056 | 0.6505 |
| 0.5932 | 1.57 | 21100 | 0.5980 | 0.6702 |
| 0.7897 | 1.58 | 21200 | 0.6012 | 0.6638 |
| 0.6006 | 1.59 | 21300 | 0.6232 | 0.6672 |
| 0.4481 | 1.6 | 21400 | 0.6124 | 0.6676 |
| 0.6078 | 1.6 | 21500 | 0.6495 | 0.6664 |
| 0.595 | 1.61 | 21600 | 0.7122 | 0.6675 |
| 0.6388 | 1.62 | 21700 | 0.6227 | 0.6671 |
| 0.5731 | 1.62 | 21800 | 0.6252 | 0.6682 |
| 0.8603 | 1.63 | 21900 | 0.6026 | 0.6653 |
| 0.6316 | 1.64 | 22000 | 0.6494 | 0.6669 |
| 0.6712 | 1.65 | 22100 | 0.6097 | 0.6676 |
| 0.6102 | 1.65 | 22200 | 0.6221 | 0.6585 |
| 0.7099 | 1.66 | 22300 | 0.6006 | 0.6658 |
| 0.621 | 1.67 | 22400 | 0.6026 | 0.6626 |
| 0.478 | 1.68 | 22500 | 0.6062 | 0.6624 |
| 0.6106 | 1.68 | 22600 | 0.5990 | 0.6669 |
| 0.5793 | 1.69 | 22700 | 0.5980 | 0.6681 |
| 0.5804 | 1.7 | 22800 | 0.6014 | 0.6626 |
| 0.6304 | 1.71 | 22900 | 0.6107 | 0.6380 |
| 0.7427 | 1.71 | 23000 | 0.6051 | 0.6682 |
| 0.5794 | 1.72 | 23100 | 0.6105 | 0.6611 |
| 0.5084 | 1.73 | 23200 | 0.6643 | 0.6673 |
| 0.6518 | 1.74 | 23300 | 0.6366 | 0.6687 |
| 0.5129 | 1.74 | 23400 | 0.6053 | 0.6682 |
| 0.7593 | 1.75 | 23500 | 0.5977 | 0.6662 |
| 0.6645 | 1.76 | 23600 | 0.5988 | 0.6683 |
| 0.6144 | 1.77 | 23700 | 0.6130 | 0.6673 |
| 0.6855 | 1.77 | 23800 | 0.6192 | 0.6596 |
| 0.559 | 1.78 | 23900 | 0.6208 | 0.6574 |
| 0.4202 | 1.79 | 24000 | 0.6125 | 0.6690 |
| 0.6604 | 1.8 | 24100 | 0.6052 | 0.6685 |
| 0.5487 | 1.8 | 24200 | 0.6086 | 0.6685 |
| 0.6816 | 1.81 | 24300 | 0.5997 | 0.6620 |
| 0.6057 | 1.82 | 24400 | 0.6128 | 0.6530 |
| 0.4335 | 1.83 | 24500 | 0.6121 | 0.6676 |
| 0.6147 | 1.83 | 24600 | 0.6225 | 0.6670 |
| 0.7414 | 1.84 | 24700 | 0.6248 | 0.6718 |
| 0.622 | 1.85 | 24800 | 0.6084 | 0.6722 |
| 0.5356 | 1.86 | 24900 | 0.6003 | 0.6611 |
| 0.7994 | 1.86 | 25000 | 0.6098 | 0.6657 |
| 0.5389 | 1.87 | 25100 | 0.6052 | 0.6633 |
| 0.6985 | 1.88 | 25200 | 0.6073 | 0.6694 |
| 0.652 | 1.89 | 25300 | 0.6040 | 0.6709 |
| 0.5409 | 1.89 | 25400 | 0.6065 | 0.6709 |
| 0.6356 | 1.9 | 25500 | 0.6062 | 0.6699 |
| 0.7588 | 1.91 | 25600 | 0.6025 | 0.6711 |
| 0.5109 | 1.92 | 25700 | 0.5992 | 0.6693 |
| 0.6766 | 1.92 | 25800 | 0.6004 | 0.6693 |
| 0.6517 | 1.93 | 25900 | 0.6020 | 0.6701 |
| 0.6561 | 1.94 | 26000 | 0.5995 | 0.6705 |
| 0.6224 | 1.95 | 26100 | 0.6008 | 0.6717 |
| 0.6054 | 1.95 | 26200 | 0.6005 | 0.6714 |
| 0.5152 | 1.96 | 26300 | 0.6023 | 0.6709 |
| 0.5503 | 1.97 | 26400 | 0.6032 | 0.6706 |
| 0.5101 | 1.98 | 26500 | 0.6067 | 0.6709 |
| 0.5229 | 1.98 | 26600 | 0.6079 | 0.6702 |
| 0.8387 | 1.99 | 26700 | 0.6079 | 0.6700 |
| 0.608 | 2.0 | 26800 | 0.6069 | 0.6699 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu116
- Datasets 2.13.1
- Tokenizers 0.13.3
|
imvladikon/het5_small_summarization
|
imvladikon
| 2023-07-13T09:02:58Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"he",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-07-02T12:43:15Z |
---
language:
- he
pipeline_tag: summarization
---
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, SummarizationPipeline
model_name = "imvladikon/het5_small_summarization"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
summarizer = SummarizationPipeline(model=model, tokenizer=tokenizer)
```
Example:
```python
text = """
צרפת ממשיכה לבעור: לאחר ארבעה ימים של עימותים אלימים בין מתפרעים לכוחות הביטחון בכל רחבי צרפת, היום (שבת) התקיימה הלוויתו של הנער האלג'יראי, נאהל בן ה-17, שנורה למוות על ידי שוטר לאחר שנחשד בגניבת רכב. לבקשת משפחתו, ההלוויה התקיימה כאירוע מצומצמם שבו השתתפו בני משפחה וחברים בלבד. לאחר שארונו של נאהל הוצא מהמסגד בעיר נאנטר, אלפים קראו "לעשיית צדק עבורו".במקביל, המשטרה הצרפתית נערכת להמשך המהומות בעשרות מוקדים ברחבי המדינה, כשבמהלך הלילה נעצרו 1,300 בני אדם. משרד הפנים הצרפתי הודיע כי במהלך האירועים הוצתו 1,350 כלי רכב, ו-234 הצתות של מבנים. כמו כן, על פי הנתונים נגרם נזק ל-200 מרכזי קניות, 200 סופרמרקטים ו-250 סניפי בנק.
""".strip()
summarizer(text,
max_length=50,
num_beams=4,
no_repeat_ngram_size=2,
early_stopping=True)[0]["summary_text"]
#נער האלג'יראי, בן 17, נורה למוות על ידי שוטר לאחר שנחשד בגניבת רכב. הלוויתו התקיימה כאירוע מצומצם שבו השתתפו בני משפחה
```
|
soonmo/distilbert-base-uncased-finetuned-clinc
|
soonmo
| 2023-07-13T08:58:26Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T01:45:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7754
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2893 | 1.0 | 318 | 3.2831 | 0.7397 |
| 2.6289 | 2.0 | 636 | 1.8731 | 0.8345 |
| 1.5481 | 3.0 | 954 | 1.1580 | 0.89 |
| 1.0137 | 4.0 | 1272 | 0.8584 | 0.9077 |
| 0.7969 | 5.0 | 1590 | 0.7754 | 0.9161 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aditii09/whisper_eng_asr
|
aditii09
| 2023-07-13T08:58:20Z | 76 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-13T08:45:39Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-base
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 5.008769117619326
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 12.84936273212057
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 131
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions in a *different* language from the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```
This forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Base on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base").to("cuda")
>>> def map_to_pred(batch):
>>>     audio = batch["audio"]
>>>     input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>>     batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>>     with torch.no_grad():
>>>         predicted_ids = model.generate(input_features.to("cuda"))[0]
>>>     transcription = processor.decode(predicted_ids)
>>>     batch["prediction"] = processor.tokenizer._normalize(transcription)
>>>     return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
5.082316555716899
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible through the Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence-level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>>     "automatic-speech-recognition",
>>>     model="openai/whisper-base",
>>>     chunk_length_s=30,
>>>     device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
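As a flavour of what that involves, below is a minimal preprocessing sketch in the spirit of that guide; the Common Voice dataset, its `"sentence"` column, and the Hindi language setting are illustrative assumptions, not part of this card:
```python
from datasets import Audio, load_dataset
from transformers import WhisperProcessor

# language/task here configure the tokenizer for the fine-tuning targets
processor = WhisperProcessor.from_pretrained("openai/whisper-base", language="hindi", task="transcribe")

# hypothetical fine-tuning corpus with "audio" and "sentence" columns
ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    # log-Mel spectrogram features for the encoder
    batch["input_features"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    # tokenised transcription as decoder labels
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)
```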
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; use of the model for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, and technical language, as well as zero-shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse in lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models' transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
|
Jorgeutd
| 2023-07-13T08:54:20Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sagemaker",
"bert-base-uncased",
"text classification",
"en",
"dataset:adecorpusv2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: en
widget:
- text: "I got a rash from taking acetaminophen"
tags:
- sagemaker
- bert-base-uncased
- text classification
license: apache-2.0
datasets:
- adecorpusv2
model-index:
- name: BERT-ade_corpus
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: "ade_corpus_v2Ade_corpus_v2_classification"
      type: ade_corpus
    metrics:
    - name: Validation Accuracy
      type: accuracy
      value: 92.98
    - name: Validation F1
      type: f1
      value: 82.73
---
## bert-base-uncased
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
- Problem type: Text Classification (adverse drug effect detection).
## Hyperparameters
```json
{
"do_eval": true,
"do_train": true,
"fp16": true,
"load_best_model_at_end": true,
"model_name": "bert-base-uncased",
"num_train_epochs": 10,
"per_device_eval_batch_size": 16,
"per_device_train_batch_size": 16,
"learning_rate":5e-5
}
```
## Validation Metrics
| key | value |
| --- | ----- |
| eval_accuracy | 0.9298021697511167 |
| eval_auc | 0.8902672664394546 |
| eval_f1 | 0.827315541601256 |
| eval_loss | 0.17835010588169098 |
| eval_recall | 0.8234375 |
| eval_precision | 0.831230283911672 |
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I got a rash from taking acetaminophen"}' https://api-inference.huggingface.co/models/Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
```
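Alternatively, a minimal `transformers` sketch; the exact label names returned depend on the model's config and are not shown here:
```python
from transformers import pipeline

# downloads the model and tokenizer from the Hub on first use
classifier = pipeline("text-classification", model="Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2")
# returns a list like [{"label": ..., "score": ...}]
print(classifier("I got a rash from taking acetaminophen"))
```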
"""
|
digiplay/hellopure_v2.24Beta
|
digiplay
| 2023-07-13T08:49:07Z | 70 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T04:21:25Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
👍👍👍👍👍
https://civitai.com/models/88202/hellopure
Other models from Author: https://civitai.com/user/aji1/models

Sample image I made with AUTOMATIC1111:

Parameters:
very close-up ,(best beautiful:1.2), (masterpiece:1.2), (best quality:1.2),masterpiece, best quality, The image features a beautiful young woman with long light golden hair, beach near the ocean, white dress ,The beach is lined with palm trees,
Negative prompt: worst quality ,normal quality ,
Steps: 17, Sampler: Euler, CFG scale: 5, Seed: 1097775045, Size: 480x680, Model hash: 8d4fa7988b, Clip skip: 2, Version: v1.4.1
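Since the repo ships diffusers weights, here is a minimal text-to-image sketch; the prompt is adapted from the sample above, and the AUTOMATIC1111 sampler and CLIP-skip settings will not carry over automatically:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/hellopure_v2.24Beta", torch_dtype=torch.float16
).to("cuda")

prompt = "very close-up, (masterpiece:1.2), (best quality:1.2), a beautiful young woman with long light golden hair, beach near the ocean, white dress"
image = pipe(
    prompt,
    negative_prompt="worst quality, normal quality",
    num_inference_steps=17,
    guidance_scale=5,
).images[0]
image.save("sample.png")
```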
|
Krelyshy/Heavy
|
Krelyshy
| 2023-07-13T08:48:28Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2023-07-12T20:34:24Z |
---
language:
- en
---
# Heavy (Misha) - Team Fortress 2 [RVC V2] [305 Epochs]
Created by @Krelyshy on Discord; use freely.
Download: https://huggingface.co/Krelyshy/Heavy/resolve/main/heavy-krel.zip
Backup: https://drive.google.com/file/d/1osCZrtcx0Gtc-8nthZ6L1Pm5nRMi8kxk/view?usp=drive_link
|
gabrielgme/falcon-7b-spider-with-schema
|
gabrielgme
| 2023-07-13T08:44:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T13:21:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch using this config follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
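A minimal sketch for loading this adapter with the config above; the base checkpoint (`tiiuae/falcon-7b`) is an assumption inferred from the repo name:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# mirrors the bitsandbytes config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "gabrielgme/falcon-7b-spider-with-schema")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```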
### Framework versions
- PEFT 0.4.0.dev0
|
seongj/polyglot-ko-1.3b-quant
|
seongj
| 2023-07-13T08:24:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-12T10:52:42Z |
PyTorch quantized model
- Dynamic quantization: INT8
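A minimal sketch of how such an INT8 dynamically quantized model is typically produced with PyTorch; the starting checkpoint (`EleutherAI/polyglot-ko-1.3b`) is an assumption inferred from the repo name:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-1.3b")
# dynamic quantization: Linear weights stored as INT8, activations quantized on the fly
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```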
|
uripper/AVA
|
uripper
| 2023-07-13T08:15:52Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-22T20:54:37Z |
---
license: cc
widget:
- text: "Movie: Parasite Score:"
example_title: "Parasite"
- text: "Movie: Come and See Score:"
example_title: "Come and See"
- text: "Movie: Harakiri Score:"
example_title: "Harakiri"
---
# Review Training Bot
This model was trained for the purpose of generating scores and reviews for any given movie. It is fine-tuned on distilgpt2 as a baseline and trained on a custom dataset created by scraping around 120k Letterboxd reviews. The current state of the model produces the correct formatting reliably but is often prone to gibberish. Further training will hopefully add coherency. It is currently at version 0.1.
## Intended uses & limitations
This model is intended to be used for entertainment.
Limitations for this model will be much the same as for distilgpt2, which can be viewed here: https://huggingface.co/distilgpt2. These may include persistent biases. Another issue may be language specific to Letterboxd that the model cannot interpret correctly; i.e., an LGBT+ film on Letterboxd may have multiple reviews that use the word "gay" positively, but this model has not learned this contextual usage and may use the word as a slur. As the current model also struggles to find a connection between movie titles and reviews, this could happen with any entered movie.
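A minimal generation sketch using the prompt format from the widget examples above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="uripper/AVA")
# the model expects the "Movie: <title> Score:" format used in the widget examples
print(generator("Movie: Parasite Score:", max_new_tokens=40)[0]["generated_text"])
```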
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 10
- eval_batch_size: 20
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
HoaAn2003/ppo-Huggy
|
HoaAn2003
| 2023-07-13T08:13:54Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-13T08:13:06Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: HoaAn2003/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ablustrund/moss-rlhf-reward-model-7B-zh
|
Ablustrund
| 2023-07-13T08:10:42Z | 3 | 23 | null |
[
"llm",
"reward model",
"moss",
"rlhf",
"zh",
"arxiv:2307.04964",
"license:agpl-3.0",
"region:us"
] | null | 2023-07-12T02:27:02Z |
---
license: agpl-3.0
language:
- zh
tags:
- llm
- reward model
- moss
- rlhf
---
# MOSS-RLHF
### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]*
## 🌟 News
### 👉 Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>
### 👉 Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>
## 🧾 Open-source List
- [x] Open source code for RL training in large language models.
- [x] A 7B Chinese reward model based on openChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...
## 🌠 Introduction
Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers to develop technical alignment and deploy LLMs safely. Stable RLHF training remains a puzzle.
In this technical report, we intend to help researchers train their models stably with human feedback.
Contributions are summarized as follows:
1) We release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training;
3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
## 🔩 Requirements & Setup
This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the **conda** virtual environment to run the code.
#### Step 1: Create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```
#### Step 2: Install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```
#### Step 3: Install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```
## ✨ Start training your own model!
Run code in a few steps.
### Step 1: Recover Reward model weights
We cannot directly release the full weights of the reward model because of protocol restrictions.
You can merge the diff weights with the original Llama-7B to recover the reward model we used.
We upload the diff models (thanks to tatsu-lab); you can recover the reward model by following these steps:
```bash
1) Download the weight diff into your local machine. The weight diff is located at:
# For English:
TODO
# For Chinese:
https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main
2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```
### Step 2: Select your own SFT model.
Because of some limitations, we currently cannot release the **Chinese** SFT model.
You can use your own SFT model, or a strong base model, instead of our SFT model.
### Step 3: Start training
Run the command below.
```bash
# For Chinese:
# You need to use your own sft model currently.
bash run_zh.sh
# For English:
# We have loaded the sft model and reward model to huggingface.
bash run_en.sh
```
## Citation
```bibtex
@article{zheng2023secrets,
  title={Secrets of RLHF in Large Language Models Part I: PPO},
  author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
  year={2023},
  eprint={2307.04964},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
|
yubuu/path-to-save-model
|
yubuu
| 2023-07-13T08:03:07Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T07:51:30Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - yubuu/path-to-save-model
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
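A minimal inference sketch using the instance prompt the weights were trained on:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yubuu/path-to-save-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks-dog.png")
```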
|
nicotaroni/fine_tuned_classification
|
nicotaroni
| 2023-07-13T07:58:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T09:04:12Z |
---
pipeline_tag: text2text-generation
---
|
saeedehj/led-base-finetune-cnn
|
saeedehj
| 2023-07-13T07:50:12Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T22:27:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-finetune-cnn
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-base-16384-finetune-cnn
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2020
- Rouge1: 24.2258
- Rouge2: 9.0151
- Rougel: 19.0336
- Rougelsum: 22.2604
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8988 | 1.0 | 2000 | 2.0031 | 25.1709 | 10.0426 | 20.1311 | 23.1639 | 20.0 |
| 1.6038 | 2.0 | 4000 | 2.0314 | 25.0213 | 9.8701 | 19.8987 | 23.0129 | 20.0 |
| 1.3352 | 3.0 | 6000 | 2.1124 | 24.99 | 9.905 | 19.9566 | 23.0973 | 20.0 |
| 1.1173 | 4.0 | 8000 | 2.2055 | 25.0568 | 10.0949 | 19.9602 | 23.18 | 20.0 |
| 0.9566 | 5.0 | 10000 | 2.3262 | 24.941 | 9.5856 | 19.6285 | 23.042 | 20.0 |
| 0.7986 | 6.0 | 12000 | 2.4489 | 24.4114 | 9.2808 | 19.3296 | 22.5481 | 20.0 |
| 0.6685 | 7.0 | 14000 | 2.5211 | 24.467 | 9.5124 | 19.2685 | 22.5624 | 20.0 |
| 0.5601 | 8.0 | 16000 | 2.6299 | 24.6939 | 9.6533 | 19.4627 | 22.8048 | 20.0 |
| 0.4757 | 9.0 | 18000 | 2.7185 | 24.2098 | 9.1232 | 19.0181 | 22.4085 | 20.0 |
| 0.3926 | 10.0 | 20000 | 2.7947 | 24.5092 | 9.3964 | 19.2593 | 22.5592 | 20.0 |
| 0.3391 | 11.0 | 22000 | 2.8626 | 24.4731 | 9.3634 | 19.2966 | 22.5688 | 20.0 |
| 0.2872 | 12.0 | 24000 | 2.9175 | 24.5587 | 9.3888 | 19.3335 | 22.6443 | 20.0 |
| 0.2479 | 13.0 | 26000 | 2.9658 | 24.2983 | 9.1038 | 19.019 | 22.3675 | 20.0 |
| 0.213 | 14.0 | 28000 | 3.0273 | 24.4196 | 9.1481 | 19.0458 | 22.5135 | 20.0 |
| 0.1828 | 15.0 | 30000 | 3.0751 | 24.3283 | 9.2334 | 18.9771 | 22.3322 | 20.0 |
| 0.1608 | 16.0 | 32000 | 3.1185 | 24.3965 | 9.2047 | 19.0899 | 22.4666 | 20.0 |
| 0.1442 | 17.0 | 34000 | 3.1494 | 24.3832 | 9.1915 | 19.077 | 22.4366 | 20.0 |
| 0.1293 | 18.0 | 36000 | 3.1738 | 24.3796 | 9.1132 | 19.1015 | 22.3862 | 20.0 |
| 0.1165 | 19.0 | 38000 | 3.2073 | 24.2804 | 9.1018 | 19.0692 | 22.3023 | 20.0 |
| 0.1118 | 20.0 | 40000 | 3.2020 | 24.2258 | 9.0151 | 19.0336 | 22.2604 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jslin09/LegalChatbot-bloom-3b
|
jslin09
| 2023-07-13T07:45:16Z | 19 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-06T02:44:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
|
hoanghoavienvo/bert-large-uncased-stage-2-v1
|
hoanghoavienvo
| 2023-07-13T07:35:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T01:34:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-stage-2-v1
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-stage-2-v1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4491
- Accuracy: 0.8317
- F1: 0.8995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3824 | 0.83 | 0.8998 |
| 0.4209 | 2.0 | 938 | 0.3631 | 0.8533 | 0.9159 |
| 0.3378 | 3.0 | 1407 | 0.4491 | 0.8317 | 0.8995 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
JeffreyHuang/llm-selector
|
JeffreyHuang
| 2023-07-13T07:30:31Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T04:16:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: llm-selector
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llm-selector
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7315
- Accuracy: 0.5048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 118 | 1.8920 | 0.3714 |
| No log | 2.0 | 236 | 1.7753 | 0.5143 |
| No log | 3.0 | 354 | 1.7671 | 0.4952 |
| No log | 4.0 | 472 | 1.7441 | 0.5048 |
| 1.8665 | 5.0 | 590 | 1.7315 | 0.5048 |
| 1.8665 | 6.0 | 708 | 1.7413 | 0.5048 |
| 1.8665 | 7.0 | 826 | 1.7378 | 0.4667 |
| 1.8665 | 8.0 | 944 | 1.7426 | 0.4667 |
| 1.7254 | 9.0 | 1062 | 1.7513 | 0.4476 |
| 1.7254 | 10.0 | 1180 | 1.7513 | 0.4476 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
K024/chatglm2-6b-int8
|
K024
| 2023-07-13T07:18:11Z | 49 | 1 |
transformers
|
[
"transformers",
"ChatGLM2Model",
"glm",
"chatglm",
"thudm",
"zh",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T07:13:41Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2-6B int8 Quantized Model
See [K024/chatglm-q](https://github.com/K024/chatglm-q) for more details.
```python
import torch
from chatglm_q.decoder import ChatGLMDecoder, chat_template
device = torch.device("cuda")
decoder = ChatGLMDecoder.from_pretrained("K024/chatglm2-6b-int8", device=device)
prompt = chat_template([], "我是谁?")
for text in decoder.generate(prompt):
    print(text)
```
Model weights are released under the same license as ChatGLM2-6b, see [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE).
|
smithlai/qtable-taxi-v3
|
smithlai
| 2023-07-13T07:12:19Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T07:10:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: qtable-taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.76
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course:
# it downloads the pickled model dict with hf_hub_download and unpickles it
model = load_from_hub(repo_id="smithlai/qtable-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
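Once loaded, acting greedily with the Q-table looks like the sketch below; it assumes the pickled dict exposes a `"qtable"` key, as in the Deep RL course template:
```python
import numpy as np

state, info = env.reset()
terminated, truncated = False, False
while not (terminated or truncated):
    action = np.argmax(model["qtable"][state])  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
```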
|
preetham/rpanda1
|
preetham
| 2023-07-13T07:10:56Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T06:22:15Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks panda
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - preetham/rpanda1
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks panda using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
kaelee/llava-lightning-mpt-7b-chat-pretrain
|
kaelee
| 2023-07-13T07:08:09Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llava_mpt",
"text-generation",
"custom_code",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T00:20:35Z |
---
license: cc-by-nc-sa-4.0
---
|
KevinHemsig/my_awesome_qa_model
|
KevinHemsig
| 2023-07-13T07:05:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T04:30:18Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KevinHemsig/my_awesome_qa_model
  results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KevinHemsig/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5159
- Validation Loss: 1.6940
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3612 | 2.0301 | 0 |
| 1.7557 | 1.6940 | 1 |
| 1.5159 | 1.6940 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ajaydvrj/dataset2
|
ajaydvrj
| 2023-07-13T06:48:15Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T12:07:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: dataset2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dataset2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 5.9615 |
| No log | 2.0 | 2 | 5.8187 |
| No log | 3.0 | 3 | 5.7431 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
xian79/a2c-AntBulletEnv-v0
|
xian79
| 2023-07-13T06:43:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T06:28:44Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 1080.97 +/- 252.97
      name: mean_reward
      verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
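Until then, a minimal loading sketch; the checkpoint filename inside the repo is an assumption:
```python
import pybullet_envs  # registers AntBulletEnv-v0 with gym
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="xian79/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```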
|
YanJiangJerry/SA-tweet-bert-large-e6-w1-1.5-b16-m4
|
YanJiangJerry
| 2023-07-13T06:33:15Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T05:57:56Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-tweet-bert-large-e6-w1-1.5-b16-m4
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-tweet-bert-large-e6-w1-1.5-b16-m4
This model is a fine-tuned version of [vinai/bertweet-large](https://huggingface.co/vinai/bertweet-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4333
- Accuracy: 0.933
- F1: 0.9406
- Precision: 0.9414
- Recall: 0.9397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.2078 | 0.917 | 0.9280 | 0.9083 | 0.9486 |
| 0.2897 | 2.0 | 570 | 0.2084 | 0.92 | 0.9313 | 0.9033 | 0.9610 |
| 0.2897 | 3.0 | 855 | 0.2873 | 0.925 | 0.9343 | 0.9237 | 0.9450 |
| 0.1152 | 4.0 | 1140 | 0.3181 | 0.933 | 0.9408 | 0.9383 | 0.9433 |
| 0.1152 | 5.0 | 1425 | 0.4471 | 0.93 | 0.9382 | 0.9349 | 0.9415 |
| 0.036 | 6.0 | 1710 | 0.4333 | 0.933 | 0.9406 | 0.9414 | 0.9397 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/Wizard-Vicuna-7B-Uncensored-GPTQ
|
localmodels
| 2023-07-13T06:20:44Z | 7 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T06:20:41Z |
---
duplicated_from: localmodels/LLM
---
# Wizard Vicuna 7B Uncensored GPTQ
From: https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored
---
## Model
* `Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ.
* Parameters: Groupsize = 128g. No act-order.
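A minimal AutoGPTQ loading sketch; API details vary between AutoGPTQ versions, so treat this as a rough guide rather than the card's official usage:
```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_id = "localmodels/Wizard-Vicuna-7B-Uncensored-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.compat.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)
inputs = tokenizer("Tell me about alpacas.", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```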
|
AnirbanRC/anirban_qa_model_finetuned
|
AnirbanRC
| 2023-07-13T06:14:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-13T05:38:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: anirban_qa_model_finetuned
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anirban_qa_model_finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.5534 |
| 2.7985 | 2.0 | 500 | 1.8251 |
| 2.7985 | 3.0 | 750 | 1.7210 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/WizardLM-13B-v1.1-GPTQ
|
localmodels
| 2023-07-13T06:11:46Z | 7 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T06:11:46Z |
---
duplicated_from: localmodels/LLM
---
# WizardLM 13B v1.1 GPTQ
From: https://huggingface.co/WizardLM/WizardLM-13B-V1.1
---
| Model | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| wizardlm-13b-v1.1-GPTQ-4bit-128g.no-act.order | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
|
ssunny/distilbert-base-uncased-finetuned-squad
|
ssunny
| 2023-07-13T06:05:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-27T08:15:03Z |
---
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7932 | 1.0 | 39822 | 3.0591 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mingkuan/longchat-7b-qlora-customer-support
|
mingkuan
| 2023-07-13T06:03:17Z | 0 | 6 | null |
[
"text-generation",
"dataset:bitext/customer-support-intent-dataset",
"arxiv:2305.14314",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-07-13T01:23:17Z |
---
inference: true
license: apache-2.0
datasets:
- bitext/customer-support-intent-dataset
pipeline_tag: text-generation
---
# longchat-7b-qlora-customer-support Model Card
This repo contains the 4-bit LoRA (low-rank) adapter weights for the [longchat-7b-16k model](https://huggingface.co/lmsys/longchat-7b-16k), fine-tuned on top of [Bitext's customer support domain dataset](https://huggingface.co/datasets/bitext/customer-support-intent-dataset).
The Supervised Fine-Tuning (SFT) method is based on this [qlora paper](https://arxiv.org/abs/2305.14314) using 🤗 peft adapters, transformers, and bitsandbytes.
## Model details
**Model type:**
longchat-7b-qlora-customer-support is a 4-bit LoRA (low-rank) adapter supervised fine-tuned on top of the [longchat-7b-16k model](https://huggingface.co/lmsys/longchat-7b-16k) with [Bitext's customer support domain dataset](https://huggingface.co/datasets/bitext/customer-support-intent-dataset).
It's a decoder-only causal language model.
**Language:**
English
**License:**
apache-2.0 inherited from [Base Model](https://huggingface.co/lmsys/longchat-7b-16k) and the [dataset](https://huggingface.co/datasets/bitext/customer-support-intent-dataset).
**Base Model:**
lmsys/longchat-7b-16k
**Dataset:**
bitext/customer-support-intent-dataset
**GPU Memory Consumption:**
~6 GB of GPU memory in 4-bit mode with the base and QLoRA adapter models fully loaded
## Install dependency packages
```shell
pip install -r requirements.txt
```
Per the [base model instructions](https://huggingface.co/lmsys/longchat-7b-16k), the [llama_condense_monkey_patch.py file](https://github.com/lm-sys/FastChat/blob/main/fastchat/model/llama_condense_monkey_patch.py) is needed to load the base model properly. This file is already included in this repo.
## Load the model in 4-bit mode
```ipython
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from llama_condense_monkey_patch import replace_llama_with_condense
from peft import PeftConfig
from peft import PeftModel
import torch
## config device params & load model
peft_model_id = "mingkuan/longchat-7b-qlora-customer-support"
base_model_id = "lmsys/longchat-7b-16k"
config = AutoConfig.from_pretrained(base_model_id)
replace_llama_with_condense(config.rope_condense_ratio)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, use_fast=False)
kwargs = {"torch_dtype": torch.float16}
kwargs["device_map"] = "auto"
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    return_dict=True,
    trust_remote_code=True,
    quantization_config=nf4_config,
    load_in_4bit=True,
    **kwargs
)
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference the model
```ipython
def getLLMResponse(prompt):
    device = "cuda"
    input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
    output = model.generate(inputs=input_ids, temperature=0.5, max_new_tokens=256)
    promptLen = len(prompt)
    response = tokenizer.decode(output[0], skip_special_tokens=True)[promptLen:]  # omit the user input part
    return response
query = 'help me to setup my new shipping address.'
# generate_prompt (not shown in this card) is assumed to wrap the query in the instruction template used during fine-tuning
response = getLLMResponse(generate_prompt(query))
print(f'\nUserInput:{query}\n\nLLM:\n{response}\n\n')
```
Inference Output:
```shell
{
"category": "SHIPPING",
"intent": "setup_new_shipping_address",
"answer": "Sure, I can help you with that. Can you please provide me your full name, current shipping address, and the new shipping address you would like to set up?"
}
```
|
localmodels/WizardLM-30B-v1.0-GPTQ
|
localmodels
| 2023-07-13T06:01:08Z | 5 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T00:29:47Z |
# WizardLM 30B v1.0 GPTQ
From: https://huggingface.co/WizardLM/WizardLM-30B-V1.0
---
## Model
* wizardlm-30b-1.0-4bit.safetensors
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Parameters: Groupsize = None. --act-order.
|
traintogpb/mt5-large-kor-qa-generation-finetuned
|
traintogpb
| 2023-07-13T05:57:05Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ko",
"dataset:squad_kor_v1",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-13T04:33:07Z |
---
datasets:
- squad_kor_v1
- klue
language:
- ko
metrics:
- bleu
---
|
NasimB/gpt2-concat-all-text-processign-rarity-all-iorder-est-5p5k
|
NasimB
| 2023-07-13T05:53:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T04:18:13Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-text-processign-rarity-all-iorder-est-5p5k
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-text-processign-rarity-all-iorder-est-5p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7435 | 0.32 | 500 | 5.6693 |
| 5.3983 | 0.63 | 1000 | 5.2259 |
| 5.0552 | 0.95 | 1500 | 4.9848 |
| 4.7718 | 1.27 | 2000 | 4.8394 |
| 4.6329 | 1.58 | 2500 | 4.7145 |
| 4.5322 | 1.9 | 3000 | 4.6174 |
| 4.3187 | 2.22 | 3500 | 4.5659 |
| 4.2361 | 2.53 | 4000 | 4.4971 |
| 4.1996 | 2.85 | 4500 | 4.4287 |
| 4.0309 | 3.17 | 5000 | 4.4140 |
| 3.9128 | 3.48 | 5500 | 4.3761 |
| 3.8993 | 3.8 | 6000 | 4.3344 |
| 3.7784 | 4.12 | 6500 | 4.3363 |
| 3.619 | 4.43 | 7000 | 4.3222 |
| 3.6107 | 4.75 | 7500 | 4.3063 |
| 3.5596 | 5.07 | 8000 | 4.3030 |
| 3.4209 | 5.38 | 8500 | 4.3070 |
| 3.4095 | 5.7 | 9000 | 4.3053 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
YanJiangJerry/SA-tweet-roberta-large-e4-w1-1.5-b16-m4
|
YanJiangJerry
| 2023-07-13T05:42:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T05:19:19Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-tweet-roberta-large-e4-w1-1.5-b16-m4
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-tweet-roberta-large-e4-w1-1.5-b16-m4
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3545
- Accuracy: 0.945
- F1: 0.9511
- Precision: 0.9537
- Recall: 0.9486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.1933 | 0.92 | 0.9290 | 0.9306 | 0.9273 |
| 0.2508 | 2.0 | 570 | 0.2097 | 0.933 | 0.9411 | 0.9337 | 0.9486 |
| 0.2508 | 3.0 | 855 | 0.2958 | 0.937 | 0.9450 | 0.9312 | 0.9592 |
| 0.0947 | 4.0 | 1140 | 0.3545 | 0.945 | 0.9511 | 0.9537 | 0.9486 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ajaydvrj/my-qa-model
|
ajaydvrj
| 2023-07-13T05:31:47Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T05:25:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-qa-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-qa-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4373
- Train Accuracy: 0.1538
- Epoch: 4
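A minimal loading sketch (not part of the original card; given the low reported train accuracy, predictions may not be meaningful):
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "ajaydvrj/my-qa-model"  # repo id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="tf")
logits = model(**inputs).logits  # class meanings are not documented in this card
```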
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 2.6280 | 0.0769 | 0 |
| 2.5014 | 0.1538 | 1 |
| 2.5604 | 0.2308 | 2 |
| 2.5289 | 0.0769 | 3 |
| 2.4373 | 0.1538 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/Guanaco-33B-GPTQ
|
localmodels
| 2023-07-13T05:28:12Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2305.14314",
"arxiv:2302.13971",
"arxiv:2304.07327",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T05:28:12Z |
---
duplicated_from: localmodels/LLM
---
# Guanaco 33B GPTQ
From: https://huggingface.co/timdettmers/guanaco-33b-merged
---
## Model
* Guanaco-33B-GPTQ-4bit.act-order.safetensors
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ (a loading sketch follows below)
* Parameters: Groupsize = None. --act-order.
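A minimal AutoGPTQ loading sketch (an assumption based on the compatibility note above; the basename matches the safetensors file listed, and the exact API may vary across auto-gptq versions):
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "localmodels/Guanaco-33B-GPTQ"  # repo id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="Guanaco-33B-GPTQ-4bit.act-order",  # from the file listed above
    use_safetensors=True,
    device="cuda:0",
)
```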
---
# Guanaco Models Based on LLaMA
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
⚠️ Guanaco is a model purely intended for research purposes and could produce problematic outputs.
## Why use Guanaco?
- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
Guanaco adapter weights are available under the Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights.
Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.
## Usage
Here is an example of how you would load Guanaco 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
f"A chat between a curious human and an artificial intelligence assistant."
f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.
**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English and based on qualitative analysis we observed degradation in performance in other languages.
Next, we describe Training and Evaluation details.
### Training
Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset.
All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models.
For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer.
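As a hedged illustration (not from the original card), these settings roughly correspond to the following peft/bitsandbytes configuration; the learning rate and dropout shown are the 7B/13B values from the table below:
```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # NormalFloat4 base weights
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,   # BFloat16 compute datatype
)
lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1,   # dropout 0.05 for the 33B/65B models
    bias="none", task_type="CAUSAL_LM",
)
training_args = TrainingArguments(
    output_dir="guanaco-qlora",              # output path is illustrative
    optim="paged_adamw_32bit",               # paged AdamW optimizer
    learning_rate=2e-4,                      # 1e-4 for 33B/65B
    lr_scheduler_type="constant",
    max_grad_norm=0.3,
    adam_beta2=0.999,
    per_device_train_batch_size=16,
    max_steps=1875,
)
```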
### Training hyperparameters
Size| Dataset | Batch Size | Learning Rate | Max Steps | Sequence length
---|---|---|---|---|---
7B | OASST1 | 16 | 2e-4 | 1875 | 512
13B | OASST1 | 16 | 2e-4 | 1875 | 512
33B | OASST1 | 16 | 1e-4 | 1875 | 512
65B | OASST1 | 16 | 1e-4 | 1875 | 512
### Evaluation
We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.
In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems, for GPT-4 we evaluate with both orders.
Benchmark | Vicuna | | Vicuna | | OpenAssistant | | -
-----------|----|-----|--------|---|---------------|---|---
Prompts | 80 | | 80 | | 953 | |
Judge | Human | | GPT-4 | | GPT-4 | |
Model | Elo | Rank | Elo | Rank | Elo | Rank | **Median Rank**
GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1
Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2
Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4
ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5
Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5
Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6
Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7
Bard | 909 | 8 | 902 | 7 | - | - | 8
We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.
| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|---------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion | 79.0 | 73.3 | 68.6 | **38.7** |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation | 81.0 | 76.2 | 78.6 | **59.1** |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
HoaAn2003/ppo-LunarLander-v2
|
HoaAn2003
| 2023-07-13T05:06:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T05:06:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.13 +/- 20.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the zip filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# filename assumed; check the repo's file listing
checkpoint = load_from_hub("HoaAn2003/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
vivek22/vivek
|
vivek22
| 2023-07-13T05:05:38Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"en",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-12T12:30:41Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
language:
- en
pipeline_tag: text-to-image
---
# LoRA text2image fine-tuning - vivek22/vivek
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the vivek22/randm dataset.
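A minimal usage sketch (not part of the original card; the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("vivek22/vivek")  # load the LoRA attention weights
image = pipe("a photo of a landscape", num_inference_steps=30).images[0]  # illustrative prompt
```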
|
vertxlabs/controlnet_qrcode-control_v11p_v1
|
vertxlabs
| 2023-07-13T05:04:14Z | 13 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"controlnet",
"image-to-image",
"en",
"license:openrail++",
"endpoints_compatible",
"region:us"
] |
image-to-image
| 2023-07-13T03:45:24Z |
---
tags:
- stable-diffusion
- controlnet
- image-to-image
license: openrail++
language:
- en
pipeline_tag: image-to-image
---
# QR Code Conditioned ControlNet Models for Stable Diffusion 2.1

## Model Description
This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v2.1.
The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, a 1.5 version model was also trained on the same dataset for those who are using the older version.
## How to use with diffusers
```bash
pip -q install diffusers transformers accelerate torch xformers
```
```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler
from diffusers.utils import load_image
controlnet = ControlNetModel.from_pretrained("DionTimmer/controlnet_qrcode-control_v11p_sd21",
torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1",
controlnet=controlnet,
safety_checker=None,
torch_dtype=torch.float16
)
pipe.enable_xformers_memory_efficient_attention()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
def resize_for_condition_image(input_image: Image, resolution: int):
input_image = input_image.convert("RGB")
W, H = input_image.size
k = float(resolution) / min(H, W)
H *= k
W *= k
H = int(round(H / 64.0)) * 64
W = int(round(W / 64.0)) * 64
img = input_image.resize((W, H), resample=Image.LANCZOS)
return img
# play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image
# qr code image
source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png")
# initial image, anything
init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg")
condition_image = resize_for_condition_image(source_image, 768)
init_image = resize_for_condition_image(init_image, 768)
generator = torch.manual_seed(123121231)
image = pipe(prompt="a bilboard in NYC with a qrcode",
negative_prompt="ugly, disfigured, low quality, blurry, nsfw",
image=init_image,
control_image=condition_image,
width=768,
height=768,
guidance_scale=20,
controlnet_conditioning_scale=1.5,
generator=generator,
strength=0.9,
num_inference_steps=150,
)
image.images[0]
```
## Performance and Limitations
These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape. However, be cautious as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).**
To balance between style and shape, a gentle fine-tuning of the control weight might be required based on the individual input and the desired output, as well as the correct prompt. Some prompts do not work until you increase the weight by a lot. The process of finding the right balance between these factors is part art and part science. For the best results, it is recommended to generate your artwork at a resolution of 768. This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork.
## Installation
The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other controlnet models are installed, which varies per application.
For usage in auto1111 they can be placed in the webui/models/ControlNet folder. They can be loaded using the controlnet webui extension which you can install through the extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your controlnet unit and set your input image as the QR code. Set the model to either the SD2.1 or 1.5 version depending on your base stable diffusion model, or it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail.
Make sure to look up additional info on how to use ControlNet if you get stuck; once you have the webui up and running, it's really easy to install the ControlNet extension as well.
|
Treeizard/tactile_generator
|
Treeizard
| 2023-07-13T04:57:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T04:57:07Z |
---
license: creativeml-openrail-m
---
|
YanJiangJerry/SA-tweet-roberta-large-e4-w1-1.5-b16
|
YanJiangJerry
| 2023-07-13T04:53:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T04:17:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-tweet-roberta-large-e4-w1-1.5-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-tweet-roberta-large-e4-w1-1.5-b16
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6396
- Accuracy: 0.9166
- F1: 0.8872
- Precision: 0.8939
- Recall: 0.8806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2895 | 1.0 | 581 | 0.4026 | 0.9110 | 0.8806 | 0.8806 | 0.8806 |
| 0.1182 | 2.0 | 1162 | 0.6190 | 0.9110 | 0.8754 | 0.9153 | 0.8388 |
| 0.0589 | 3.0 | 1743 | 0.6167 | 0.9155 | 0.8838 | 0.9060 | 0.8627 |
| 0.0211 | 4.0 | 2324 | 0.6396 | 0.9166 | 0.8872 | 0.8939 | 0.8806 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ui-chope/distilbert-base-uncased
|
ui-chope
| 2023-07-13T04:52:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-05T01:45:44Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1298
- Precision: 0.9739
- Recall: 0.9617
- F1: 0.9678
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0218 | 1.0 | 5296 | 0.0828 | 0.9609 | 0.9609 | 0.9609 | 0.9842 |
| 0.0159 | 2.0 | 10592 | 0.1135 | 0.9677 | 0.9602 | 0.9639 | 0.9820 |
| 0.0137 | 3.0 | 15888 | 0.0846 | 0.9631 | 0.9570 | 0.9600 | 0.9831 |
| 0.0074 | 4.0 | 21184 | 0.1179 | 0.9621 | 0.9523 | 0.9572 | 0.9804 |
| 0.0058 | 5.0 | 26480 | 0.1080 | 0.9763 | 0.9664 | 0.9713 | 0.9857 |
| 0.0056 | 6.0 | 31776 | 0.1273 | 0.9685 | 0.9594 | 0.9639 | 0.9828 |
| 0.0055 | 7.0 | 37072 | 0.1451 | 0.9637 | 0.9531 | 0.9584 | 0.9800 |
| 0.0035 | 8.0 | 42368 | 0.1345 | 0.9707 | 0.9563 | 0.9634 | 0.9805 |
| 0.0027 | 9.0 | 47664 | 0.1242 | 0.9739 | 0.9633 | 0.9686 | 0.9852 |
| 0.0018 | 10.0 | 52960 | 0.1232 | 0.9739 | 0.9633 | 0.9686 | 0.9844 |
| 0.0017 | 11.0 | 58256 | 0.1298 | 0.9739 | 0.9617 | 0.9678 | 0.9837 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
insomeniaT/falcon-7b-uae-qapairs-67
|
insomeniaT
| 2023-07-13T04:40:37Z | 10 | 1 |
peft
|
[
"peft",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-07-07T19:21:06Z |
---
license: apache-2.0
language:
- en
library_name: peft
pipeline_tag: text-generation
inference: false
---
# PEFT Model Fine-tuned on UAE QA Pairs
This repository contains a fine-tuned model based on the PEFT framework for question answering tasks. The model has been trained on a dataset of question and answer pairs related to the UAE.
## Installation
Before using the model, make sure to install the necessary packages:
```sh
pip install transformers
pip install torch torchvision
pip install peft
```
## Usage
The model can be used for generating responses to prompts. Here is an example:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
peft_model_id = "insomeniaT/falcon-7b-uae-qapairs-67"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, trust_remote_code=True)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
tokenizer.pad_token = tokenizer.eos_token
text = "### Human: What is the minimum requirement for the UAE's GCC residency?? ### Assistant: "
device = "cuda:0"
inputs = tokenizer(text, return_tensors="pt")
inputs.to(device)
model.to(device)
outputs = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=300, pad_token_id=tokenizer.eos_token_id)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
|
hoanghoavienvo/xlnet-large-cased-stage-2-ver1
|
hoanghoavienvo
| 2023-07-13T04:37:38Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T03:34:49Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlnet-large-cased-stage-2-ver1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-large-cased-stage-2-ver1
This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4128
- Accuracy: 0.8317
- F1: 0.9022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.4226 | 0.85 | 0.9189 |
| 0.4839 | 2.0 | 938 | 0.3964 | 0.845 | 0.9141 |
| 0.4284 | 3.0 | 1407 | 0.4128 | 0.8317 | 0.9022 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
glitchyordis/poca-SoccerTwos
|
glitchyordis
| 2023-07-13T04:35:57Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-13T04:35:46Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: glitchyordis/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
localmodels/Vicuna-33B-v1.3-GPTQ
|
localmodels
| 2023-07-13T04:30:40Z | 7 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T04:30:40Z |
---
duplicated_from: localmodels/LLM
---
# Vicuna 33B v1.3 GPTQ
From LMSYS: https://huggingface.co/lmsys/vicuna-33b-v1.3
---
| Model | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| vicuna-33b-GPTQ-4bit--1g.act.order | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
---
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
GerbilLab/IPythia-70m
|
GerbilLab
| 2023-07-13T04:28:25Z | 157 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"alpaca",
"instruction",
"pythia",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-06T02:25:45Z |
---
tags:
- alpaca
- instruction
- pythia
---
All IPythia models were trained on an internal GerbilLab high-quality instruction dataset of ~75k instructions for 3 epochs. Prompt format:
```
Instruction: [instruction goes here]
Input: [input goes here]
Output: [output will be generated here]
or
Instruction: [instruction goes here]
Output: [output will be generated here]
```
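A minimal generation sketch using this prompt format (not part of the original card; the instruction text is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GerbilLab/IPythia-70m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# follow the documented prompt format; the instruction itself is illustrative
prompt = "Instruction: Write a one-sentence greeting.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```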
|
kazuhidet/norurun
|
kazuhidet
| 2023-07-13T04:23:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T04:06:49Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of mascot norurun
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - kazuhidet/norurun
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of mascot norurun using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
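A minimal usage sketch (not part of the original card; the instance prompt is taken from the card metadata):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "kazuhidet/norurun", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of mascot norurun").images[0]  # instance prompt from this card
```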
|
nic1122/roberta-base-finetuned-cluener2020-chinese
|
nic1122
| 2023-07-13T03:50:31Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T03:09:03Z |
Derived from: https://huggingface.co/uer/roberta-base-finetuned-cluener2020-chinese
Adapted to IOB tagging, for use with Elasticsearch ML.
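A minimal usage sketch (not part of the original card; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nic1122/roberta-base-finetuned-cluener2020-chinese",
    aggregation_strategy="simple",
)
print(ner("张三在北京的华为公司工作。"))  # illustrative sentence
```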
|
ernstliang/my_awesome_billsum_model
|
ernstliang
| 2023-07-13T03:50:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T13:02:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1818
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4938
- Rouge1: 0.1818
- Rouge2: 0.0856
- Rougel: 0.1532
- Rougelsum: 0.1532
- Gen Len: 19.0
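A minimal usage sketch (not part of the original card; the "summarize: " prefix is an assumption based on the usual t5-small billsum recipe):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ernstliang/my_awesome_billsum_model")
# "summarize: " prefix assumed; the input text is illustrative
text = "summarize: The people of the State of California do enact as follows: ..."
print(summarizer(text)[0]["summary_text"])
```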
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 124 | 2.6861 | 0.131 | 0.0448 | 0.1097 | 0.1098 | 19.0 |
| No log | 2.0 | 248 | 2.5567 | 0.1498 | 0.0578 | 0.124 | 0.1239 | 19.0 |
| No log | 3.0 | 372 | 2.5080 | 0.1728 | 0.0771 | 0.1466 | 0.1465 | 19.0 |
| No log | 4.0 | 496 | 2.4938 | 0.1818 | 0.0856 | 0.1532 | 0.1532 | 19.0 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
rdyzakya/IndoLEGO-ABSA
|
rdyzakya
| 2023-07-13T03:43:17Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T13:28:26Z |
---
language:
- id
metrics:
- f1
pipeline_tag: text2text-generation
---
|