| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-03 00:36:49 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 535 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-03 00:36:49 |
| card | string | lengths 11 to 1.01M |
peyschen/q-FrozenLake-v1-4x4-noSlippery
|
peyschen
| 2023-11-23T12:02:12Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-23T12:02:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="peyschen/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
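The snippet above assumes `load_from_hub` is already defined; in the Deep RL course notebooks it is usually a small helper that downloads and unpickles the saved Q-table dictionary. A minimal sketch, assuming the pickle holds a dict with keys such as `env_id` and `qtable`:
```python
import pickle

import gymnasium as gym  # or `import gym`, depending on the course version
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="peyschen/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)
```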
|
twn39/RealVisXL_V2.0
|
twn39
| 2023-11-23T11:50:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"text-to-image",
"en",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-11-23T11:43:23Z |
---
license: openrail++
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
link: https://civitai.com/models/139562/realvisxl-v20
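Given the `diffusers` / `StableDiffusionXLPipeline` tags on this repository, a minimal loading sketch (assuming the repo follows the standard diffusers SDXL layout) could look like:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL pipeline from this repository and generate one image.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "twn39/RealVisXL_V2.0", torch_dtype=torch.float16
)
pipe.to("cuda")
image = pipe("a photorealistic portrait, soft natural light").images[0]
```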
|
AfnanTS/bert-base-arabert-finetuned-Arabic-DBpedia
|
AfnanTS
| 2023-11-23T11:50:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-11-22T21:40:26Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-arabert-finetuned-Arabic-DBpedia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-arabert-finetuned-Arabic-DBpedia
This model is a fine-tuned version of [aubmindlab/bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 7.9491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
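A rough `transformers.TrainingArguments` equivalent of the list above (the output directory is a placeholder; the listed Adam betas and epsilon are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-arabert-finetuned-Arabic-DBpedia",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```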
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.3884 | 2.69 | 500 | 8.0043 |
| 7.1297 | 5.38 | 1000 | 7.9452 |
| 6.8379 | 8.06 | 1500 | 7.8969 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
Systran/faster-whisper-large-v2
|
Systran
| 2023-11-23T11:44:31Z | 576,960 | 31 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T09:50:45Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v2 model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large-v2 --output_dir faster-whisper-large-v2 \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
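For example, a minimal sketch of overriding the stored FP16 type at load time with faster-whisper (the `device` and `compute_type` arguments are passed through to CTranslate2):
```python
from faster_whisper import WhisperModel

# Run the FP16 checkpoint with INT8 computation on CPU;
# use device="cuda", compute_type="float16" to keep FP16 on GPU.
model = WhisperModel("large-v2", device="cpu", compute_type="int8")
```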
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v2).**
|
tuanio/w2v2_ablation_with_ling_head-0drop-load-best-per-best_on_tp0.025_tl10_fp0.001_fl16
|
tuanio
| 2023-11-23T11:35:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-23T10:30:19Z |
---
license: cc-by-nc-4.0
base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v2_ablation_with_ling_head-0drop-load-best-per-best_on_tp0.025_tl10_fp0.001_fl16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2_ablation_with_ling_head-0drop-load-best-per-best_on_tp0.025_tl10_fp0.001_fl16
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4603
- Wer: 0.1849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 114.032 | 0.94 | 100 | 98.5210 | 21.4967 |
| 63.4912 | 1.89 | 200 | 9.8990 | 1.0 |
| 5.9426 | 2.83 | 300 | 5.2909 | 1.0 |
| 5.0183 | 3.77 | 400 | 5.2482 | 1.0 |
| 4.6782 | 4.72 | 500 | 5.5314 | 1.0 |
| 4.4732 | 5.66 | 600 | 5.2250 | 1.0 |
| 4.4059 | 6.6 | 700 | 5.1483 | 1.0 |
| 4.3368 | 7.55 | 800 | 4.9275 | 1.0 |
| 4.2178 | 8.49 | 900 | 4.8987 | 1.0 |
| 3.913 | 9.43 | 1000 | 3.7007 | 0.8807 |
| 2.7998 | 10.38 | 1100 | 2.1296 | 0.5331 |
| 1.8405 | 11.32 | 1200 | 1.4873 | 0.4636 |
| 1.2987 | 12.26 | 1300 | 1.0532 | 0.3338 |
| 1.0387 | 13.21 | 1400 | 0.8759 | 0.3348 |
| 0.851 | 14.15 | 1500 | 0.7743 | 0.3604 |
| 0.7128 | 15.09 | 1600 | 0.6523 | 0.2796 |
| 0.605 | 16.04 | 1700 | 0.6352 | 0.2995 |
| 0.5315 | 16.98 | 1800 | 0.5920 | 0.2603 |
| 0.4845 | 17.92 | 1900 | 0.5476 | 0.2503 |
| 0.4257 | 18.87 | 2000 | 0.5398 | 0.2285 |
| 0.4124 | 19.81 | 2100 | 0.5378 | 0.2764 |
| 0.3595 | 20.75 | 2200 | 0.5109 | 0.2147 |
| 0.3958 | 21.7 | 2300 | 0.4825 | 0.2342 |
| 0.3546 | 22.64 | 2400 | 0.4649 | 0.2251 |
| 0.304 | 23.58 | 2500 | 0.4701 | 0.2115 |
| 0.291 | 24.53 | 2600 | 0.4515 | 0.2180 |
| 0.2946 | 25.47 | 2700 | 0.4537 | 0.2012 |
| 0.2588 | 26.42 | 2800 | 0.4423 | 0.1939 |
| 0.2625 | 27.36 | 2900 | 0.4493 | 0.1924 |
| 0.2385 | 28.3 | 3000 | 0.4364 | 0.1724 |
| 0.2327 | 29.25 | 3100 | 0.4382 | 0.1967 |
| 0.26 | 30.19 | 3200 | 0.4454 | 0.1823 |
| 0.2151 | 31.13 | 3300 | 0.4424 | 0.1987 |
| 0.2213 | 32.08 | 3400 | 0.4377 | 0.2085 |
| 0.2226 | 33.02 | 3500 | 0.4375 | 0.2095 |
| 0.208 | 33.96 | 3600 | 0.4358 | 0.1994 |
| 0.2061 | 34.91 | 3700 | 0.4308 | 0.1919 |
| 0.1929 | 35.85 | 3800 | 0.4298 | 0.1905 |
| 0.1786 | 36.79 | 3900 | 0.4139 | 0.1974 |
| 0.172 | 37.74 | 4000 | 0.4183 | 0.1823 |
| 0.1769 | 38.68 | 4100 | 0.4252 | 0.1890 |
| 0.1813 | 39.62 | 4200 | 0.4360 | 0.1880 |
| 0.1676 | 40.57 | 4300 | 0.4325 | 0.1770 |
| 0.1581 | 41.51 | 4400 | 0.4386 | 0.1755 |
| 0.17 | 42.45 | 4500 | 0.4374 | 0.1979 |
| 0.1778 | 43.4 | 4600 | 0.4360 | 0.1726 |
| 0.162 | 44.34 | 4700 | 0.4424 | 0.1822 |
| 0.1605 | 45.28 | 4800 | 0.4500 | 0.2065 |
| 0.1472 | 46.23 | 4900 | 0.4555 | 0.2102 |
| 0.1428 | 47.17 | 5000 | 0.4358 | 0.1733 |
| 0.1393 | 48.11 | 5100 | 0.4406 | 0.1904 |
| 0.1444 | 49.06 | 5200 | 0.4481 | 0.2030 |
| 0.1401 | 50.0 | 5300 | 0.4507 | 0.1952 |
| 0.1311 | 50.94 | 5400 | 0.4353 | 0.1857 |
| 0.1337 | 51.89 | 5500 | 0.4439 | 0.2018 |
| 0.1289 | 52.83 | 5600 | 0.4461 | 0.1946 |
| 0.1234 | 53.77 | 5700 | 0.4395 | 0.2048 |
| 0.1301 | 54.72 | 5800 | 0.4590 | 0.2114 |
| 0.1378 | 55.66 | 5900 | 0.4548 | 0.2144 |
| 0.1251 | 56.6 | 6000 | 0.4477 | 0.1877 |
| 0.1224 | 57.55 | 6100 | 0.4478 | 0.1933 |
| 0.1233 | 58.49 | 6200 | 0.4467 | 0.1841 |
| 0.1237 | 59.43 | 6300 | 0.4399 | 0.1834 |
| 0.1176 | 60.38 | 6400 | 0.4471 | 0.2097 |
| 0.1117 | 61.32 | 6500 | 0.4587 | 0.1970 |
| 0.111 | 62.26 | 6600 | 0.4707 | 0.2102 |
| 0.1239 | 63.21 | 6700 | 0.4518 | 0.1923 |
| 0.1152 | 64.15 | 6800 | 0.4503 | 0.1967 |
| 0.1121 | 65.09 | 6900 | 0.4467 | 0.1944 |
| 0.1175 | 66.04 | 7000 | 0.4486 | 0.1914 |
| 0.1242 | 66.98 | 7100 | 0.4537 | 0.1973 |
| 0.111 | 67.92 | 7200 | 0.4587 | 0.2008 |
| 0.1063 | 68.87 | 7300 | 0.4551 | 0.1929 |
| 0.1133 | 69.81 | 7400 | 0.4547 | 0.1929 |
| 0.1098 | 70.75 | 7500 | 0.4512 | 0.1982 |
| 0.1123 | 71.7 | 7600 | 0.4578 | 0.1955 |
| 0.1144 | 72.64 | 7700 | 0.4533 | 0.1830 |
| 0.1113 | 73.58 | 7800 | 0.4545 | 0.1788 |
| 0.0968 | 74.53 | 7900 | 0.4584 | 0.1725 |
| 0.0951 | 75.47 | 8000 | 0.4646 | 0.1859 |
| 0.0982 | 76.42 | 8100 | 0.4557 | 0.1813 |
| 0.0959 | 77.36 | 8200 | 0.4566 | 0.1742 |
| 0.093 | 78.3 | 8300 | 0.4604 | 0.1880 |
| 0.103 | 79.25 | 8400 | 0.4614 | 0.1908 |
| 0.1101 | 80.19 | 8500 | 0.4586 | 0.1805 |
| 0.1046 | 81.13 | 8600 | 0.4590 | 0.1825 |
| 0.0979 | 82.08 | 8700 | 0.4555 | 0.1762 |
| 0.103 | 83.02 | 8800 | 0.4573 | 0.1780 |
| 0.0958 | 83.96 | 8900 | 0.4575 | 0.1803 |
| 0.0948 | 84.91 | 9000 | 0.4581 | 0.1814 |
| 0.1003 | 85.85 | 9100 | 0.4600 | 0.1830 |
| 0.1066 | 86.79 | 9200 | 0.4609 | 0.1870 |
| 0.0887 | 87.74 | 9300 | 0.4615 | 0.1834 |
| 0.0936 | 88.68 | 9400 | 0.4610 | 0.1819 |
| 0.0892 | 89.62 | 9500 | 0.4595 | 0.1801 |
| 0.1039 | 90.57 | 9600 | 0.4612 | 0.1837 |
| 0.097 | 91.51 | 9700 | 0.4610 | 0.1834 |
| 0.0969 | 92.45 | 9800 | 0.4605 | 0.1844 |
| 0.0946 | 93.4 | 9900 | 0.4596 | 0.1843 |
| 0.0947 | 94.34 | 10000 | 0.4605 | 0.1850 |
| 0.095 | 95.28 | 10100 | 0.4616 | 0.1861 |
| 0.0856 | 96.23 | 10200 | 0.4611 | 0.1853 |
| 0.0983 | 97.17 | 10300 | 0.4603 | 0.1850 |
| 0.0947 | 98.11 | 10400 | 0.4605 | 0.1853 |
| 0.0948 | 99.06 | 10500 | 0.4604 | 0.1853 |
| 0.0917 | 100.0 | 10600 | 0.4603 | 0.1849 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
|
Systran/faster-whisper-medium.en
|
Systran
| 2023-11-23T11:32:09Z | 23,309 | 2 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T09:52:01Z |
---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper medium.en model for CTranslate2
This repository contains the conversion of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("medium.en")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-medium.en --output_dir faster-whisper-medium.en \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-medium.en).**
|
922-CA/sd-ddlc-tests
|
922-CA
| 2023-11-23T11:20:28Z | 0 | 0 | null |
[
"monika",
"yuri",
"sayori",
"natsuki",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-14T09:08:11Z |
---
license: creativeml-openrail-m
tags:
- monika
- yuri
- sayori
- natsuki
---
SD 1.5 DreamBooth models fine-tuned on Anything v3. The checkpoint files were trained on (number of steps divided by 90) images scraped from boorus, with the text encoder trained for roughly (number of images times 12) steps. Early releases focused on .ckpt files before moving on to safetensors and LoRAs (combining LoRAs with checkpoints, and even textual inversions, can bring out the most accurate results).
The SD 1.5 LoRAs were also trained on Anything v3; the SDXL LoRAs were trained on animagine-xl.
Textual inversion embedding versions can be found [here](https://huggingface.co/922-CA/gfl-ddlc-TI-tests) (trained off the same or similar datasets).
# yr1-2430 (12/05/2022) [SD 1.5 DB]
* Keyword: yrg
* First attempt
# yr5 (12/08/2022) [SD 1.5 DB]
* Keyword: yrg
* Second version
# sa1-1800 (12/08/2022) [SD 1.5 DB]
* Keyword: syr
* First attempt
# sa2 (12/08/2022) [SD 1.5 DB]
* Keyword: syr
* Second attempt
# na1-1800 (12/06/2022) [SD 1.5 DB]
* Keyword: ntsk
* First attempt
# na2-2160 (12/07/2022) [SD 1.5 DB]
* Keyword: ntsk
* Second attempt
# na3a-1800 (12/07/2022) [SD 1.5 DB]
* Keyword: ntsk
* Third attempt
# ml6-10 (~05/2023) [SD 1.5 lora]
* Keyword: monika \(doki doki literature club\)
* Tenth version
* [Quick raw previews](https://huggingface.co/922-CA/sd-ddlc-tests/tree/main/ml6-10-prev) (Silicon29 and BreakDomain Anime)
# mxl1_11132023a_test [SDXL lora]
* Keyword: monika \(doki doki literature club\)
* One of the first attempts; works best with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI; strength at ~1.5 seems to give the best results (may be underfitted)
# mxl1_11162023_test-000150 [SDXL lora]
* Keyword: monika \(doki doki literature club\)
* Fine-tuned for longer (200 epochs vs. 50 for the previous version); tested with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI; 150 epochs seems to give the best results
# mxl1_11162023_test [SDXL lora]
* Keyword: monika \(doki doki literature club\)
* Fine-tuned for longer (200 epochs vs. 50 for the previous version); tested with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI; at 200 epochs it might be a little overfit (more epochs can be accessed [here](https://huggingface.co/922CA/mxl1_11162023_test))
* [Quick raw previews](https://huggingface.co/922-CA/sd-ddlc-tests/tree/main/mxl1_11162023_test-prev)
# nxl1_11142023_test [SDXL lora]
* Keyword: natsuki \(doki doki literature club\)
* One of the first attempts; works best with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI; strength at ~1.5 seems to give the best results (may be underfitted)
# nxl1_11162023_test-000150-prev [SDXL lora]
* Keyword: natsuki \(doki doki literature club\)
* Fine-tuned for longer (200 epochs vs. 50 for the previous version); tested with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI; 150 epochs seems to give the best results (more epochs can be accessed [here](https://huggingface.co/922CA/nxl1_11162023_test))
* [Quick raw previews](https://huggingface.co/922-CA/sd-ddlc-tests/tree/main/nxl1_11162023_test-000150-prev)
# sxl1_11142023_test [SDXL lora]
* Keyword: sayori \(doki doki literature club\)
* One of the first attempts; works best with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI; strength at ~1.5 seems to give the best results (may be underfitted)
# sxl1_11172023_test-000130-prev [SDXL lora]
* Keyword: sayori \(doki doki literature club\)
* Fine-tuned for longer (cut off at 130 epochs vs. 50 for the previous version); tested with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI; at 130 epochs it already gave workable results (more epochs can be accessed [here](https://huggingface.co/922CA/sxl1_11172023_test))
* [Quick raw previews](https://huggingface.co/922-CA/sd-ddlc-tests/tree/main/sxl1_11172023_test-000130-prev)
# yxl1_11142023_test [SDXL lora]
* Keyword: yuri \(doki doki literature club\)
* One of the first attempts; works best with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI; strength at ~1.5 seems to give the best results (may be underfitted)
# yxl1_11172023_test [SDXL lora]
* Keyword: yuri \(doki doki literature club\)
* Fine-tuned for longer (200 epochs vs. 50 for the previous version); tested with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI (more epochs can be accessed [here](https://huggingface.co/922CA/yxl1_11172023_test))
* [Quick raw previews](https://huggingface.co/922-CA/sd-ddlc-tests/tree/main/yxl1_11172023_test-prev)
MTBA (previews, plus any future LoRAs or models trained off better bases; hopefully more SDXL too)
|
kt220/my_review_model
|
kt220
| 2023-11-23T11:19:09Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:tohoku-nlp/bert-base-japanese-whole-word-masking",
"base_model:finetune:tohoku-nlp/bert-base-japanese-whole-word-masking",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-16T09:11:18Z |
---
license: cc-by-sa-4.0
base_model: cl-tohoku/bert-base-japanese-whole-word-masking
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_review_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_review_model
This model is a fine-tuned version of [cl-tohoku/bert-base-japanese-whole-word-masking](https://huggingface.co/cl-tohoku/bert-base-japanese-whole-word-masking) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4346
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2084 | 1.0 | 678 | 0.2145 | 0.9363 |
| 0.1852 | 2.0 | 1356 | 0.2198 | 0.9384 |
| 0.0955 | 3.0 | 2034 | 0.2944 | 0.9362 |
| 0.0469 | 4.0 | 2712 | 0.3892 | 0.9359 |
| 0.0318 | 5.0 | 3390 | 0.4346 | 0.9332 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
xz-huggingface-0/zephyr-7b-sft-lora-20231123-32
|
xz-huggingface-0
| 2023-11-23T11:13:35Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-11-23T08:09:02Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-sft-lora-20231123-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-lora-20231123-32
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 128
- total_train_batch_size: 4096
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0705 | 0.67 | 34 | 1.0663 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Yangtze-flowing/phoneme2txt_v0
|
Yangtze-flowing
| 2023-11-23T11:05:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-18T13:19:46Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: phoneme2txt_v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phoneme2txt_v0
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5025
- Bleu: 0.1925
- Gen Len: 16.1667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 15 | 3.9690 | 0.0897 | 19.0 |
| No log | 2.0 | 30 | 3.8383 | 0.0894 | 19.0 |
| No log | 3.0 | 45 | 3.7403 | 0.0894 | 19.0 |
| No log | 4.0 | 60 | 3.6702 | 0.0 | 18.7667 |
| No log | 5.0 | 75 | 3.6162 | 0.1205 | 18.2667 |
| No log | 6.0 | 90 | 3.5742 | 0.1611 | 17.5833 |
| No log | 7.0 | 105 | 3.5432 | 0.1724 | 17.1 |
| No log | 8.0 | 120 | 3.5210 | 0.1872 | 16.7 |
| No log | 9.0 | 135 | 3.5080 | 0.1899 | 16.3333 |
| No log | 10.0 | 150 | 3.5025 | 0.1925 | 16.1667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Systran/faster-whisper-base.en
|
Systran
| 2023-11-23T11:03:52Z | 265,568 | 3 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T09:54:08Z |
---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper base.en model for CTranslate2
This repository contains the conversion of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("base.en")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-base.en --output_dir faster-whisper-base.en \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-base.en).**
|
Systran/faster-whisper-base
|
Systran
| 2023-11-23T11:02:28Z | 1,081,139 | 11 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T09:52:40Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper base model for CTranslate2
This repository contains the conversion of [openai/whisper-base](https://huggingface.co/openai/whisper-base) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("base")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-base --output_dir faster-whisper-base \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-base).**
|
vijendra11/llama2_test
|
vijendra11
| 2023-11-23T10:59:53Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-11-23T10:59:50Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a rough `BitsAndBytesConfig` equivalent is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
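A rough `transformers.BitsAndBytesConfig` equivalent of the config above, applied to the base model named in the card metadata (only the non-default values are spelled out; this is a sketch, not the published training code):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",
    quantization_config=bnb_config,
    device_map="auto",
)
```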
### Framework versions
- PEFT 0.6.3.dev0
|
Systran/faster-whisper-small
|
Systran
| 2023-11-23T10:57:09Z | 507,934 | 12 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T09:53:51Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper small model for CTranslate2
This repository contains the conversion of [openai/whisper-small](https://huggingface.co/openai/whisper-small) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("small")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-small --output_dir faster-whisper-small \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-small).**
|
Systran/faster-whisper-tiny.en
|
Systran
| 2023-11-23T10:47:07Z | 46,280 | 6 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T09:54:25Z |
---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper tiny.en model for CTranslate2
This repository contains the conversion of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("tiny.en")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-tiny.en --output_dir faster-whisper-tiny.en \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-tiny.en).**
|
Systran/faster-whisper-tiny
|
Systran
| 2023-11-23T10:42:55Z | 426,562 | 9 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T09:53:30Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper tiny model for CTranslate2
This repository contains the conversion of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("tiny")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-tiny --output_dir faster-whisper-tiny \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-tiny).**
|
owanr/Sentiment-google-t5-v1_1-xl-intra-frequency-human-cross-ent
|
owanr
| 2023-11-23T10:41:42Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:google/t5-v1_1-xl",
"base_model:finetune:google/t5-v1_1-xl",
"license:apache-2.0",
"region:us"
] | null | 2023-11-23T04:07:43Z |
---
license: apache-2.0
base_model: google/t5-v1_1-xl
tags:
- generated_from_trainer
model-index:
- name: Sentiment-google-t5-v1_1-xl-intra-frequency-human-cross-ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-google-t5-v1_1-xl-intra-frequency-human-cross-ent
This model is a fine-tuned version of [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2352 | 1.0 | 176 | 7.0547 |
| 6.9879 | 2.0 | 352 | 7.0586 |
| 7.0297 | 3.0 | 528 | nan |
| 0.0 | 4.0 | 704 | nan |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ThuyNT03/CS341_Car-COQE_UniCOQE_
|
ThuyNT03
| 2023-11-23T10:36:40Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-23T10:10:43Z |
---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
model-index:
- name: CS341_Car-COQE_UniCOQE_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS341_Car-COQE_UniCOQE_
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
Jingya/lcm-sdxl-neuronx
|
Jingya
| 2023-11-23T10:36:27Z | 0 | 0 | null |
[
"license:openrail++",
"region:us"
] | null | 2023-11-14T17:44:39Z |
---
license: openrail++
---
[`latent-consistency/lcm-sdxl`](https://huggingface.co/latent-consistency/lcm-sdxl) compiled on an AWS Inf2 instance. ***INF2/TRN1 ONLY***
***How to use***
```python
from optimum.neuron import NeuronStableDiffusionXLPipeline
pipe = NeuronStableDiffusionXLPipeline.from_pretrained("Jingya/lcm-sdxl-neuronx")
num_images_per_prompt = 2
prompt = ["a close-up picture of an old man standing in the rain"] * num_images_per_prompt
images = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=8.0).images
```
If you are using a later neuron compiler version, you can compile the checkpoint yourself with the following lines via [`🤗 optimum-neuron`](https://huggingface.co/docs/optimum-neuron/index) (the compilation takes approximately an hour):
```python
from optimum.neuron import NeuronStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
unet_id = "latent-consistency/lcm-sdxl"
num_images_per_prompt = 1
input_shapes = {"batch_size": 1, "height": 1024, "width": 1024, "num_images_per_prompt": num_images_per_prompt}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
stable_diffusion = NeuronStableDiffusionXLPipeline.from_pretrained(
model_id, unet_id=unet_id, export=True, **compiler_args, **input_shapes
)
save_directory = "lcm_sdxl_neuron/"
stable_diffusion.save_pretrained(save_directory)
# Push to hub
stable_diffusion.push_to_hub(save_directory, repository_id="Jingya/lcm-sdxl-neuronx", use_auth_token=True)
```
Feel free to make a pull request and contribute to this repo. Thanks 🤗!
|
owanr/Sentiment-google-t5-v1_1-xl-inter-frequency-model-cross-ent
|
owanr
| 2023-11-23T10:33:25Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:google/t5-v1_1-xl",
"base_model:finetune:google/t5-v1_1-xl",
"license:apache-2.0",
"region:us"
] | null | 2023-11-23T03:50:08Z |
---
license: apache-2.0
base_model: google/t5-v1_1-xl
tags:
- generated_from_trainer
model-index:
- name: Sentiment-google-t5-v1_1-xl-inter-frequency-model-cross-ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-google-t5-v1_1-xl-inter-frequency-model-cross-ent
This model is a fine-tuned version of [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.8691 | 1.0 | 176 | 4.6406 |
| 4.859 | 2.0 | 352 | 4.6406 |
| 4.6027 | 3.0 | 528 | nan |
| 0.0 | 4.0 | 704 | nan |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
owanr/Sentiment-google-t5-v1_1-xl-intra-frequency-model-cross-ent
|
owanr
| 2023-11-23T10:25:10Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:google/t5-v1_1-xl",
"base_model:finetune:google/t5-v1_1-xl",
"license:apache-2.0",
"region:us"
] | null | 2023-11-23T03:46:48Z |
---
license: apache-2.0
base_model: google/t5-v1_1-xl
tags:
- generated_from_trainer
model-index:
- name: Sentiment-google-t5-v1_1-xl-intra-frequency-model-cross-ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-google-t5-v1_1-xl-intra-frequency-model-cross-ent
This model is a fine-tuned version of [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.3391 | 1.0 | 176 | 5.1797 |
| 5.2012 | 2.0 | 352 | 5.1797 |
| 5.1715 | 3.0 | 528 | nan |
| 0.0 | 4.0 | 704 | nan |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
owanr/SChem5Labels-google-t5-v1_1-xl-inter-frequency-human-cross-ent
|
owanr
| 2023-11-23T10:16:53Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:google/t5-v1_1-xl",
"base_model:finetune:google/t5-v1_1-xl",
"license:apache-2.0",
"region:us"
] | null | 2023-11-23T03:45:02Z |
---
license: apache-2.0
base_model: google/t5-v1_1-xl
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-google-t5-v1_1-xl-inter-frequency-human-cross-ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-google-t5-v1_1-xl-inter-frequency-human-cross-ent
This model is a fine-tuned version of [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.1531 | 1.0 | 99 | 8.0625 |
| 9.5477 | 2.0 | 198 | 8.0703 |
| 9.693 | 3.0 | 297 | 8.0703 |
| 9.8234 | 4.0 | 396 | 8.0625 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
owanr/SChem5Labels-google-t5-v1_1-xl-intra-frequency-human-cross-ent
|
owanr
| 2023-11-23T10:13:06Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:google/t5-v1_1-xl",
"base_model:finetune:google/t5-v1_1-xl",
"license:apache-2.0",
"region:us"
] | null | 2023-11-23T03:42:26Z |
---
license: apache-2.0
base_model: google/t5-v1_1-xl
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-google-t5-v1_1-xl-intra-frequency-human-cross-ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-google-t5-v1_1-xl-intra-frequency-human-cross-ent
This model is a fine-tuned version of [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.1531 | 1.0 | 99 | 8.0703 |
| 9.5492 | 2.0 | 198 | 8.0625 |
| 9.6969 | 3.0 | 297 | 8.0625 |
| 9.825 | 4.0 | 396 | 8.0625 |
| 9.9461 | 5.0 | 495 | 8.0625 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
feynman-integrals-nn/topbox-3layers
|
feynman-integrals-nn
| 2023-11-23T10:11:15Z | 5 | 0 |
transformers
|
[
"transformers",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-10T09:36:14Z |
---
license: cc-by-4.0
---
# topbox-3layers
* [model](https://huggingface.co/feynman-integrals-nn/topbox-3layers)
* [data](https://huggingface.co/datasets/feynman-integrals-nn/topbox/tree/8acb41221c829845af218756baa7985c97927a50)
* [source](https://gitlab.com/feynman-integrals-nn/feynman-integrals-nn/-/tree/main/topbox)
|
mtc/LeoLM-leo-mistral-hessianai-7b-xnli-absinth-qlora-4bit
|
mtc
| 2023-11-23T10:09:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2023-11-23T10:09:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
owanr/SChem5Labels-google-t5-v1_1-xl-inter-frequency-model-cross-ent
|
owanr
| 2023-11-23T10:08:26Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:google/t5-v1_1-xl",
"base_model:finetune:google/t5-v1_1-xl",
"license:apache-2.0",
"region:us"
] | null | 2023-11-23T03:40:41Z |
---
license: apache-2.0
base_model: google/t5-v1_1-xl
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-google-t5-v1_1-xl-inter-frequency-model-cross-ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-google-t5-v1_1-xl-inter-frequency-model-cross-ent
This model is a fine-tuned version of [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.8828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.4711 | 1.0 | 99 | 8.8828 |
| 11.0023 | 2.0 | 198 | 8.8828 |
| 10.7984 | 3.0 | 297 | 8.8828 |
| 11.1695 | 4.0 | 396 | 8.8828 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
bastiansurya77/whisper-large-v2-indo-100steps
|
bastiansurya77
| 2023-11-23T09:58:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | 2023-11-23T09:58:37Z |
---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
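As a stopgap until the card is filled in, here is a minimal sketch of how a PEFT adapter such as this one is typically loaded on top of its base model. It assumes a standard `peft` + `transformers` setup; only the base model and repository ids come from this card's metadata.
```python
# Minimal sketch, assuming a standard peft + transformers setup.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "bastiansurya77/whisper-large-v2-indo-100steps")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
# `model` can then be used for transcription like the base Whisper model.
```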
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
TheBloke/Noromaid-20B-v0.1.1-GPTQ
|
TheBloke
| 2023-11-23T09:56:48Z | 27 | 8 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:NeverSleep/Noromaid-20b-v0.1.1",
"base_model:quantized:NeverSleep/Noromaid-20b-v0.1.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-11-23T08:47:36Z |
---
base_model: NeverSleep/Noromaid-20b-v0.1.1
inference: false
license: cc-by-nc-4.0
model_creator: IkariDev and Undi
model_name: Noromaid 20B v0.1.1
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Noromaid 20B v0.1.1 - GPTQ
- Model creator: [IkariDev and Undi](https://huggingface.co/NeverSleep)
- Original model: [Noromaid 20B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1)
<!-- description start -->
# Description
This repo contains GPTQ model files for [IkariDev and Undi's Noromaid 20B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF)
* [IkariDev and Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [IkariDev and Undi's Noromaid 20B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1).
<!-- licensing end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.52 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.89 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 12.04 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.41 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 20.35 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 9.51 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 20.80 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Noromaid-20B-v0.1.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Noromaid-20B-v0.1.1-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Noromaid-20B-v0.1.1-GPTQ`:
```shell
mkdir Noromaid-20B-v0.1.1-GPTQ
huggingface-cli download TheBloke/Noromaid-20B-v0.1.1-GPTQ --local-dir Noromaid-20B-v0.1.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Noromaid-20B-v0.1.1-GPTQ
huggingface-cli download TheBloke/Noromaid-20B-v0.1.1-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Noromaid-20B-v0.1.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Noromaid-20B-v0.1.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Noromaid-20B-v0.1.1-GPTQ --local-dir Noromaid-20B-v0.1.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Noromaid-20B-v0.1.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Noromaid-20B-v0.1.1-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Noromaid-20B-v0.1.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Noromaid-20B-v0.1.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Noromaid-20B-v0.1.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: IkariDev and Undi's Noromaid 20B v0.1.1

---
# Disclaimer:
## This is a ***TEST*** version, don't expect everything to work!!!
You may use our custom **prompting format** (scroll down to download the config files!), or simple Alpaca. **(Choose which fits best for you!)**
---
# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Tired of the same merges every time? Here it is: the Noromaid-20b-v0.1.1 model. Suitable for RP, ERP and general use.
[Recommended settings - No settings yet (please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains fp16 files of Noromaid-20b-v0.1.1.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking whether we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on Discord and we'll put up a screenshot of it here. Discord names are "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Custom format:
UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
OLD SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Training data used:
- [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) lets the model behave in a more human way and enhances the output.
- [Aesir Private RP dataset] New data from a never-before-used dataset: fresh data, no LimaRP spam, 100% new. Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it!
## Others
Undi: If you want to support me, you can do so [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
y1m1ng/test_dome
|
y1m1ng
| 2023-11-23T09:54:32Z | 15 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T18:31:05Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10,000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
Yntec/ChildrenStoriesAnime
|
Yntec
| 2023-11-23T09:45:36Z | 147 | 7 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Zovya",
"Children Books",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-20T13:56:44Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Zovya
- Children Books
---
# Children's Stories Anime
This version of Zovya's model has the Waifu 1.4 VAE baked in for better saturation.
Original page:
https://civitai.com/models/64544?modelVersionId=69167
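A minimal usage sketch with 🧨 Diffusers is shown below, assuming the standard `StableDiffusionPipeline` API; the prompt is only an example.
```python
# Minimal sketch using diffusers' StableDiffusionPipeline; the prompt is an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/ChildrenStoriesAnime", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
image = pipe("a cheerful anime child reading a storybook under a tree").images[0]
image.save("children_story.png")
```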
|
florentgbelidji/leoLM-13b-sft-lora-oa
|
florentgbelidji
| 2023-11-23T09:44:51Z | 20 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:LeoLM/leo-hessianai-13b",
"base_model:quantized:LeoLM/leo-hessianai-13b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-11-22T19:29:30Z |
---
base_model: LeoLM/leo-hessianai-13b
tags:
- generated_from_trainer
model-index:
- name: leoLM-13b-sft-lora-oa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leoLM-13b-sft-lora-oa
This model is a fine-tuned version of [LeoLM/leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.14.1
|
alejoa/bert-sst2
|
alejoa
| 2023-11-23T09:43:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-23T00:44:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9139908256880734
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3616
- Accuracy: 0.9140
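A minimal usage sketch with the `transformers` pipeline API, assuming the standard text-classification head saved in this repository; the example sentence is arbitrary.
```python
# Minimal sketch using the transformers pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="alejoa/bert-sst2")
print(classifier("This movie was an absolute delight."))
# Output is a list of {'label': ..., 'score': ...} dicts; labels depend on the saved config.
```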
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2447 | 1.0 | 8419 | 0.3422 | 0.9117 |
| 0.1435 | 2.0 | 16838 | 0.3616 | 0.9140 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Shrek29/skin_llama
|
Shrek29
| 2023-11-23T09:43:02Z | 2 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-11-23T09:42:56Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
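Expressed with the `transformers` and `peft` APIs, the configuration above corresponds roughly to the sketch below. It is illustrative only; the base model id comes from the card metadata and the adapter id from this repository.
```python
# Illustrative sketch: 4-bit base model per the config above, plus this adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Shrek29/skin_llama")
```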
### Framework versions
- PEFT 0.6.3.dev0
|
Systran/faster-whisper-large-v3
|
Systran
| 2023-11-23T09:41:12Z | 708,585 | 325 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"yue",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T09:34:20Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
- yue
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v3 model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v3")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large-v3 --output_dir faster-whisper-large-v3 \
--copy_files tokenizer.json preprocessor_config.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
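For example, to override the stored FP16 type at load time (a hedged sketch; the device and compute type are placeholders):
```python
from faster_whisper import WhisperModel

# Override the stored FP16 weights with a different compute type at load time.
model = WhisperModel("large-v3", device="cuda", compute_type="int8_float16")
```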
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v3).**
|
kardosdrur/dfm-sentence-encoder-small-parallel-v1
|
kardosdrur
| 2023-11-23T09:37:33Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"electra",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-23T09:35:00Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kardosdrur/dfm-sentence-encoder-small-parallel-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kardosdrur/dfm-sentence-encoder-small-parallel-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kardosdrur/dfm-sentence-encoder-small-parallel-v1')
model = AutoModel.from_pretrained('kardosdrur/dfm-sentence-encoder-small-parallel-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kardosdrur/dfm-sentence-encoder-small-parallel-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 75876 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 200,
"evaluation_steps": 0,
"evaluator": "dfm_sentence_trf.evaluation.task_evaluator.TaskListEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 600,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
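For reference, below is a minimal, illustrative sketch of how a model could be trained with this loss and these `fit()` settings; the base encoder and the sentence pairs are stand-ins, not the actual data or base model used for this checkpoint:
```python
# Illustrative sketch only: placeholder base model and data, not the actual training run.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # stand-in base encoder

# MultipleNegativesRankingLoss expects positive pairs; other pairs in the batch act as negatives.
train_examples = [
    InputExample(texts=["Hvordan har du det?", "How are you?"]),
    InputExample(texts=["Jeg elsker kaffe.", "I love coffee."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,  # the reported run used 200 epochs of 600 steps
    warmup_steps=100,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
)
```
Because the other examples in each batch serve as in-batch negatives for this loss, a relatively large batch size (64 in the reported run) generally helps.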
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: ElectraModel
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Mousaicv/selfrag-lora
|
Mousaicv
| 2023-11-23T09:26:00Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:gpt4_reward_with_format",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-11-23T09:12:03Z |
---
base_model: mrs-7b
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- gpt4_reward_with_format
model-index:
- name: zephyr-7b-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-lora
This model is a fine-tuned version of zephyr-7b on the gpt4_reward dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1001 | 1.0 | 219 | 0.1006 |
| 0.0969 | 2.0 | 439 | 0.0930 |
| 0.0795 | 2.99 | 657 | 0.0911 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
DContrerasF/poca-SoccerTwos-alt
|
DContrerasF
| 2023-11-23T09:17:23Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-11-17T19:24:39Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DContrerasF/poca-SoccerTwos-alt
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
frankminors123/Chinese-CodeLlama-7B-PT
|
frankminors123
| 2023-11-23T09:13:41Z | 3 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-26T06:46:42Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- code
---
# Chinese-CodeLlama-7B-PT
We have further expanded the vocabulary of the base Chinese-LLaMA-2-7B model from 55296 to 75548 tokens; it is worth noting that most of the added tokens are code tokens. On [MBPP](https://huggingface.co/datasets/mbpp), we measured the compression rate of the tokenizer to be 4.509 `bytes/token`, and we will reduce this value in future work to improve training and inference efficiency.
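As a rough illustration (not the script we actually used), a `bytes/token` figure like this can be computed by encoding a corpus and dividing the total UTF-8 bytes by the total token count; the tokenizer path and the MBPP field names below are assumptions:
```python
# Rough sketch of a bytes/token measurement; tokenizer path and dataset fields are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("frankminors123/Chinese-CodeLlama-7B-PT")
dataset = load_dataset("mbpp", split="test")

total_bytes, total_tokens = 0, 0
for sample in dataset:
    text = sample["text"] + "\n" + sample["code"]  # assumed MBPP fields
    total_bytes += len(text.encode("utf-8"))
    total_tokens += len(tokenizer.encode(text, add_special_tokens=False))

print(f"compression rate: {total_bytes / total_tokens:.3f} bytes/token")
```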
We pre-trained the model with LoRA, using a rank of 8 and trainable LoRA layers on `q_proj` and `v_proj`; at the same time, the `embed_tokens` and `lm_head` layers were trained with full parameters. All trainable parameters are float32.
The training data contains approximately 400 million tokens drawn from high-quality code datasets on HuggingFace.
In addition, we applied `memory_efficient_attention` to the pre-training, which saves a lot of GPU memory. If you want to quickly use this technique in your LLaMA model, you can refer to my GitHub: https://github.com/FrankMinions/memory_efficient_adapter.
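For illustration only, the snippet below is a generic memory-efficient attention call via `xformers`; it is not the adapter code from the repository above, and the tensor shapes are hypothetical (it assumes a CUDA GPU with `xformers` installed):
```python
# Generic illustration of memory-efficient attention with xformers,
# not the adapter implementation from the linked repository.
import torch
import xformers.ops as xops

# Hypothetical shapes: (batch, seq_len, num_heads, head_dim)
q = torch.randn(1, 1024, 32, 128, device="cuda", dtype=torch.float16)
k = torch.randn(1, 1024, 32, 128, device="cuda", dtype=torch.float16)
v = torch.randn(1, 1024, 32, 128, device="cuda", dtype=torch.float16)

# Computes attention without materializing the full seq_len x seq_len score matrix.
out = xops.memory_efficient_attention(q, k, v)
```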
Our model can be used for SFT, and we hope to contribute more valuable work in the Chinese field.
The second version of our fine-tuned model, [Chinese-CodeLlama-7B-SFT-V2](https://huggingface.co/frankminors123/Chinese-CodeLlama-7B-SFT-V2), has been launched. We use a sequence length of 1k for pre-training (this model), and continue training based on this length during the fine-tuning stage. Based on a larger base period of rotary positional embeddings, it can support up to 15k context length extrapolation at inference time.
|
tuanio/w2v2_ablation_with_ling_head-drop0.1-not-load-best-wer-best_on_tp0.025_tl10_fp0.001_fl16
|
tuanio
| 2023-11-23T09:09:29Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-23T08:07:13Z |
---
license: cc-by-nc-4.0
base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v2_ablation_with_ling_head-drop0.1-not-load-best-wer-best_on_tp0.025_tl10_fp0.001_fl16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2_ablation_with_ling_head-drop0.1-not-load-best-wer-best_on_tp0.025_tl10_fp0.001_fl16
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4141
- Wer: 0.0914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 119.415 | 0.94 | 100 | 91.5112 | 18.6364 |
| 74.7916 | 1.89 | 200 | 12.2928 | 0.9951 |
| 6.9068 | 2.83 | 300 | 5.2345 | 1.0 |
| 5.1207 | 3.77 | 400 | 5.0365 | 1.0 |
| 4.7306 | 4.72 | 500 | 4.9152 | 1.0 |
| 4.4974 | 5.66 | 600 | 4.9315 | 1.0 |
| 4.3923 | 6.6 | 700 | 4.7918 | 1.0 |
| 4.3447 | 7.55 | 800 | 4.6447 | 1.0 |
| 4.225 | 8.49 | 900 | 4.6061 | 1.0 |
| 3.9805 | 9.43 | 1000 | 3.6422 | 0.8733 |
| 2.8303 | 10.38 | 1100 | 1.7824 | 0.3489 |
| 1.5807 | 11.32 | 1200 | 1.0908 | 0.2162 |
| 1.1284 | 12.26 | 1300 | 0.8473 | 0.1640 |
| 0.8703 | 13.21 | 1400 | 0.7322 | 0.1423 |
| 0.7576 | 14.15 | 1500 | 0.6551 | 0.1325 |
| 0.6256 | 15.09 | 1600 | 0.6027 | 0.1387 |
| 0.594 | 16.04 | 1700 | 0.5550 | 0.1300 |
| 0.5492 | 16.98 | 1800 | 0.5200 | 0.1159 |
| 0.476 | 17.92 | 1900 | 0.5012 | 0.1091 |
| 0.4822 | 18.87 | 2000 | 0.5112 | 0.1074 |
| 0.4351 | 19.81 | 2100 | 0.4985 | 0.1179 |
| 0.4169 | 20.75 | 2200 | 0.4712 | 0.1061 |
| 0.3957 | 21.7 | 2300 | 0.4613 | 0.0988 |
| 0.3885 | 22.64 | 2400 | 0.4610 | 0.1025 |
| 0.3827 | 23.58 | 2500 | 0.4509 | 0.0978 |
| 0.3468 | 24.53 | 2600 | 0.4549 | 0.0951 |
| 0.3451 | 25.47 | 2700 | 0.4556 | 0.1019 |
| 0.3234 | 26.42 | 2800 | 0.4554 | 0.1104 |
| 0.31 | 27.36 | 2900 | 0.4568 | 0.0988 |
| 0.3026 | 28.3 | 3000 | 0.4211 | 0.0965 |
| 0.2905 | 29.25 | 3100 | 0.4305 | 0.0911 |
| 0.2964 | 30.19 | 3200 | 0.4379 | 0.0990 |
| 0.302 | 31.13 | 3300 | 0.4379 | 0.0943 |
| 0.2576 | 32.08 | 3400 | 0.4293 | 0.0933 |
| 0.2771 | 33.02 | 3500 | 0.4239 | 0.0928 |
| 0.268 | 33.96 | 3600 | 0.4228 | 0.0894 |
| 0.2458 | 34.91 | 3700 | 0.4288 | 0.0899 |
| 0.2553 | 35.85 | 3800 | 0.4312 | 0.0966 |
| 0.2424 | 36.79 | 3900 | 0.4162 | 0.0917 |
| 0.2501 | 37.74 | 4000 | 0.4088 | 0.0840 |
| 0.2498 | 38.68 | 4100 | 0.4144 | 0.0921 |
| 0.2273 | 39.62 | 4200 | 0.4154 | 0.0863 |
| 0.23 | 40.57 | 4300 | 0.4157 | 0.0868 |
| 0.2409 | 41.51 | 4400 | 0.4033 | 0.0826 |
| 0.248 | 42.45 | 4500 | 0.4122 | 0.0847 |
| 0.218 | 43.4 | 4600 | 0.4052 | 0.0848 |
| 0.1979 | 44.34 | 4700 | 0.4063 | 0.0887 |
| 0.2091 | 45.28 | 4800 | 0.4078 | 0.0823 |
| 0.2097 | 46.23 | 4900 | 0.4177 | 0.0893 |
| 0.2017 | 47.17 | 5000 | 0.4295 | 0.0887 |
| 0.1899 | 48.11 | 5100 | 0.4177 | 0.0919 |
| 0.195 | 49.06 | 5200 | 0.4109 | 0.0880 |
| 0.179 | 50.0 | 5300 | 0.4089 | 0.0879 |
| 0.1773 | 50.94 | 5400 | 0.4071 | 0.0843 |
| 0.1889 | 51.89 | 5500 | 0.4072 | 0.0885 |
| 0.1987 | 52.83 | 5600 | 0.4033 | 0.0873 |
| 0.1979 | 53.77 | 5700 | 0.4033 | 0.0928 |
| 0.1777 | 54.72 | 5800 | 0.4077 | 0.0898 |
| 0.1742 | 55.66 | 5900 | 0.3969 | 0.0838 |
| 0.1678 | 56.6 | 6000 | 0.3997 | 0.0806 |
| 0.1726 | 57.55 | 6100 | 0.3978 | 0.0885 |
| 0.1602 | 58.49 | 6200 | 0.3967 | 0.0860 |
| 0.1681 | 59.43 | 6300 | 0.4039 | 0.0901 |
| 0.1594 | 60.38 | 6400 | 0.3992 | 0.0856 |
| 0.171 | 61.32 | 6500 | 0.4058 | 0.0890 |
| 0.1691 | 62.26 | 6600 | 0.4078 | 0.0842 |
| 0.1724 | 63.21 | 6700 | 0.4161 | 0.0903 |
| 0.172 | 64.15 | 6800 | 0.4121 | 0.0899 |
| 0.1717 | 65.09 | 6900 | 0.4111 | 0.0878 |
| 0.1775 | 66.04 | 7000 | 0.4109 | 0.0926 |
| 0.1607 | 66.98 | 7100 | 0.4080 | 0.0908 |
| 0.1606 | 67.92 | 7200 | 0.4070 | 0.0930 |
| 0.1801 | 68.87 | 7300 | 0.4096 | 0.0908 |
| 0.16 | 69.81 | 7400 | 0.4030 | 0.0933 |
| 0.1433 | 70.75 | 7500 | 0.4059 | 0.0920 |
| 0.1473 | 71.7 | 7600 | 0.4120 | 0.0979 |
| 0.1396 | 72.64 | 7700 | 0.4062 | 0.0922 |
| 0.1429 | 73.58 | 7800 | 0.4079 | 0.0899 |
| 0.1332 | 74.53 | 7900 | 0.4055 | 0.0851 |
| 0.1429 | 75.47 | 8000 | 0.4081 | 0.0922 |
| 0.1528 | 76.42 | 8100 | 0.4083 | 0.0853 |
| 0.1547 | 77.36 | 8200 | 0.4139 | 0.0945 |
| 0.1384 | 78.3 | 8300 | 0.4111 | 0.0933 |
| 0.1696 | 79.25 | 8400 | 0.4132 | 0.0943 |
| 0.1483 | 80.19 | 8500 | 0.4139 | 0.0906 |
| 0.1547 | 81.13 | 8600 | 0.4156 | 0.0959 |
| 0.149 | 82.08 | 8700 | 0.4119 | 0.0905 |
| 0.1294 | 83.02 | 8800 | 0.4145 | 0.0945 |
| 0.1383 | 83.96 | 8900 | 0.4151 | 0.0917 |
| 0.1356 | 84.91 | 9000 | 0.4165 | 0.0952 |
| 0.1491 | 85.85 | 9100 | 0.4188 | 0.0950 |
| 0.1395 | 86.79 | 9200 | 0.4174 | 0.0950 |
| 0.1439 | 87.74 | 9300 | 0.4151 | 0.0919 |
| 0.1421 | 88.68 | 9400 | 0.4152 | 0.0931 |
| 0.1443 | 89.62 | 9500 | 0.4160 | 0.0944 |
| 0.1429 | 90.57 | 9600 | 0.4138 | 0.0928 |
| 0.1397 | 91.51 | 9700 | 0.4149 | 0.0918 |
| 0.155 | 92.45 | 9800 | 0.4144 | 0.0915 |
| 0.1406 | 93.4 | 9900 | 0.4139 | 0.0921 |
| 0.1328 | 94.34 | 10000 | 0.4140 | 0.0929 |
| 0.1461 | 95.28 | 10100 | 0.4142 | 0.0914 |
| 0.1455 | 96.23 | 10200 | 0.4142 | 0.0913 |
| 0.155 | 97.17 | 10300 | 0.4139 | 0.0914 |
| 0.147 | 98.11 | 10400 | 0.4140 | 0.0918 |
| 0.1298 | 99.06 | 10500 | 0.4140 | 0.0917 |
| 0.1508 | 100.0 | 10600 | 0.4141 | 0.0914 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
|
joedonino/zephyr-7b-radia-html-events-v6
|
joedonino
| 2023-11-23T09:04:55Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2023-11-23T09:04:34Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.3.dev0
|
SaGuenter/whisper-large-v2-NSC_Korpora_8-100steps
|
SaGuenter
| 2023-11-23T09:04:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | 2023-11-23T09:04:25Z |
---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
SamJu3/sd-danielle-model-lora-ssd
|
SamJu3
| 2023-11-23T08:59:34Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:segmind/SSD-1B",
"base_model:adapter:segmind/SSD-1B",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-23T04:57:41Z |
---
license: creativeml-openrail-m
base_model: segmind/SSD-1B
dataset: /home/cora3/vscode_project/SweetBrothers/kohya_ss/images/train/oca
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - SamJu3/sd-danielle-model-lora-ssd
These are LoRA adaptation weights for segmind/SSD-1B. The weights were fine-tuned on the /home/cora3/vscode_project/SweetBrothers/kohya_ss/images/train/oca dataset. You can find some example images below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
zrjin/icefall-asr-zipformer-multi-zh-en-2023-11-22
|
zrjin
| 2023-11-23T08:57:52Z | 0 | 0 | null |
[
"tensorboard",
"onnx",
"region:us"
] | null | 2023-11-22T08:52:54Z |
See https://github.com/k2-fsa/icefall/pull/1265
|
Xwin-LM/Xwin-Math-13B-V1.0
|
Xwin-LM
| 2023-11-23T08:56:32Z | 17 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-20T07:36:40Z |
---
license: llama2
---
# Xwin-Math
<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Math"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a>
<a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>
Xwin-Math is a series of powerful SFT LLMs for math problems, based on LLaMA-2.
## 🔥 News
- 💥 [Nov, 2023] The [Xwin-Math-70B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0) model achieves **31.8 pass@1 on the MATH benchmark** and **87.0 pass@1 on the GSM8K benchmark**. This performance places it first amongst all open-source models!
- 💥 [Nov, 2023] The [Xwin-Math-7B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0) and [Xwin-Math-13B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0) models achieve **66.6 and 76.2 pass@1 on the GSM8K benchmark**, ranking as top-1 among all LLaMA-2 based 7B and 13B open-source models, respectively!
## ✨ Model Card
| Model | GSM8K | MATH | Checkpoint | License |
|:-:|:-:|:-:|:-:|:-:|
|Xwin-Math-7B-V1.0 | 66.6 | 17.4 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
|Xwin-Math-13B-V1.0| 76.2 | 21.7 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
|Xwin-Math-70B-V1.0| 87.0 | 31.8 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
## 🚀 Benchmarks
### Xwin-Math performance on [MATH](https://github.com/hendrycks/math) and [GSM8K](https://github.com/openai/grade-school-math).
Xwin-Math-70B-V1.0 has achieved **31.8% on MATH** and **87.0% on GSM8K**. These scores are **5.3** and **3.1** points higher, respectively, than the previous state-of-the-art open-source MetaMath and LEMAv1 models.
| **Model** |**MATH (Our test)** | **GSM8K (Our test)** |
|:-:|:-:|:-:|
| GPT-4 (zero-shot) | 52.4 | 94.8 |
| GPT-35-Turbo (8-shot)| 37.1 | 81.0 |
| |
| WizardMath-70B | 23.9 | 81.1 |
| MAmmoTH-70B | 20.8 | 72.6 |
| MetaMath-70B | 26.5 | 82.0 |
| LEMAv1-70B | 25.9 | 83.9 |
|**Xwin-Math-70B-V1.0** |**31.8**|**87.0**|
| |
| WizardMath-13B | 15.0 | 63.7 |
| MAmmoTH-13B | 12.3 | 56.2 |
| MetaMath-13B | 22.7 | 70.9 |
| LEMAv1-13B | 13.6 | 65.0 |
|**Xwin-Math-13B-V1.0** | 21.7 | 76.2 |
| |
| WizardMath-7B | 10.9 | 55.0 |
| MAmmoTH-7B | 9.6 | 50.2 |
| MetaMath-7B | 20.1 | 66.6 |
| LEMAv1-7B | 10.0 | 54.7 |
|**Xwin-Math-7B-V1.0** | 17.4 | 66.6 |
We obtain these results using our flexible evaluation strategy. Due to differences in environment and hardware, the numbers may be different from the reported results, but we ensure that the evaluation is as accurate and fair as possible.
### Xwin-Math performance on other math benchmarks.
Our 70B model shows strong mathematical synthesis capabilities among all open-source models. Also note that our model even approaches or surpasses the performance of GPT-35-Turbo on some benchmarks.
| **Model** | SVAMP | ASDiv | NumGlue | Algebra | MAWPS | **Average** |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| GPT-35-Turbo (8-shot)| 80.6 | 84.1 | 81.8 | 90.5 | 91.7 | 85.7 |
| |
| WizardMath-70B | 80.2 | 75.8 | 71.4 | 64.0 | 74.9 | 73.3 |
| MAmmoTH-70B | 71.2 | 73.9 | 62.7 | 58.1 | 72.2 | 67.6 |
| MetaMath-70B | 85.8 | 81.1 | 77.5 | 79.7 | 81.4 | 81.1 |
| LEMAv1-70B-MATH * | 81.6 | 77.1 | 72.1 | 69.4 | 81.8 | 76.5 |
|**Xwin-Math-70B-V1.0** | 84.0 | 84.1 | 81.3 | 78.4 | 90.8 | 83.7 |
\* LEMAv1 has two models, and we report the better LEMAv1-70B-MATH model in these benchmarks.
## 🔨 Evaluation
In order to evaluate a model's mathematical capabilities more flexibly and ensure a fair comparison of results, particularly for the MATH benchmark, we have developed a new evaluation tool. We have also assessed the pass@1 results of recent models on the MATH and GSM8K benchmarks, which provides more accurate results.
We hope this toolkit can benefit the open-source community by providing more accurate insights and conclusions. For a deeper understanding of our evaluation tool and methods, please visit [here](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Math/eval).
* "Report" refers to the accuracy stated in the original papers.
* "Repro" indicates the results is reproduced by generating responses and evaluating them using the respective open-source models and scripts.
* "Strict" and "Flex" denote the results we achieved by employing our two strategies to extract answer and evaluate the same responses as "Repro".
| Model | MATH <br> (Report) <br/> |MATH <br> (Repro) <br/> | MATH <br> (Strict) <br/> |MATH <br> (Flex) <br/> | GSM8K <br> (Report) <br/> |GSM8K <br> (Repro) <br/>| GSM8K <br> (Strict) <br/> | GSM8K <br> (Flex) <br/> |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| GPT-35-Turbo (8-shot)| 34.1 | - | 23.8 | 37.1 | 80.8 | - | 77.9 | 81.0 |
| |
| WizardMath-70B | 22.7 | 23.0 | 23.9 | 23.9 | 81.6 | 81.4 | 81.1 | 81.1 |
| MAmmoTH-70B | 21.1 | 18.0 | 20.0 | 20.8 | 72.4 | 72.6 | 72.6 | 72.6 |
| MetaMath-70B | 26.6 | 25.9 | 26.3 | 26.5 | 82.3 | 82.3 | 82.0 | 82.0 |
|**Xwin-Math-70B-V1.0** | - | - |**31.8**|**31.8**| - | - |**87.0**|**87.0**|
| |
| WizardMath-13B | 14.0 | 14.2 | 14.9 | 15.0 | 63.9 | 63.9 | 63.7 | 63.7 |
| MAmmoTH-13B | 12.9 | 10.8 | 11.8 | 12.3 | 56.3 | 56.2 | 56.1 | 56.2 |
| MetaMath-13B | 22.4 | 22.5 | 22.6 | 22.7 | 72.3 | 71.0 | 70.9 | 70.9 |
|**Xwin-Math-13B-V1.0** | - | - | 21.6 | 21.7 | - | - | 76.2 | 76.2 |
| |
| WizardMath-7B | 10.7 | 10.3 | 10.9 | 10.9 | 54.9 | 55.2 | 55.0 | 55.0 |
| MAmmoTH-7B | 10.4 | 8.6 | 9.1 | 9.6 | 50.5 | 50.2 | 50.2 | 50.2 |
| MetaMath-7B | 19.8 | 19.6 | 19.9 | 20.1 | 66.5 | 66.6 | 66.6 | 66.6 |
|**Xwin-Math-7B-V1.0** | - | - | 17.3 | 17.4 | - | - | 66.6 | 66.6 |
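As a rough illustration of the answer-extraction step (the regexes below are assumptions for demonstration, not the actual toolkit code), a strict strategy could accept only the exact `The answer is: ...` format requested in the prompt, while a flexible strategy could also fall back to the last number in the response:
```python
# Rough illustration of "strict" vs. "flex" answer extraction; these regexes are
# assumptions for demonstration, not the evaluation toolkit's implementation.
import re

def extract_strict(response: str):
    # Accept only the exact format requested in the prompt.
    match = re.search(r"The answer is:\s*(.+?)\.\s*$", response.strip())
    return match.group(1).strip() if match else None

def extract_flex(response: str):
    # Fall back to the last number in the response if the requested format is missing.
    strict = extract_strict(response)
    if strict is not None:
        return strict
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response)
    return numbers[-1] if numbers else None

print(extract_strict("She makes 18 dollars a day. The answer is: 18."))  # -> 18
print(extract_flex("So she makes 18 dollars a day at the market."))      # -> 18
```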
### Installation
Before you start, please install the requirements.
```bash
pip install -r requirements.txt
```
We tested our results using `python 3.8` and `cuda 11.8`. We recommend you use Docker.
```bash
docker run --gpus all -it --rm --ipc=host superbench/dev:cuda11.8
```
### Generate
To generate the model's responses, you can use the `generate.py` script. Please be aware that generating responses is separate from verifying their correctness; the correctness check is performed afterwards.
For the generation process, we use the Vicuna-v1.1 system prompt with chain-of-thought and format instruction. We also employ a greedy decoding strategy and set the maximum sequence length to 2048.
```
"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {instruction} Give your solution in detail. In the end, write your final answer in the format of 'The answer is: <ANSWER>.'. ASSISTANT: "
```
Here is a simple example of generating responses using [vLLM](https://docs.vllm.ai/en/latest/).
```bash
cd eval
python generate.py --dataset_path dataset/gsm8k.json --model_path path/to/your/model --tensor_parallel_size 4
```
By default, the results will be written to `eval/response`, using the prompt `eval/prompt/xwin_math.json`. If you wish to change the output path or use a different prompt:
```bash
python generate.py --dataset_path dataset/gsm8k.json --model_path path/to/your/model --tensor_parallel_size 4 --output_path /your/path --prompt_path /your/path
```
We provide some datasets (in `eval/dataset`):
- `gsm8k.json`: GSM8K.
- `math.json`: MATH.
- `combination.json`: A combination of many benchmarks that can be used to evaluate the OOD capability of the model.
If you want to use your own datasets, please format them like this.
```jsonc
[
{
"question": "Janet\u2019s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?",
"answer": "18",
"type": "GSM8K",
"subtype": "",
"level": 0,
},
// ... more data items
]
```
### Evaluate
To verify the accuracy of the answers after generation, you can use the `check.py` script.
Here is a simple example:
```bash
cd eval
python eval.py /path/to/model/response
```
The results will be saved in `eval/evaluation`.
If you do not want to save the results or want to change the save path:
```bash
python eval.py --data_path /path/to/model/response --save_path /path/to/save --save_result True
```
Once you run the script, the terminal will display the output as a table. This table will show the number of instances for each benchmark and the corresponding accuracy. Here is a hypothetical example of what the output might look like:
||Type|Subtype|Level|Correct|Incorrect|Total|Accuracy|
|---|---|---|---|---|---|---|---|
|0|MAWPS|addsub|0|359|33|392|0.915816|
|1|MAWPS|multiarith|0|586|14|600|0.976667|
|...|
## Citation
Please consider citing our work if you use the data or code in this repo.
```
@software{xwin-math,
title = {Xwin-Math},
author = {Xwin-Math Team},
url = {https://github.com/Xwin-LM/Xwin-LM/Xwin-Math},
version = {pre-release},
year = {2023},
month = {11},
}
```
## Acknowledgements
Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), and [vLLM](https://github.com/vllm-project/vllm).
|
waldie/Karen_TheEditor_V2_CREATIVE_Mistral_7B-8bpw-h8-exl2
|
waldie
| 2023-11-23T08:52:46Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"llm",
"llama",
"spellcheck",
"grammar",
"conversational",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-23T08:32:46Z |
---
tags:
- llm
- llama
- spellcheck
- grammar
license: llama2
---
quant of [FPHam's](https://huggingface.co/FPHam) [Karen_TheEditor_V2_CREATIVE_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B)
wikitext was used as the calibration dataset.
|
joddiy/my_awesome_eli5_clm-model
|
joddiy
| 2023-11-23T08:48:46Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-23T08:25:31Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
datasets:
- eli5
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.865 | 1.0 | 1119 | 3.7927 |
| 3.7708 | 2.0 | 2238 | 3.7772 |
| 3.7176 | 3.0 | 3357 | 3.7763 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
PriyankSisodia/bloom_7b_23nov
|
PriyankSisodia
| 2023-11-23T08:48:04Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:bigscience/bloom-7b1",
"base_model:adapter:bigscience/bloom-7b1",
"region:us"
] | null | 2023-11-23T08:48:00Z |
---
library_name: peft
base_model: bigscience/bloom-7b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
tsubakiky/distilbert-base-uncased-finetuned-clinc
|
tsubakiky
| 2023-11-23T08:47:31Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-23T07:47:24Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9164516129032259
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7725
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2763 | 0.7284 |
| 3.7825 | 2.0 | 636 | 1.8625 | 0.8365 |
| 3.7825 | 3.0 | 954 | 1.1513 | 0.8984 |
| 1.6859 | 4.0 | 1272 | 0.8540 | 0.9135 |
| 0.8984 | 5.0 | 1590 | 0.7725 | 0.9165 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
lukekim420/rules-12.8b
|
lukekim420
| 2023-11-23T08:43:20Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"base_model:adapter:beomi/polyglot-ko-12.8b-safetensors",
"region:us"
] | null | 2023-11-23T08:43:19Z |
---
library_name: peft
base_model: beomi/polyglot-ko-12.8b-safetensors
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.3.dev0
|
MayIBorn/cola-deberta_initialize_dW_A_with_svd_from_back
|
MayIBorn
| 2023-11-23T08:26:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/deberta-v2-xxlarge",
"base_model:adapter:microsoft/deberta-v2-xxlarge",
"region:us"
] | null | 2023-11-23T08:26:13Z |
---
library_name: peft
base_model: microsoft/deberta-v2-xxlarge
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.7.0.dev0
|
huatougui/my_awesome_model
|
huatougui
| 2023-11-23T08:25:17Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-23T07:26:11Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93204
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2539
- Accuracy: 0.9320
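A minimal inference sketch (a hypothetical example, assuming the standard `transformers` pipeline API and this repo id):
```python
from transformers import pipeline

# Hypothetical usage: sentiment classification of an IMDB-style review
classifier = pipeline("text-classification", model="huatougui/my_awesome_model")
print(classifier("This was a masterpiece, enthralling from start to finish."))
```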
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2245 | 1.0 | 1563 | 0.2077 | 0.9247 |
| 0.1215 | 2.0 | 3126 | 0.2539 | 0.9320 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
guyshalev/positions
|
guyshalev
| 2023-11-23T08:22:37Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-23T07:59:59Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: positions
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.41111111640930176
---
# positions
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### kneeling

#### surfer leaning

#### surfer squatting

#### surfer standing

|
MichaelJH/Ryu-AI_by12.8b-400step
|
MichaelJH
| 2023-11-23T08:13:17Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"base_model:adapter:beomi/polyglot-ko-12.8b-safetensors",
"region:us"
] | null | 2023-11-23T08:13:15Z |
---
library_name: peft
base_model: beomi/polyglot-ko-12.8b-safetensors
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
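A minimal sketch of constructing the config listed above with the standard `transformers` API (an illustration, not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the reported values: 4-bit NF4 with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```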
### Framework versions
- PEFT 0.6.3.dev0
|
StevenPerrin/ppo-Pyramids
|
StevenPerrin
| 2023-11-23T08:11:32Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-11-23T08:07:03Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: StevenPerrin/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
enaitzb/ppo-SnowballTarget
|
enaitzb
| 2023-11-23T08:08:51Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-11-23T08:08:48Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: enaitzb/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Oufei123/second_try_v3
|
Oufei123
| 2023-11-23T08:07:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"region:us"
] | null | 2023-11-23T08:07:15Z |
---
library_name: peft
base_model: openai/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
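A minimal sketch of the 8-bit config listed above, assuming the standard `transformers` `BitsAndBytesConfig` API (illustrative only):
```python
from transformers import BitsAndBytesConfig

# Mirrors the reported values: 8-bit loading with the default int8 outlier threshold
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)
```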
### Framework versions
- PEFT 0.6.3.dev0
|
IHHI/distilhubert-finetuned-gtzan
|
IHHI
| 2023-11-23T08:01:50Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-11-23T06:09:49Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5608
- Accuracy: 0.84
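A minimal inference sketch (a hypothetical example using the `transformers` audio-classification pipeline; the file name is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage: predict the music genre of a local audio clip
classifier = pipeline("audio-classification", model="IHHI/distilhubert-finetuned-gtzan")
print(classifier("song.wav"))
```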
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0156 | 1.0 | 113 | 1.8462 | 0.54 |
| 1.3084 | 2.0 | 226 | 1.2329 | 0.65 |
| 0.9907 | 3.0 | 339 | 0.9459 | 0.75 |
| 0.8074 | 4.0 | 452 | 0.8194 | 0.72 |
| 0.6345 | 5.0 | 565 | 0.6655 | 0.8 |
| 0.3738 | 6.0 | 678 | 0.5998 | 0.81 |
| 0.4494 | 7.0 | 791 | 0.5768 | 0.85 |
| 0.1849 | 8.0 | 904 | 0.5636 | 0.85 |
| 0.2193 | 9.0 | 1017 | 0.5421 | 0.86 |
| 0.1552 | 10.0 | 1130 | 0.5608 | 0.84 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/airoboros-m-7b-3.1.2-dare-0.85-6.0bpw-h6-exl2
|
LoneStriker
| 2023-11-23T08:01:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-23T07:58:15Z |
---
license: apache-2.0
---
Experiment for DARE (Drop And REscale): most of the delta parameters can be set directly to zero without affecting the capabilities of SFT LMs, and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_strategy: random / scaling_coefficient: 1.0
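A minimal sketch of the drop-and-rescale idea on a single delta tensor (illustrative PyTorch, not the exact merging script used for this checkpoint):
```python
import torch

def dare_delta(delta: torch.Tensor, weight_mask_rate: float = 0.85, rescale: bool = True) -> torch.Tensor:
    """Randomly drop a fraction of the delta (fine-tuned minus base) weights and rescale the survivors."""
    keep = torch.rand_like(delta) >= weight_mask_rate      # keep roughly 15% of the entries
    pruned = delta * keep
    if rescale:
        pruned = pruned / (1.0 - weight_mask_rate)         # rescale so the expected value is preserved
    return pruned

# Merged weight = base + scaling_coefficient * DARE-processed delta (scaling_coefficient = 1.0 here)
```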
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
dbailleul/ppo-SnowballTarget
|
dbailleul
| 2023-11-23T07:58:09Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-11-23T07:58:06Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dbailleul/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mirodil/whisper-tiny-en-us-minds14
|
mirodil
| 2023-11-23T07:53:55Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T04:59:06Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
model-index:
- name: whisper-tiny-en-us-minds14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-us-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3247
- eval_wer_ortho: 36.7057
- eval_wer: 36.6588
- eval_runtime: 20.5021
- eval_samples_per_second: 5.512
- eval_steps_per_second: 0.39
- epoch: 107.14
- step: 3000
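A minimal inference sketch (a hypothetical example using the `transformers` ASR pipeline; the file name is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage: transcribe an English audio file with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="mirodil/whisper-tiny-en-us-minds14")
print(asr("sample.wav")["text"])
```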
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
kingabzpro/sdxl-lora-abid
|
kingabzpro
| 2023-11-23T07:50:29Z | 10 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-11-21T16:46:41Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of Abid Ali Awan wearing casual clothes, taking a selfie, and smiling.
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
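A minimal inference sketch (assuming the repo stores standard `diffusers`-format LoRA weights; illustrative only):
```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical usage: attach the DreamBooth LoRA to the SDXL base model and generate an image
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kingabzpro/sdxl-lora-abid")
prompt = "A photo of Abid Ali Awan wearing casual clothes, taking a selfie, and smiling."
image = pipe(prompt).images[0]
image.save("selfie.png")
```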
|
LoneStriker/airoboros-m-7b-3.1.2-dare-0.85-4.0bpw-h6-exl2
|
LoneStriker
| 2023-11-23T07:49:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-23T07:47:20Z |
---
license: apache-2.0
---
Experiment for DARE (Drop And REscale): most of the delta parameters can be set directly to zero without affecting the capabilities of SFT LMs, and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_strategy: random / scaling_coefficient: 1.0
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
Yangtze-flowing/my_awesome_opus_books_model_2
|
Yangtze-flowing
| 2023-11-23T07:48:09Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-14T08:03:31Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: my_awesome_opus_books_model_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model_2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
guyshalev/sports
|
guyshalev
| 2023-11-23T07:47:45Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-23T07:47:36Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: sports
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9552238583564758
---
# sports
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
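A minimal inference sketch (a hypothetical example using the `transformers` image-classification pipeline; the file name is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage: classify a photo as football, gymnastics, or surfing
classifier = pipeline("image-classification", model="guyshalev/sports")
print(classifier("match.jpg"))
```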
## Example Images
#### football

#### gymnastics

#### surfing

|
LoneStriker/SynthIA-7B-v1.3-dare-0.85-8.0bpw-h8-exl2
|
LoneStriker
| 2023-11-23T07:45:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-23T07:40:38Z |
---
license: llama2
---
Experiment for DARE (Drop And REscale): most of the delta parameters can be set directly to zero without affecting the capabilities of SFT LMs, and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_strategy: random / scaling_coefficient: 1.0
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
laylabitar/fine_tuned_llama_impression
|
laylabitar
| 2023-11-23T07:40:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-11-23T07:25:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
1Pampu/dvmil2
|
1Pampu
| 2023-11-23T07:38:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-23T07:33:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dvMil2 Dreambooth model trained by 1Pampu with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
tianlinliu0121/zephyr-7b-dpo-full-beta-0.2
|
tianlinliu0121
| 2023-11-23T07:33:14Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T18:43:04Z |
---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-beta-0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full-beta-0.2
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7903
- Rewards/chosen: -3.2220
- Rewards/rejected: -7.3367
- Rewards/accuracies: 0.7659
- Rewards/margins: 4.1147
- Logps/rejected: -282.6258
- Logps/chosen: -314.5996
- Logits/rejected: -2.6943
- Logits/chosen: -2.6970
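A minimal chat sketch (a hypothetical example assuming the tokenizer ships a zephyr-style chat template):
```python
import torch
from transformers import pipeline

# Hypothetical usage: single-turn chat with the DPO-tuned checkpoint
chat = pipeline(
    "text-generation",
    model="tianlinliu0121/zephyr-7b-dpo-full-beta-0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
prompt = chat.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(chat(prompt, max_new_tokens=128)[0]["generated_text"])
```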
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5631 | 0.26 | 500 | 0.5260 | 0.0288 | -1.2082 | 0.75 | 1.2371 | -251.9833 | -298.3453 | -2.9467 | -2.9577 |
| 0.5432 | 0.52 | 1000 | 0.5888 | -0.0335 | -1.8482 | 0.7540 | 1.8147 | -255.1831 | -298.6568 | -2.8465 | -2.8476 |
| 0.5368 | 0.77 | 1500 | 0.5860 | -0.4836 | -2.3300 | 0.7619 | 1.8464 | -257.5920 | -300.9073 | -2.8455 | -2.8445 |
| 0.0615 | 1.03 | 2000 | 0.6024 | -0.5971 | -2.6919 | 0.7778 | 2.0948 | -259.4018 | -301.4749 | -2.8687 | -2.8639 |
| 0.0817 | 1.29 | 2500 | 0.6655 | -1.3554 | -3.8426 | 0.7738 | 2.4872 | -265.1552 | -305.2667 | -2.8257 | -2.8254 |
| 0.0617 | 1.55 | 3000 | 0.6421 | -1.2552 | -3.7613 | 0.75 | 2.5062 | -264.7488 | -304.7651 | -2.7744 | -2.7683 |
| 0.0765 | 1.81 | 3500 | 0.6582 | -1.1492 | -4.0394 | 0.7659 | 2.8902 | -266.1391 | -304.2354 | -2.7403 | -2.7389 |
| 0.0178 | 2.07 | 4000 | 0.6797 | -1.8485 | -5.2549 | 0.7619 | 3.4064 | -272.2166 | -307.7317 | -2.7310 | -2.7273 |
| 0.0165 | 2.32 | 4500 | 0.7359 | -2.2096 | -6.0498 | 0.7817 | 3.8401 | -276.1910 | -309.5376 | -2.7006 | -2.7001 |
| 0.0094 | 2.58 | 5000 | 0.7864 | -2.8828 | -6.8542 | 0.7738 | 3.9713 | -280.2130 | -312.9036 | -2.7185 | -2.7196 |
| 0.0094 | 2.84 | 5500 | 0.7953 | -3.1897 | -7.3009 | 0.7579 | 4.1112 | -282.4464 | -314.4378 | -2.6987 | -2.7012 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
4bit/ShareGPT4V-7B_Pretrained_vit-large336-l12
|
4bit
| 2023-11-23T07:32:24Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"clip_vision_model",
"feature-extraction",
"arxiv:2311.12793",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2023-11-23T07:32:07Z |
---
license: apache-2.0
inference: false
---
# ShareGPT4V Model Card
## Model details
**Model type:**
This is the vision tower of ShareGPT4V-7B fine-tuned with our [ShareGPT4V dataset](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V).
**Model date:**
This vision tower was trained in Nov 2023.
**Paper or resources for more information:**
[[Project](https://ShareGPT4V.github.io/)] [[Paper](https://huggingface.co/papers/2311.12793)] [[Code](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V)]
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Intended use
**Primary intended uses:**
The primary use of this vision tower is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 1.2M high-quality image-text pairs
|
PriyankSisodia/bloom_3B_test20ep_23nov
|
PriyankSisodia
| 2023-11-23T07:15:55Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:bigscience/bloom-3b",
"base_model:adapter:bigscience/bloom-3b",
"region:us"
] | null | 2023-11-23T07:15:50Z |
---
library_name: peft
base_model: bigscience/bloom-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
KMJJJJ/ppo
|
KMJJJJ
| 2023-11-23T07:15:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-23T07:09:50Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **PandaReachDense-v3**
This is a trained model of a **PPO** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is an assumed convention) and load it
checkpoint = load_from_hub(repo_id="KMJJJJ/ppo", filename="ppo-PandaReachDense-v3.zip")
model = PPO.load(checkpoint)
```
|
4bit/ShareGPT4V-7B
|
4bit
| 2023-11-23T07:15:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llava",
"text-generation",
"arxiv:2311.12793",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-11-23T07:09:28Z |
---
inference: false
---
# ShareGPT4V-7B Model Card
## Model details
**Model type:**
ShareGPT4V-7B is an open-source chatbot trained by fine-tuning the CLIP vision tower and LLaMA/Vicuna on GPT4-Vision-assisted [ShareGPT4V](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) data and LLaVA instruction-tuning data.
**Model date:**
ShareGPT4V-7B was trained in Nov 2023.
**Paper or resources for more information:**
[[Project](https://ShareGPT4V.github.io/)] [[Paper](https://huggingface.co/papers/2311.12793)] [[Code](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V)]
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Intended use
**Primary intended uses:**
The primary use of ShareGPT4V-7B is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 1.2M high-quality image-text pairs, i.e., ShareGPT4V-PT data
- 100K GPT4-Vision-generated image-text pairs
- LLaVA instruction-tuning data
## Evaluation dataset
A collection of 11 benchmarks
|
KhateDai/NFT
|
KhateDai
| 2023-11-23T07:10:59Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-23T04:52:14Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
model-index:
- name: NFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NFT
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
trng1305/layoutlmv2-sroie-test
|
trng1305
| 2023-11-23T07:06:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-23T06:35:22Z |
---
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-sroie-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-sroie-test
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0312
- Address: {'precision': 0.9915751850906306, 'recall': 0.9941131302789864, 'f1': 0.992842535787321, 'number': 3907}
- Company: {'precision': 0.966491458607096, 'recall': 0.9865861837692823, 'f1': 0.9764354463989379, 'number': 1491}
- Date: {'precision': 1.0, 'recall': 0.985981308411215, 'f1': 0.9929411764705882, 'number': 428}
- Total: {'precision': 0.8783068783068783, 'recall': 0.894878706199461, 'f1': 0.8865153538050735, 'number': 371}
- Overall Precision: 0.9792
- Overall Recall: 0.9858
- Overall F1: 0.9825
- Overall Accuracy: 0.9947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Address | Company | Date | Total | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3558 | 1.0 | 40 | 0.0698 | {'precision': 0.9855256475368207, 'recall': 0.9933452777066804, 'f1': 0.9894200127469727, 'number': 3907} | {'precision': 0.8677685950413223, 'recall': 0.9859154929577465, 'f1': 0.9230769230769231, 'number': 1491} | {'precision': 0.8384458077709611, 'recall': 0.9579439252336449, 'f1': 0.8942202835332607, 'number': 428} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 371} | 0.9412 | 0.9296 | 0.9354 | 0.9808 |
| 0.0489 | 2.0 | 80 | 0.0374 | {'precision': 0.9917547023962896, 'recall': 0.9851548502687484, 'f1': 0.9884437596302004, 'number': 3907} | {'precision': 0.92625, 'recall': 0.993963782696177, 'f1': 0.9589129731478485, 'number': 1491} | {'precision': 0.9699074074074074, 'recall': 0.9789719626168224, 'f1': 0.9744186046511628, 'number': 428} | {'precision': 0.7655367231638418, 'recall': 0.7304582210242587, 'f1': 0.7475862068965516, 'number': 371} | 0.9607 | 0.9716 | 0.9661 | 0.9899 |
| 0.0282 | 3.0 | 120 | 0.0277 | {'precision': 0.9913198876691346, 'recall': 0.993857179421551, 'f1': 0.9925869120654397, 'number': 3907} | {'precision': 0.9633986928104575, 'recall': 0.98859825620389, 'f1': 0.9758358159549818, 'number': 1491} | {'precision': 0.9929078014184397, 'recall': 0.9813084112149533, 'f1': 0.9870740305522915, 'number': 428} | {'precision': 0.8376068376068376, 'recall': 0.7924528301886793, 'f1': 0.8144044321329641, 'number': 371} | 0.9759 | 0.9797 | 0.9778 | 0.9933 |
| 0.0194 | 4.0 | 160 | 0.0259 | {'precision': 0.990063694267516, 'recall': 0.9946250319938572, 'f1': 0.9923391215526048, 'number': 3907} | {'precision': 0.9754152823920266, 'recall': 0.9845741113346748, 'f1': 0.9799732977303072, 'number': 1491} | {'precision': 0.9929245283018868, 'recall': 0.9836448598130841, 'f1': 0.9882629107981221, 'number': 428} | {'precision': 0.8320209973753281, 'recall': 0.8544474393530997, 'f1': 0.8430851063829787, 'number': 371} | 0.9771 | 0.9831 | 0.9801 | 0.9939 |
| 0.0148 | 5.0 | 200 | 0.0259 | {'precision': 0.990316004077472, 'recall': 0.9946250319938572, 'f1': 0.9924658408887755, 'number': 3907} | {'precision': 0.9597141000649773, 'recall': 0.9906103286384976, 'f1': 0.9749174917491749, 'number': 1491} | {'precision': 0.9952941176470588, 'recall': 0.9883177570093458, 'f1': 0.9917936694021102, 'number': 428} | {'precision': 0.8621621621621621, 'recall': 0.8598382749326146, 'f1': 0.8609986504723346, 'number': 371} | 0.9756 | 0.9852 | 0.9803 | 0.9940 |
| 0.0113 | 6.0 | 240 | 0.0255 | {'precision': 0.9910714285714286, 'recall': 0.9943690811364219, 'f1': 0.9927175162897662, 'number': 3907} | {'precision': 0.9659239842726082, 'recall': 0.98859825620389, 'f1': 0.9771295989393438, 'number': 1491} | {'precision': 0.9976415094339622, 'recall': 0.9883177570093458, 'f1': 0.9929577464788731, 'number': 428} | {'precision': 0.9008746355685131, 'recall': 0.8328840970350404, 'f1': 0.8655462184873949, 'number': 371} | 0.9804 | 0.9829 | 0.9816 | 0.9944 |
| 0.0094 | 7.0 | 280 | 0.0267 | {'precision': 0.9908233494774408, 'recall': 0.9948809828512926, 'f1': 0.9928480204342274, 'number': 3907} | {'precision': 0.9627450980392157, 'recall': 0.9879275653923542, 'f1': 0.9751737835153922, 'number': 1491} | {'precision': 0.9952830188679245, 'recall': 0.985981308411215, 'f1': 0.9906103286384976, 'number': 428} | {'precision': 0.8787878787878788, 'recall': 0.8598382749326146, 'f1': 0.8692098092643051, 'number': 371} | 0.9777 | 0.9845 | 0.9811 | 0.9942 |
| 0.0082 | 8.0 | 320 | 0.0274 | {'precision': 0.9915751850906306, 'recall': 0.9941131302789864, 'f1': 0.992842535787321, 'number': 3907} | {'precision': 0.9671268902038133, 'recall': 0.9865861837692823, 'f1': 0.9767596281540504, 'number': 1491} | {'precision': 0.9929577464788732, 'recall': 0.9883177570093458, 'f1': 0.990632318501171, 'number': 428} | {'precision': 0.8898071625344353, 'recall': 0.8706199460916442, 'f1': 0.880108991825613, 'number': 371} | 0.9798 | 0.9845 | 0.9821 | 0.9946 |
| 0.0069 | 9.0 | 360 | 0.0273 | {'precision': 0.9915751850906306, 'recall': 0.9941131302789864, 'f1': 0.992842535787321, 'number': 3907} | {'precision': 0.972203838517538, 'recall': 0.9852448021462106, 'f1': 0.9786808794137242, 'number': 1491} | {'precision': 1.0, 'recall': 0.9813084112149533, 'f1': 0.9905660377358491, 'number': 428} | {'precision': 0.88, 'recall': 0.889487870619946, 'f1': 0.8847184986595173, 'number': 371} | 0.9807 | 0.9848 | 0.9828 | 0.9948 |
| 0.0055 | 10.0 | 400 | 0.0291 | {'precision': 0.9905636317266003, 'recall': 0.9941131302789864, 'f1': 0.9923352069494124, 'number': 3907} | {'precision': 0.9671268902038133, 'recall': 0.9865861837692823, 'f1': 0.9767596281540504, 'number': 1491} | {'precision': 0.9976359338061466, 'recall': 0.985981308411215, 'f1': 0.991774383078731, 'number': 428} | {'precision': 0.9025787965616046, 'recall': 0.8490566037735849, 'f1': 0.875, 'number': 371} | 0.9804 | 0.9831 | 0.9817 | 0.9944 |
| 0.0045 | 11.0 | 440 | 0.0292 | {'precision': 0.9915751850906306, 'recall': 0.9941131302789864, 'f1': 0.992842535787321, 'number': 3907} | {'precision': 0.9696169088507266, 'recall': 0.9845741113346748, 'f1': 0.9770382695507488, 'number': 1491} | {'precision': 1.0, 'recall': 0.985981308411215, 'f1': 0.9929411764705882, 'number': 428} | {'precision': 0.8817204301075269, 'recall': 0.8840970350404312, 'f1': 0.882907133243607, 'number': 371} | 0.9802 | 0.9847 | 0.9825 | 0.9947 |
| 0.0042 | 12.0 | 480 | 0.0310 | {'precision': 0.9913221031138336, 'recall': 0.9941131302789864, 'f1': 0.9927156549520767, 'number': 3907} | {'precision': 0.9683794466403162, 'recall': 0.9859154929577465, 'f1': 0.9770687936191426, 'number': 1491} | {'precision': 1.0, 'recall': 0.9836448598130841, 'f1': 0.9917550058892814, 'number': 428} | {'precision': 0.8763440860215054, 'recall': 0.8787061994609164, 'f1': 0.8775235531628534, 'number': 371} | 0.9795 | 0.9845 | 0.9820 | 0.9945 |
| 0.0038 | 13.0 | 520 | 0.0316 | {'precision': 0.9915751850906306, 'recall': 0.9941131302789864, 'f1': 0.992842535787321, 'number': 3907} | {'precision': 0.9652230971128609, 'recall': 0.9865861837692823, 'f1': 0.9757877280265339, 'number': 1491} | {'precision': 0.9952830188679245, 'recall': 0.985981308411215, 'f1': 0.9906103286384976, 'number': 428} | {'precision': 0.8649350649350649, 'recall': 0.8975741239892183, 'f1': 0.8809523809523809, 'number': 371} | 0.9776 | 0.9860 | 0.9818 | 0.9944 |
| 0.0035 | 14.0 | 560 | 0.0311 | {'precision': 0.9915751850906306, 'recall': 0.9941131302789864, 'f1': 0.992842535787321, 'number': 3907} | {'precision': 0.9658568614576494, 'recall': 0.9865861837692823, 'f1': 0.9761114797611148, 'number': 1491} | {'precision': 1.0, 'recall': 0.985981308411215, 'f1': 0.9929411764705882, 'number': 428} | {'precision': 0.8790322580645161, 'recall': 0.8814016172506739, 'f1': 0.8802153432032301, 'number': 371} | 0.9791 | 0.9850 | 0.9821 | 0.9945 |
| 0.0032 | 15.0 | 600 | 0.0312 | {'precision': 0.9915751850906306, 'recall': 0.9941131302789864, 'f1': 0.992842535787321, 'number': 3907} | {'precision': 0.966491458607096, 'recall': 0.9865861837692823, 'f1': 0.9764354463989379, 'number': 1491} | {'precision': 1.0, 'recall': 0.985981308411215, 'f1': 0.9929411764705882, 'number': 428} | {'precision': 0.8783068783068783, 'recall': 0.894878706199461, 'f1': 0.8865153538050735, 'number': 371} | 0.9792 | 0.9858 | 0.9825 | 0.9947 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
SNV/distilbert-stock-tweet-sentiment-analysis
|
SNV
| 2023-11-23T07:06:38Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-23T06:54:12Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-stock-tweet-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-stock-tweet-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6123
- Accuracy: 0.7775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6977 | 1.0 | 1000 | 0.5668 | 0.7702 |
| 0.4773 | 2.0 | 2000 | 0.6001 | 0.7715 |
| 0.3586 | 3.0 | 3000 | 0.6123 | 0.7775 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
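The card does not include inference code; a minimal hedged sketch using the `transformers` pipeline (label names depend on how the training data was encoded and are not documented here):
```python
from transformers import pipeline

# Loads the fine-tuned checkpoint from the Hub; labels map to the sentiment classes defined at training time.
classifier = pipeline("text-classification", model="SNV/distilbert-stock-tweet-sentiment-analysis")
print(classifier("Shares rallied after the earnings beat."))
```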
|
LoneStriker/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-8.0bpw-h8-exl2
|
LoneStriker
| 2023-11-23T06:49:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-23T06:45:17Z |
---
license: llama2
---
An experiment with DARE (Drop And REscale): most of the delta parameters can be set directly to zero without affecting the capabilities of SFT LMs, and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_stratery: random / scaling_coefficient: 1.0
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
qqplot23/BASE_short
|
qqplot23
| 2023-11-23T06:48:38Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T15:45:45Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: BASE_short
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BASE_short
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3241
- Ppl: 28.7555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 22554
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ppl |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 4.4673 | 1.25 | 4000 | 4.2416 | 71.9939 |
| 3.8603 | 2.5 | 8000 | 3.7253 | 43.0163 |
| 3.638 | 3.75 | 12000 | 3.5322 | 35.4396 |
| 3.5229 | 5.01 | 16000 | 3.4322 | 32.0556 |
| 3.4377 | 6.26 | 20000 | 3.3749 | 30.2611 |
| 3.3972 | 7.51 | 24000 | 3.3411 | 29.2534 |
| 3.3688 | 8.76 | 28000 | 3.3241 | 28.7555 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
jim23/ppo-Huggy
|
jim23
| 2023-11-23T06:48:13Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-11-23T06:48:01Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jim23/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Xorbits/qwen-chat-7B-ggml
|
Xorbits
| 2023-11-23T06:45:00Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-11-23T06:34:57Z |
---
license: apache-2.0
---
## qwen-chat-7B-ggml
This repo contains GGML format model files for qwen-chat-7B.
### Example code
#### Install packages
```bash
pip install "xinference[ggml]>=0.4.3"
pip install qwen-cpp
```
If you want to run with GPU acceleration, refer to [installation](https://github.com/xorbitsai/inference#installation).
#### Start a local instance of Xinference
```bash
xinference -p 9997
```
#### Launch and inference
```python
from xinference.client import Client
client = Client("http://localhost:9997")
model_uid = client.launch_model(
model_name="qwen-chat",
model_format="ggmlv3",
model_size_in_billions=7,
quantization="q4_0",
)
model = client.get_model(model_uid)
chat_history = []
prompt = "最大的动物是什么?"  # "What is the largest animal?"
model.chat(
prompt,
chat_history,
generate_config={"max_tokens": 1024}
)
```
### More information
[Xinference](https://github.com/xorbitsai/inference) lets you replace OpenAI GPT with another LLM in your app
by changing a single line of code. Xinference gives you the freedom to use any LLM you need.
With Xinference, you can run inference with any open-source language model,
speech recognition model, or multimodal model, whether in the cloud, on-premises, or even on your laptop.
<i><a href="https://join.slack.com/t/xorbitsio/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA">👉 Join our Slack community!</a></i>
|
SaGuenter/whisper-large-v2-NSC_Korpora_7-100steps
|
SaGuenter
| 2023-11-23T06:35:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | 2023-11-23T06:35:33Z |
---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch that reproduces it follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
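A minimal sketch of reproducing this setup when loading the base model and attaching the adapter; the card gives no usage code, so the model class and repo IDs below are assumptions based on its metadata:
```python
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Mirror the 8-bit settings listed above (remaining fields are library defaults).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "SaGuenter/whisper-large-v2-NSC_Korpora_7-100steps")
```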
### Framework versions
- PEFT 0.6.3.dev0
|
EdithKim/a2c
|
EdithKim
| 2023-11-23T06:34:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-23T06:28:41Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; adjust it to the checkpoint stored in this repo.
checkpoint = load_from_hub(repo_id="EdithKim/a2c", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
soarhigh/a2c
|
soarhigh
| 2023-11-23T06:34:33Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-23T06:28:25Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.28 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; adjust it to the checkpoint stored in this repo.
checkpoint = load_from_hub(repo_id="soarhigh/a2c", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Leslie123/pegasus-claude-training
|
Leslie123
| 2023-11-23T06:23:05Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-xsum",
"base_model:finetune:google/pegasus-xsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-23T04:19:41Z |
---
base_model: google/pegasus-xsum
tags:
- generated_from_trainer
model-index:
- name: pegasus-claude-training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-claude-training
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8418 | 1.0 | 4261 | 1.3951 |
| 1.253 | 2.0 | 8522 | 1.3563 |
| 1.1643 | 3.0 | 12783 | 1.3502 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ramenparty247/ppo-LunarLander-v2
|
ramenparty247
| 2023-11-23T06:15:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-23T04:32:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.72 +/- 13.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; adjust it to the checkpoint stored in this repo.
checkpoint = load_from_hub(repo_id="ramenparty247/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
devagonal/mt5-heritage-2
|
devagonal
| 2023-11-23T05:51:34Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-23T03:54:01Z |
---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
model-index:
- name: mt5-heritage-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-heritage-2
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sujith013/whisper-small-tamil
|
sujith013
| 2023-11-23T05:49:25Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"Tamil-ASR",
"generated_from_trainer",
"ta",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T04:19:53Z |
---
language:
- ta
license: apache-2.0
base_model: openai/whisper-small
tags:
- Tamil-ASR
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper-Small-Tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Small-Tamil
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0128
- Wer: 17.0413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1713 | 4.39 | 250 | 0.0533 | 28.5519 |
| 0.0291 | 8.77 | 500 | 0.0128 | 17.0413 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
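The card does not include inference code; a minimal hedged sketch using the `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sujith013/whisper-small-tamil")
# Replace with a path to your own audio file; decoding defaults come from the model's generation config.
result = asr("sample_tamil_clip.wav")
print(result["text"])
```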
|
tuanio/w2v2_ablation_focal_ctc_a0.5_g1.0-best_on-ling_head-tp0.025_tl10_fp0.001_fl16
|
tuanio
| 2023-11-23T05:30:15Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-23T03:25:35Z |
---
license: cc-by-nc-4.0
base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v2_ablation_focal_ctc_a0.5_g1.0-best_on-ling_head-tp0.025_tl10_fp0.001_fl16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2_ablation_focal_ctc_a0.5_g1.0-best_on-ling_head-tp0.025_tl10_fp0.001_fl16
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9952
- Wer: 0.0908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 891.4405 | 0.94 | 100 | 581.3978 | 18.6410 |
| 615.8164 | 1.89 | 200 | 221.5820 | 17.0065 |
| 105.0527 | 2.83 | 300 | 43.9285 | 1.0 |
| 56.2539 | 3.77 | 400 | 40.2262 | 1.0 |
| 51.7117 | 4.72 | 500 | 38.2334 | 1.0 |
| 49.7296 | 5.66 | 600 | 37.4374 | 1.0 |
| 49.0593 | 6.6 | 700 | 36.8541 | 1.0 |
| 48.6631 | 7.55 | 800 | 36.4298 | 1.0 |
| 47.483 | 8.49 | 900 | 36.3610 | 1.0 |
| 46.5326 | 9.43 | 1000 | 34.7439 | 0.9656 |
| 39.0329 | 10.38 | 1100 | 19.4442 | 0.5706 |
| 22.0857 | 11.32 | 1200 | 8.4938 | 0.2356 |
| 14.0187 | 12.26 | 1300 | 5.6815 | 0.1756 |
| 10.601 | 13.21 | 1400 | 4.4978 | 0.1478 |
| 9.0735 | 14.15 | 1500 | 3.8777 | 0.1386 |
| 7.449 | 15.09 | 1600 | 3.3361 | 0.1255 |
| 6.8473 | 16.04 | 1700 | 3.1257 | 0.1285 |
| 6.3913 | 16.98 | 1800 | 2.9602 | 0.1233 |
| 5.8235 | 17.92 | 1900 | 2.6843 | 0.1152 |
| 5.8092 | 18.87 | 2000 | 2.5891 | 0.1091 |
| 5.5489 | 19.81 | 2100 | 2.6685 | 0.1283 |
| 5.4259 | 20.75 | 2200 | 2.6268 | 0.1195 |
| 4.9683 | 21.7 | 2300 | 2.4970 | 0.1146 |
| 4.8524 | 22.64 | 2400 | 2.4337 | 0.1124 |
| 4.8404 | 23.58 | 2500 | 2.3632 | 0.1018 |
| 4.3451 | 24.53 | 2600 | 2.3354 | 0.0964 |
| 4.3297 | 25.47 | 2700 | 2.2977 | 0.1017 |
| 4.0442 | 26.42 | 2800 | 2.3116 | 0.1115 |
| 3.7571 | 27.36 | 2900 | 2.2637 | 0.1078 |
| 3.7335 | 28.3 | 3000 | 2.2070 | 0.1031 |
| 3.736 | 29.25 | 3100 | 2.2637 | 0.0992 |
| 3.7796 | 30.19 | 3200 | 2.2364 | 0.1012 |
| 3.7623 | 31.13 | 3300 | 2.1827 | 0.0983 |
| 3.2842 | 32.08 | 3400 | 2.1322 | 0.1073 |
| 3.4898 | 33.02 | 3500 | 2.0692 | 0.0999 |
| 3.453 | 33.96 | 3600 | 2.0662 | 0.0958 |
| 3.1855 | 34.91 | 3700 | 2.1000 | 0.0908 |
| 3.1468 | 35.85 | 3800 | 2.0887 | 0.0948 |
| 2.9984 | 36.79 | 3900 | 2.0589 | 0.0961 |
| 3.215 | 37.74 | 4000 | 2.0436 | 0.0958 |
| 3.2076 | 38.68 | 4100 | 2.0969 | 0.0978 |
| 2.8793 | 39.62 | 4200 | 2.0420 | 0.0939 |
| 2.9688 | 40.57 | 4300 | 2.0713 | 0.0900 |
| 2.9882 | 41.51 | 4400 | 2.0373 | 0.0940 |
| 3.12 | 42.45 | 4500 | 2.0513 | 0.1008 |
| 2.7528 | 43.4 | 4600 | 2.0500 | 0.0960 |
| 2.441 | 44.34 | 4700 | 2.0692 | 0.0943 |
| 2.6396 | 45.28 | 4800 | 2.0387 | 0.0904 |
| 2.5982 | 46.23 | 4900 | 2.0974 | 0.0975 |
| 2.574 | 47.17 | 5000 | 2.0484 | 0.0933 |
| 2.3482 | 48.11 | 5100 | 2.0370 | 0.0981 |
| 2.4587 | 49.06 | 5200 | 2.0412 | 0.1032 |
| 2.3123 | 50.0 | 5300 | 2.0249 | 0.1020 |
| 2.27 | 50.94 | 5400 | 2.0079 | 0.0909 |
| 2.3862 | 51.89 | 5500 | 2.0595 | 0.0910 |
| 2.4499 | 52.83 | 5600 | 2.0382 | 0.0948 |
| 2.4291 | 53.77 | 5700 | 2.0174 | 0.0926 |
| 2.1468 | 54.72 | 5800 | 2.0347 | 0.0939 |
| 2.1434 | 55.66 | 5900 | 2.0004 | 0.0963 |
| 2.1786 | 56.6 | 6000 | 1.9845 | 0.0878 |
| 2.22 | 57.55 | 6100 | 1.9827 | 0.0880 |
| 2.0233 | 58.49 | 6200 | 1.9880 | 0.0923 |
| 2.1476 | 59.43 | 6300 | 1.9856 | 0.0852 |
| 1.9682 | 60.38 | 6400 | 2.0001 | 0.0838 |
| 2.2104 | 61.32 | 6500 | 2.0052 | 0.0885 |
| 2.1225 | 62.26 | 6600 | 1.9984 | 0.0856 |
| 2.1791 | 63.21 | 6700 | 1.9606 | 0.0838 |
| 2.1231 | 64.15 | 6800 | 1.9905 | 0.0917 |
| 2.0084 | 65.09 | 6900 | 1.9866 | 0.0921 |
| 2.0541 | 66.04 | 7000 | 1.9948 | 0.0933 |
| 1.9073 | 66.98 | 7100 | 1.9885 | 0.0903 |
| 1.9308 | 67.92 | 7200 | 2.0064 | 0.0919 |
| 2.1946 | 68.87 | 7300 | 1.9828 | 0.0916 |
| 1.9435 | 69.81 | 7400 | 1.9889 | 0.0928 |
| 1.8279 | 70.75 | 7500 | 1.9959 | 0.0911 |
| 1.7645 | 71.7 | 7600 | 2.0134 | 0.0929 |
| 1.6908 | 72.64 | 7700 | 2.0119 | 0.0913 |
| 1.7531 | 73.58 | 7800 | 1.9963 | 0.0879 |
| 1.6314 | 74.53 | 7900 | 1.9854 | 0.0915 |
| 1.7651 | 75.47 | 8000 | 1.9984 | 0.0920 |
| 1.8407 | 76.42 | 8100 | 1.9793 | 0.0903 |
| 1.8132 | 77.36 | 8200 | 2.0208 | 0.0912 |
| 1.6622 | 78.3 | 8300 | 2.0106 | 0.0906 |
| 2.1048 | 79.25 | 8400 | 1.9989 | 0.0915 |
| 1.7944 | 80.19 | 8500 | 1.9980 | 0.0913 |
| 1.8029 | 81.13 | 8600 | 1.9870 | 0.0897 |
| 1.8474 | 82.08 | 8700 | 1.9901 | 0.0890 |
| 1.5574 | 83.02 | 8800 | 1.9952 | 0.0905 |
| 1.5757 | 83.96 | 8900 | 1.9982 | 0.0907 |
| 1.6461 | 84.91 | 9000 | 1.9858 | 0.0900 |
| 1.7695 | 85.85 | 9100 | 1.9991 | 0.0905 |
| 1.6583 | 86.79 | 9200 | 2.0011 | 0.0902 |
| 1.7586 | 87.74 | 9300 | 1.9869 | 0.0911 |
| 1.7142 | 88.68 | 9400 | 1.9956 | 0.0888 |
| 1.7371 | 89.62 | 9500 | 1.9968 | 0.0888 |
| 1.6964 | 90.57 | 9600 | 1.9958 | 0.0892 |
| 1.7224 | 91.51 | 9700 | 1.9947 | 0.0891 |
| 1.8655 | 92.45 | 9800 | 1.9976 | 0.0908 |
| 1.6929 | 93.4 | 9900 | 1.9984 | 0.0909 |
| 1.6306 | 94.34 | 10000 | 2.0012 | 0.0911 |
| 1.7218 | 95.28 | 10100 | 2.0010 | 0.0913 |
| 1.7019 | 96.23 | 10200 | 1.9977 | 0.0908 |
| 1.902 | 97.17 | 10300 | 1.9989 | 0.0908 |
| 1.7555 | 98.11 | 10400 | 1.9964 | 0.0909 |
| 1.5272 | 99.06 | 10500 | 1.9957 | 0.0906 |
| 1.8033 | 100.0 | 10600 | 1.9952 | 0.0908 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
|
StevenPerrin/ppo-SnowballTarget
|
StevenPerrin
| 2023-11-23T05:26:44Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-11-23T05:26:36Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: StevenPerrin/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ketak-ZoomRx/fb_trial
|
Ketak-ZoomRx
| 2023-11-23T05:24:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-11-23T05:23:38Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Ketak-ZoomRx/fb_trial",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Ketak-ZoomRx/fb_trial",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Ketak-ZoomRx/fb_trial",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Ketak-ZoomRx/fb_trial" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```, as in the sketch below.
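A minimal sketch of that, treated as an illustration rather than a tested configuration (4-bit loading requires the `bitsandbytes` package and a recent `transformers` version):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ketak-ZoomRx/fb_trial", use_fast=True)
# device_map="auto" shards the layers across all visible GPUs; swap load_in_4bit for load_in_8bit if preferred.
model = AutoModelForCausalLM.from_pretrained(
    "Ketak-ZoomRx/fb_trial",
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True,
)
```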
## Model Architecture
```
OPTForCausalLM(
(model): OPTModel(
(decoder): OPTDecoder(
(embed_tokens): Embedding(50272, 2560, padding_idx=1)
(embed_positions): OPTLearnedPositionalEmbedding(2050, 2560)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(layers): ModuleList(
(0-31): 32 x OPTDecoderLayer(
(self_attn): OPTAttention(
(k_proj): Linear(in_features=2560, out_features=2560, bias=True)
(v_proj): Linear(in_features=2560, out_features=2560, bias=True)
(q_proj): Linear(in_features=2560, out_features=2560, bias=True)
(out_proj): Linear(in_features=2560, out_features=2560, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=2560, out_features=10240, bias=True)
(fc2): Linear(in_features=10240, out_features=2560, bias=True)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
)
)
)
(lm_head): Linear(in_features=2560, out_features=50272, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
livingbox/minimalist-style-v2
|
livingbox
| 2023-11-23T05:20:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-23T05:16:42Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### minimalist_style_v2 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
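The card does not include inference code; a hedged sketch assuming the repo loads with diffusers' `StableDiffusionPipeline` (the exact trigger phrase for the concept is not documented, so the prompt below is a guess):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "livingbox/minimalist-style-v2", torch_dtype=torch.float16
).to("cuda")

image = pipe("a bright living room, minimalist_style_v2 interior").images[0]
image.save("minimalist_room.png")
```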
Sample pictures of this concept:
|
varun3dec/sd-class-butterflies-32
|
varun3dec
| 2023-11-23T05:18:59Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-11-23T05:18:17Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('varun3dec/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
joedonino/zephyr-7b-radia-html-events-v4
|
joedonino
| 2023-11-23T05:16:46Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2023-11-23T05:16:36Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch that reproduces it follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
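A minimal sketch of loading the adapter on top of the base model with the 4-bit settings above; the card provides no usage code, so treat this as an assumption based on its metadata:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reproduce the NF4 double-quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "joedonino/zephyr-7b-radia-html-events-v4")
```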
### Framework versions
- PEFT 0.6.3.dev0
|
trungkienbkhn/faster-whisper-large-v3
|
trungkienbkhn
| 2023-11-23T04:45:42Z | 8 | 2 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"yue",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T04:11:42Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
- yue
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v3 model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v3")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large-v3 --output_dir faster-whisper-large-v3 \
--copy_files tokenizer.json preprocessor_config.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
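For example, a minimal sketch of overriding the compute type at load time (the chosen value is illustrative):
```python
from faster_whisper import WhisperModel

# Run on GPU with mixed int8/float16 computation instead of the stored FP16 weights.
model = WhisperModel("large-v3", device="cuda", compute_type="int8_float16")
```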
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v3).**
|
habanoz/phi-1_5-lr-5-3epch-airoboros3.1-1k-instruct-V1
|
habanoz
| 2023-11-23T04:44:29Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"en",
"dataset:habanoz/airoboros-3.1-no-mathjson-max-1k",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-23T01:21:28Z |
---
license: apache-2.0
datasets:
- habanoz/airoboros-3.1-no-mathjson-max-1k
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model: microsoft/phi-1_5
---
Phi-1.5 fine-tuned on airoboros-3.1-no-mathjson-max-1k (a subset of airoboros-3.1) using QLoRA.
**train metrics**
- epoch = 3.0
- train_loss = 1.1384
- train_runtime = 5:25:54.30
- train_samples_per_second = 3.065
- train_steps_per_second = 0.191
**eval metrics**
- epoch = 3.0
- eval_loss = 0.8639
- eval_runtime = 0:00:26.59
- eval_samples_per_second = 7.596
- eval_steps_per_second = 1.918
SFT code: https://github.com/habanoz/qlora.git
command:
```bash
accelerate launch $BASE_DIR/qlora/train.py \
--model_name_or_path $BASE_MODEL \
--working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \
--output_dir $BASE_DIR/$OUTPUT_NAME-peft \
--merged_output_dir $BASE_DIR/$OUTPUT_NAME \
--final_output_dir $BASE_DIR/$OUTPUT_NAME-final \
--num_train_epochs 3 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 120 \
--save_total_limit 2 \
--data_seed 11422 \
--evaluation_strategy steps \
--per_device_eval_batch_size 4 \
--eval_dataset_size 0.01 \
--eval_steps 120 \
--max_new_tokens 1024 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--do_train \
--do_eval \
--lora_r 64 \
--lora_alpha 16 \
--lora_modules all \
--bits 4 \
--double_quant \
--quant_type nf4 \
--lr_scheduler_type constant \
--dataset habanoz/airoboros-3.1-no-mathjson-max-1k \
--dataset_format airoboros_chat \
--model_max_len 1024 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 16 \
--learning_rate 1e-5 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--lora_dropout 0.0 \
--weight_decay 0.0 \
--seed 11422 \
--gradient_checkpointing False \
--use_flash_attention_2 \
--ddp_find_unused_parameters False \
--trust_remote_code True
```
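The card does not include inference code; a minimal hedged sketch for the merged model (the prompt format is an assumption — training used the airoboros chat format, so a matching template may give better results):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "habanoz/phi-1_5-lr-5-3epch-airoboros3.1-1k-instruct-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Explain QLoRA in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```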
|
voicexpai/distilbert_Pizza_Intent
|
voicexpai
| 2023-11-23T04:44:20Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-22T17:22:16Z |
---
pipeline_tag: text-classification
---
|
afrideva/Tiny-Vicuna-1B-GGUF
|
afrideva
| 2023-11-23T04:35:25Z | 49,797 | 4 | null |
[
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"base_model:Jiayi-Pan/Tiny-Vicuna-1B",
"base_model:quantized:Jiayi-Pan/Tiny-Vicuna-1B",
"region:us"
] |
text-generation
| 2023-11-23T04:17:26Z |
---
base_model: Jiayi-Pan/Tiny-Vicuna-1B
inference: false
model_creator: Jiayi-Pan
model_name: Tiny-Vicuna-1B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# Jiayi-Pan/Tiny-Vicuna-1B-GGUF
Quantized GGUF model files for [Tiny-Vicuna-1B](https://huggingface.co/Jiayi-Pan/Tiny-Vicuna-1B) from [Jiayi-Pan](https://huggingface.co/Jiayi-Pan)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tiny-vicuna-1b.q2_k.gguf](https://huggingface.co/afrideva/Tiny-Vicuna-1B-GGUF/resolve/main/tiny-vicuna-1b.q2_k.gguf) | q2_k | 482.14 MB |
| [tiny-vicuna-1b.q3_k_m.gguf](https://huggingface.co/afrideva/Tiny-Vicuna-1B-GGUF/resolve/main/tiny-vicuna-1b.q3_k_m.gguf) | q3_k_m | 549.85 MB |
| [tiny-vicuna-1b.q4_k_m.gguf](https://huggingface.co/afrideva/Tiny-Vicuna-1B-GGUF/resolve/main/tiny-vicuna-1b.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tiny-vicuna-1b.q5_k_m.gguf](https://huggingface.co/afrideva/Tiny-Vicuna-1B-GGUF/resolve/main/tiny-vicuna-1b.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tiny-vicuna-1b.q6_k.gguf](https://huggingface.co/afrideva/Tiny-Vicuna-1B-GGUF/resolve/main/tiny-vicuna-1b.q6_k.gguf) | q6_k | 903.41 MB |
| [tiny-vicuna-1b.q8_0.gguf](https://huggingface.co/afrideva/Tiny-Vicuna-1B-GGUF/resolve/main/tiny-vicuna-1b.q8_0.gguf) | q8_0 | 1.17 GB |
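A minimal usage sketch with `llama-cpp-python`, assuming one of the files above has been downloaded locally (the Vicuna-style prompt format is carried over from the base model and is an assumption):
```python
from llama_cpp import Llama

# Point model_path at whichever quantized file you downloaded.
llm = Llama(model_path="./tiny-vicuna-1b.q4_k_m.gguf", n_ctx=2048)

prompt = "USER: What is the capital of France? ASSISTANT:"
out = llm(prompt, max_tokens=64, stop=["USER:"])
print(out["choices"][0]["text"])
```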
## Original Model Card:
# Tiny Vicuna 1B
TinyLlama 1.1B fine-tuned with the WizardVicuna dataset.
Easy to iterate on for early experiments!
|
Howard001/roberta-large-lora-token-classification
|
Howard001
| 2023-11-23T04:21:23Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:FacebookAI/roberta-large",
"base_model:adapter:FacebookAI/roberta-large",
"region:us"
] | null | 2023-11-23T04:21:16Z |
---
library_name: peft
base_model: roberta-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
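In the absence of author-provided code, here is a hedged sketch of loading the adapter with `peft`. It assumes, from the repository name and the `roberta-large` base model recorded in the card metadata, a token-classification adapter; the label set is undocumented, so `num_labels` below is a placeholder.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel

# Assumptions: roberta-large base (per the card metadata) and a token-classification
# head. num_labels is a placeholder and may need to match the adapter's saved head.
base = AutoModelForTokenClassification.from_pretrained("roberta-large", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("roberta-large")

model = PeftModel.from_pretrained(base, "Howard001/roberta-large-lora-token-classification")
model.eval()
```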
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Faliu/Taxi-v3
|
Faliu
| 2023-11-23T04:14:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-23T04:14:54Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your environment

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course,
# assumed to be defined in your notebook or script.
model = load_from_hub(repo_id="Faliu/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
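A hedged sketch of acting greedily with the downloaded Q-table. The `"qtable"` key follows the Deep RL course convention and the loop assumes the Gymnasium-style five-value `step()` API; both are assumptions about this particular pickle and environment version.

```python
import numpy as np

qtable = model["qtable"]  # assumption: Q-table stored under "qtable" as in the course helper

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```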
|
kyujinpy/ko-platypus-kiwi-13B
|
kyujinpy
| 2023-11-23T04:09:32Z | 2,288 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOR-Orca-Platypus-kiwi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-14T12:10:14Z |
---
language:
- ko
datasets:
- kyujinpy/KOR-Orca-Platypus-kiwi
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**
**The license is `cc-by-nc-sa-4.0`.**
# **KOR-Orca-Platypus-kiwi🥝**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Model Architecture**
ko-platypus-kiwi-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
I used [kyujinpy/KOR-Orca-Platypus-kiwi](https://huggingface.co/datasets/kyujinpy/KOR-Orca-Platypus-kiwi).
# Model comparisons
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| **ko-platypus-kiwi-13B🥝** | 48.97 | 42.41 | 54.29 | 41.98 | 40.05 | **66.12** |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/ko-platypus-kiwi-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
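A hedged follow-up showing one way to generate with the loaded model; the prompt and sampling settings are illustrative, since the card does not state a prompt template.

```python
prompt = "한국의 수도는 어디인가요?"  # illustrative prompt; no template is documented in the card

inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(
    **inputs,
    max_new_tokens=128,  # illustrative generation settings
    do_sample=True,
    temperature=0.7,
)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```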
---
|
yesj1234/en_xlsr_100p_run1
|
yesj1234
| 2023-11-23T04:00:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"./train_dataset.py",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-23T03:57:59Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- ./train_dataset.py
- generated_from_trainer
model-index:
- name: en_xlsr_100p_run1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_xlsr_100p_run1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the dataset loaded by `./train_dataset.py` (no dataset name was recorded).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10
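A hedged `TrainingArguments` sketch mirroring the values listed above. The multi-GPU values (number of devices, total batch sizes) come from the launcher rather than from arguments, and `output_dir` plus any option not listed are assumptions.

```python
from transformers import TrainingArguments

# Mirrors the listed hyperparameters; unlisted options keep their defaults.
training_args = TrainingArguments(
    output_dir="en_xlsr_100p_run1",  # assumption
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    warmup_steps=2000,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    seed=42,
)
```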
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|