The dataset has the following columns:
- modelId: string (length 5 to 139)
- author: string (length 2 to 42)
- last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-12 18:33:19)
- downloads: int64 (0 to 223M)
- likes: int64 (0 to 11.7k)
- library_name: string (555 classes)
- tags: list (length 1 to 4.05k)
- pipeline_tag: string (55 classes)
- createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-12 18:33:14)
- card: string (length 11 to 1.01M)
shreyasharma/t5-small-ret-conceptnet2
|
shreyasharma
| 2022-12-09T20:26:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-28T08:04:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: t5-small-ret-conceptnet2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-ret-conceptnet2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Accuracy: 0.8701
- Precision: 0.8113
- Recall: 0.9645
- F1: 0.8813
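The card does not document the input format used for fine-tuning; below is a minimal loading sketch with the standard Transformers seq2seq classes, where the prompt is only a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("shreyasharma/t5-small-ret-conceptnet2")
model = AutoModelForSeq2SeqLM.from_pretrained("shreyasharma/t5-small-ret-conceptnet2")

# Placeholder input; the prompt format used during fine-tuning is not documented in this card
inputs = tokenizer("dog is a type of animal", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```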
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1989        | 1.0   | 721  | 0.1709          | 0.8701   | 0.8113    | 0.9645 | 0.8813 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
graydient/diffusers-mattthew-technicolor-50s-diffusion
|
graydient
| 2022-12-09T20:12:14Z | 3 | 1 |
diffusers
|
[
"diffusers",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-09T19:31:04Z |
---
license: cc-by-sa-4.0
---
# 🌈 Diffusers Adaptation: Technicolor-50s Diffusion
## Style Description
- This is a port of [Mattthew's excellent Technicolor 50s Diffusion](https://huggingface.co/mattthew/technicolor-50s-diffusion/tree/main) model to Huggingface Diffusers.
- Please see the original model card for details; the style features highly saturated, postcard-like colors, flat high-key lighting, strong rim lighting, and 1940s/50s lifestyle subjects. A loading sketch with Diffusers follows below.
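A minimal loading sketch with Diffusers, assuming the repository follows the standard `StableDiffusionPipeline` layout indicated by its tags; the prompt is illustrative only:
```python
from diffusers import StableDiffusionPipeline

# Load the ported checkpoint from the Hub (add torch_dtype=torch.float16 and .to("cuda") on a GPU)
pipe = StableDiffusionPipeline.from_pretrained("graydient/diffusers-mattthew-technicolor-50s-diffusion")

# Illustrative prompt; see the original model card for the trigger token and prompt tips
image = pipe("a 1950s family road trip, technicolor style").images[0]
image.save("technicolor_road_trip.png")
```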
|
Cbdlt/unit1-LunarLander-1
|
Cbdlt
| 2022-12-09T20:00:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T19:59:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.72 +/- 20.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with `huggingface_sb3` (the checkpoint filename inside the repo is an assumption; check the repo's files if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
# (the filename below is an assumption)
checkpoint = load_from_hub("Cbdlt/unit1-LunarLander-1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rakeshjohny/PPO_LunarLanderV2
|
rakeshjohny
| 2022-12-09T19:51:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T19:50:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 230.53 +/- 18.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with `huggingface_sb3` (the checkpoint filename inside the repo is an assumption; check the repo's files if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
# (the filename below is an assumption)
checkpoint = load_from_hub("rakeshjohny/PPO_LunarLanderV2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Alexao/whisper-small-swe2
|
Alexao
| 2022-12-09T19:24:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"swe",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-09T19:11:59Z |
---
language:
- swe
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small swe - Swedish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small swe - Swedish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
FCameCode/whisper-tiny-it-8
|
FCameCode
| 2022-12-09T19:08:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T15:50:09Z |
---
language:
- it
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny it 8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: it
split: test[:10%]
args: 'config: it, split: test'
metrics:
- name: Wer
type: wer
value: 97.56655574043262
---
# Whisper Tiny it 8
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.011502
- Wer: 56.905158
## Model description
This model is the OpenAI Whisper small transformer adapted for Italian audio-to-text transcription.
During hyperparameter tuning, weight decay was set to 0.1; attention dropout, encoder dropout, and decoder dropout were set to 0.1;
the learning rate was set to 1e-5; and the number of decoder and encoder attention heads was set to 8.
## Intended uses & limitations
The model is available through its [HuggingFace web app](https://huggingface.co/spaces/GIanlucaRub/whisper-it)
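For local use, a minimal transcription sketch with the Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub; chunking handles clips longer than 30 s
asr = pipeline("automatic-speech-recognition", model="FCameCode/whisper-tiny-it-8", chunk_length_s=30)

# Transcribe an Italian audio file (placeholder path)
print(asr("sample_italian.wav")["text"])
```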
## Training and evaluation data
The training data is the first 10% of the train and validation splits of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from the Mozilla Foundation.
The evaluation data is the first 10% of the test split of Italian Common Voice.
## Training procedure
After loading the pre-trained model, it was fine-tuned on the dataset described above.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.525800 | 3.82 | 3000 | 1.011502 | 56.905158 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
HusseinHE/h_sks_hxica
|
HusseinHE
| 2022-12-09T18:59:37Z | 0 | 0 | null |
[
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-12-09T03:36:40Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: hsksk
---
|
romc57/PPO_LunarLanderV2
|
romc57
| 2022-12-09T18:28:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T18:28:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.65 +/- 16.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with `huggingface_sb3` (the checkpoint filename inside the repo is an assumption; check the repo's files if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
# (the filename below is an assumption)
checkpoint = load_from_hub("romc57/PPO_LunarLanderV2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tripplyons/flan-t5-base-xsum
|
tripplyons
| 2022-12-09T18:23:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-12-05T02:21:16Z |
---
license: apache-2.0
---
# google/flan-t5-base finetuned on xsum using LoRA with adapter-transformers
## Usage
Use the original flan-t5-base tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("tripplyons/flan-t5-base-xsum")
input_text = "summarize: The ex-Reading defender denied fraudulent trading charges relating to the Sodje Sports Foundation - a charity to raise money for Nigerian sport. Mr Sodje, 37, is jointly charged with elder brothers Efe, 44, Bright, 50 and Stephen, 42. Appearing at the Old Bailey earlier, all four denied the offence. The charge relates to offences which allegedly took place between 2008 and 2014. Sam, from Kent, Efe and Bright, of Greater Manchester, and Stephen, from Bexley, are due to stand trial in July. They were all released on bail."
input_ids = tokenizer([input_text], max_length=512, truncation=True, padding=True, return_tensors='pt')['input_ids']
output = model.generate(input_ids, max_length=512)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
|
CreativeEvolution/ppo-LunarLander-v2
|
CreativeEvolution
| 2022-12-09T17:59:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T17:58:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.18 +/- 13.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with `huggingface_sb3` (the checkpoint filename inside the repo is an assumption; check the repo's files if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
# (the filename below is an assumption)
checkpoint = load_from_hub("CreativeEvolution/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Sandipan1994/t5-small-entailement-Writer-T5-base
|
Sandipan1994
| 2022-12-09T17:49:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-12-09T17:22:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-entailement-Writer-T5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-entailement-Writer-T5-base
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 250
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 42 | 1.8185 |
| No log | 2.0 | 84 | 1.1957 |
| No log | 3.0 | 126 | 0.9771 |
| No log | 4.0 | 168 | 0.8964 |
| No log | 5.0 | 210 | 0.8380 |
| No log | 6.0 | 252 | 0.8109 |
| No log | 7.0 | 294 | 0.7886 |
| No log | 8.0 | 336 | 0.7760 |
| No log | 9.0 | 378 | 0.7577 |
| No log | 10.0 | 420 | 0.7483 |
| No log | 11.0 | 462 | 0.7364 |
| 1.2044 | 12.0 | 504 | 0.7267 |
| 1.2044 | 13.0 | 546 | 0.7205 |
| 1.2044 | 14.0 | 588 | 0.7102 |
| 1.2044 | 15.0 | 630 | 0.7048 |
| 1.2044 | 16.0 | 672 | 0.7015 |
| 1.2044 | 17.0 | 714 | 0.6958 |
| 1.2044 | 18.0 | 756 | 0.6892 |
| 1.2044 | 19.0 | 798 | 0.6877 |
| 1.2044 | 20.0 | 840 | 0.6825 |
| 1.2044 | 21.0 | 882 | 0.6790 |
| 1.2044 | 22.0 | 924 | 0.6732 |
| 1.2044 | 23.0 | 966 | 0.6676 |
| 0.736 | 24.0 | 1008 | 0.6640 |
| 0.736 | 25.0 | 1050 | 0.6631 |
| 0.736 | 26.0 | 1092 | 0.6617 |
| 0.736 | 27.0 | 1134 | 0.6556 |
| 0.736 | 28.0 | 1176 | 0.6551 |
| 0.736 | 29.0 | 1218 | 0.6545 |
| 0.736 | 30.0 | 1260 | 0.6483 |
| 0.736 | 31.0 | 1302 | 0.6493 |
| 0.736 | 32.0 | 1344 | 0.6488 |
| 0.736 | 33.0 | 1386 | 0.6434 |
| 0.736 | 34.0 | 1428 | 0.6427 |
| 0.736 | 35.0 | 1470 | 0.6403 |
| 0.6568 | 36.0 | 1512 | 0.6364 |
| 0.6568 | 37.0 | 1554 | 0.6342 |
| 0.6568 | 38.0 | 1596 | 0.6325 |
| 0.6568 | 39.0 | 1638 | 0.6300 |
| 0.6568 | 40.0 | 1680 | 0.6302 |
| 0.6568 | 41.0 | 1722 | 0.6292 |
| 0.6568 | 42.0 | 1764 | 0.6264 |
| 0.6568 | 43.0 | 1806 | 0.6272 |
| 0.6568 | 44.0 | 1848 | 0.6252 |
| 0.6568 | 45.0 | 1890 | 0.6229 |
| 0.6568 | 46.0 | 1932 | 0.6221 |
| 0.6568 | 47.0 | 1974 | 0.6202 |
| 0.602 | 48.0 | 2016 | 0.6193 |
| 0.602 | 49.0 | 2058 | 0.6196 |
| 0.602 | 50.0 | 2100 | 0.6174 |
| 0.602 | 51.0 | 2142 | 0.6175 |
| 0.602 | 52.0 | 2184 | 0.6162 |
| 0.602 | 53.0 | 2226 | 0.6155 |
| 0.602 | 54.0 | 2268 | 0.6129 |
| 0.602 | 55.0 | 2310 | 0.6139 |
| 0.602 | 56.0 | 2352 | 0.6124 |
| 0.602 | 57.0 | 2394 | 0.6128 |
| 0.602 | 58.0 | 2436 | 0.6109 |
| 0.602 | 59.0 | 2478 | 0.6111 |
| 0.5653 | 60.0 | 2520 | 0.6097 |
| 0.5653 | 61.0 | 2562 | 0.6086 |
| 0.5653 | 62.0 | 2604 | 0.6083 |
| 0.5653 | 63.0 | 2646 | 0.6086 |
| 0.5653 | 64.0 | 2688 | 0.6090 |
| 0.5653 | 65.0 | 2730 | 0.6074 |
| 0.5653 | 66.0 | 2772 | 0.6064 |
| 0.5653 | 67.0 | 2814 | 0.6056 |
| 0.5653 | 68.0 | 2856 | 0.6039 |
| 0.5653 | 69.0 | 2898 | 0.6051 |
| 0.5653 | 70.0 | 2940 | 0.6043 |
| 0.5653 | 71.0 | 2982 | 0.6034 |
| 0.5368 | 72.0 | 3024 | 0.6020 |
| 0.5368 | 73.0 | 3066 | 0.6047 |
| 0.5368 | 74.0 | 3108 | 0.6031 |
| 0.5368 | 75.0 | 3150 | 0.6011 |
| 0.5368 | 76.0 | 3192 | 0.6027 |
| 0.5368 | 77.0 | 3234 | 0.6009 |
| 0.5368 | 78.0 | 3276 | 0.6003 |
| 0.5368 | 79.0 | 3318 | 0.6001 |
| 0.5368 | 80.0 | 3360 | 0.6008 |
| 0.5368 | 81.0 | 3402 | 0.6005 |
| 0.5368 | 82.0 | 3444 | 0.6007 |
| 0.5368 | 83.0 | 3486 | 0.5988 |
| 0.5055 | 84.0 | 3528 | 0.5991 |
| 0.5055 | 85.0 | 3570 | 0.6004 |
| 0.5055 | 86.0 | 3612 | 0.5989 |
| 0.5055 | 87.0 | 3654 | 0.5975 |
| 0.5055 | 88.0 | 3696 | 0.5977 |
| 0.5055 | 89.0 | 3738 | 0.5982 |
| 0.5055 | 90.0 | 3780 | 0.5964 |
| 0.5055 | 91.0 | 3822 | 0.5979 |
| 0.5055 | 92.0 | 3864 | 0.5996 |
| 0.5055 | 93.0 | 3906 | 0.5936 |
| 0.5055 | 94.0 | 3948 | 0.5956 |
| 0.5055 | 95.0 | 3990 | 0.5940 |
| 0.4866 | 96.0 | 4032 | 0.5961 |
| 0.4866 | 97.0 | 4074 | 0.5955 |
| 0.4866 | 98.0 | 4116 | 0.5949 |
| 0.4866 | 99.0 | 4158 | 0.5971 |
| 0.4866 | 100.0 | 4200 | 0.5958 |
| 0.4866 | 101.0 | 4242 | 0.5978 |
| 0.4866 | 102.0 | 4284 | 0.5971 |
| 0.4866 | 103.0 | 4326 | 0.5954 |
| 0.4866 | 104.0 | 4368 | 0.5933 |
| 0.4866 | 105.0 | 4410 | 0.5944 |
| 0.4866 | 106.0 | 4452 | 0.5952 |
| 0.4866 | 107.0 | 4494 | 0.5948 |
| 0.4657 | 108.0 | 4536 | 0.5951 |
| 0.4657 | 109.0 | 4578 | 0.5948 |
| 0.4657 | 110.0 | 4620 | 0.5948 |
| 0.4657 | 111.0 | 4662 | 0.5927 |
| 0.4657 | 112.0 | 4704 | 0.5931 |
| 0.4657 | 113.0 | 4746 | 0.5919 |
| 0.4657 | 114.0 | 4788 | 0.5939 |
| 0.4657 | 115.0 | 4830 | 0.5922 |
| 0.4657 | 116.0 | 4872 | 0.5921 |
| 0.4657 | 117.0 | 4914 | 0.5917 |
| 0.4657 | 118.0 | 4956 | 0.5913 |
| 0.4657 | 119.0 | 4998 | 0.5908 |
| 0.4468 | 120.0 | 5040 | 0.5929 |
| 0.4468 | 121.0 | 5082 | 0.5915 |
| 0.4468 | 122.0 | 5124 | 0.5926 |
| 0.4468 | 123.0 | 5166 | 0.5929 |
| 0.4468 | 124.0 | 5208 | 0.5911 |
| 0.4468 | 125.0 | 5250 | 0.5907 |
| 0.4468 | 126.0 | 5292 | 0.5921 |
| 0.4468 | 127.0 | 5334 | 0.5917 |
| 0.4468 | 128.0 | 5376 | 0.5923 |
| 0.4468 | 129.0 | 5418 | 0.5912 |
| 0.4468 | 130.0 | 5460 | 0.5930 |
| 0.4346 | 131.0 | 5502 | 0.5924 |
| 0.4346 | 132.0 | 5544 | 0.5933 |
| 0.4346 | 133.0 | 5586 | 0.5920 |
| 0.4346 | 134.0 | 5628 | 0.5937 |
| 0.4346 | 135.0 | 5670 | 0.5930 |
| 0.4346 | 136.0 | 5712 | 0.5930 |
| 0.4346 | 137.0 | 5754 | 0.5929 |
| 0.4346 | 138.0 | 5796 | 0.5916 |
| 0.4346 | 139.0 | 5838 | 0.5935 |
| 0.4346 | 140.0 | 5880 | 0.5947 |
| 0.4346 | 141.0 | 5922 | 0.5926 |
| 0.4346 | 142.0 | 5964 | 0.5930 |
| 0.4247 | 143.0 | 6006 | 0.5911 |
| 0.4247 | 144.0 | 6048 | 0.5916 |
| 0.4247 | 145.0 | 6090 | 0.5929 |
| 0.4247 | 146.0 | 6132 | 0.5926 |
| 0.4247 | 147.0 | 6174 | 0.5917 |
| 0.4247 | 148.0 | 6216 | 0.5913 |
| 0.4247 | 149.0 | 6258 | 0.5907 |
| 0.4247 | 150.0 | 6300 | 0.5930 |
| 0.4247 | 151.0 | 6342 | 0.5928 |
| 0.4247 | 152.0 | 6384 | 0.5922 |
| 0.4247 | 153.0 | 6426 | 0.5921 |
| 0.4247 | 154.0 | 6468 | 0.5925 |
| 0.4139 | 155.0 | 6510 | 0.5923 |
| 0.4139 | 156.0 | 6552 | 0.5919 |
| 0.4139 | 157.0 | 6594 | 0.5920 |
| 0.4139 | 158.0 | 6636 | 0.5935 |
| 0.4139 | 159.0 | 6678 | 0.5926 |
| 0.4139 | 160.0 | 6720 | 0.5926 |
| 0.4139 | 161.0 | 6762 | 0.5925 |
| 0.4139 | 162.0 | 6804 | 0.5927 |
| 0.4139 | 163.0 | 6846 | 0.5918 |
| 0.4139 | 164.0 | 6888 | 0.5925 |
| 0.4139 | 165.0 | 6930 | 0.5935 |
| 0.4139 | 166.0 | 6972 | 0.5926 |
| 0.4049 | 167.0 | 7014 | 0.5919 |
| 0.4049 | 168.0 | 7056 | 0.5917 |
| 0.4049 | 169.0 | 7098 | 0.5916 |
| 0.4049 | 170.0 | 7140 | 0.5925 |
| 0.4049 | 171.0 | 7182 | 0.5931 |
| 0.4049 | 172.0 | 7224 | 0.5938 |
| 0.4049 | 173.0 | 7266 | 0.5932 |
| 0.4049 | 174.0 | 7308 | 0.5927 |
| 0.4049 | 175.0 | 7350 | 0.5934 |
| 0.4049 | 176.0 | 7392 | 0.5931 |
| 0.4049 | 177.0 | 7434 | 0.5937 |
| 0.4049 | 178.0 | 7476 | 0.5939 |
| 0.397 | 179.0 | 7518 | 0.5939 |
| 0.397 | 180.0 | 7560 | 0.5932 |
| 0.397 | 181.0 | 7602 | 0.5935 |
| 0.397 | 182.0 | 7644 | 0.5939 |
| 0.397 | 183.0 | 7686 | 0.5935 |
| 0.397 | 184.0 | 7728 | 0.5945 |
| 0.397 | 185.0 | 7770 | 0.5932 |
| 0.397 | 186.0 | 7812 | 0.5931 |
| 0.397 | 187.0 | 7854 | 0.5925 |
| 0.397 | 188.0 | 7896 | 0.5934 |
| 0.397 | 189.0 | 7938 | 0.5941 |
| 0.397 | 190.0 | 7980 | 0.5939 |
| 0.3891 | 191.0 | 8022 | 0.5933 |
| 0.3891 | 192.0 | 8064 | 0.5934 |
| 0.3891 | 193.0 | 8106 | 0.5938 |
| 0.3891 | 194.0 | 8148 | 0.5944 |
| 0.3891 | 195.0 | 8190 | 0.5937 |
| 0.3891 | 196.0 | 8232 | 0.5939 |
| 0.3891 | 197.0 | 8274 | 0.5937 |
| 0.3891 | 198.0 | 8316 | 0.5947 |
| 0.3891 | 199.0 | 8358 | 0.5945 |
| 0.3891 | 200.0 | 8400 | 0.5946 |
| 0.3891 | 201.0 | 8442 | 0.5945 |
| 0.3891 | 202.0 | 8484 | 0.5938 |
| 0.3842 | 203.0 | 8526 | 0.5947 |
| 0.3842 | 204.0 | 8568 | 0.5945 |
| 0.3842 | 205.0 | 8610 | 0.5935 |
| 0.3842 | 206.0 | 8652 | 0.5935 |
| 0.3842 | 207.0 | 8694 | 0.5939 |
| 0.3842 | 208.0 | 8736 | 0.5938 |
| 0.3842 | 209.0 | 8778 | 0.5939 |
| 0.3842 | 210.0 | 8820 | 0.5940 |
| 0.3842 | 211.0 | 8862 | 0.5943 |
| 0.3842 | 212.0 | 8904 | 0.5943 |
| 0.3842 | 213.0 | 8946 | 0.5946 |
| 0.3842 | 214.0 | 8988 | 0.5946 |
| 0.3802 | 215.0 | 9030 | 0.5947 |
| 0.3802 | 216.0 | 9072 | 0.5949 |
| 0.3802 | 217.0 | 9114 | 0.5944 |
| 0.3802 | 218.0 | 9156 | 0.5946 |
| 0.3802 | 219.0 | 9198 | 0.5950 |
| 0.3802 | 220.0 | 9240 | 0.5950 |
| 0.3802 | 221.0 | 9282 | 0.5953 |
| 0.3802 | 222.0 | 9324 | 0.5951 |
| 0.3802 | 223.0 | 9366 | 0.5956 |
| 0.3802 | 224.0 | 9408 | 0.5952 |
| 0.3802 | 225.0 | 9450 | 0.5955 |
| 0.3802 | 226.0 | 9492 | 0.5958 |
| 0.3791 | 227.0 | 9534 | 0.5954 |
| 0.3791 | 228.0 | 9576 | 0.5953 |
| 0.3791 | 229.0 | 9618 | 0.5959 |
| 0.3791 | 230.0 | 9660 | 0.5959 |
| 0.3791 | 231.0 | 9702 | 0.5957 |
| 0.3791 | 232.0 | 9744 | 0.5957 |
| 0.3791 | 233.0 | 9786 | 0.5956 |
| 0.3791 | 234.0 | 9828 | 0.5956 |
| 0.3791 | 235.0 | 9870 | 0.5956 |
| 0.3791 | 236.0 | 9912 | 0.5956 |
| 0.3791 | 237.0 | 9954 | 0.5957 |
| 0.3791 | 238.0 | 9996 | 0.5960 |
| 0.3764 | 239.0 | 10038 | 0.5956 |
| 0.3764 | 240.0 | 10080 | 0.5956 |
| 0.3764 | 241.0 | 10122 | 0.5955 |
| 0.3764 | 242.0 | 10164 | 0.5956 |
| 0.3764 | 243.0 | 10206 | 0.5955 |
| 0.3764 | 244.0 | 10248 | 0.5957 |
| 0.3764 | 245.0 | 10290 | 0.5956 |
| 0.3764 | 246.0 | 10332 | 0.5955 |
| 0.3764 | 247.0 | 10374 | 0.5954 |
| 0.3764 | 248.0 | 10416 | 0.5955 |
| 0.3764 | 249.0 | 10458 | 0.5954 |
| 0.3763 | 250.0 | 10500 | 0.5954 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
steffel/ppo-LunarLander-v2
|
steffel
| 2022-12-09T17:48:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T17:47:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.37 +/- 19.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with `huggingface_sb3` (the checkpoint filename inside the repo is an assumption; check the repo's files if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
# (the filename below is an assumption)
checkpoint = load_from_hub("steffel/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Sanjay-Papaiahgari/ppo-LunarLander-v2
|
Sanjay-Papaiahgari
| 2022-12-09T17:41:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T17:40:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 231.53 +/- 72.30
name: mean_reward
verified: false
---
# **MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with `huggingface_sb3` (the checkpoint filename and the algorithm class, assumed here to be PPO, are assumptions; check the repo's files if they differ):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
# (the filename and the PPO algorithm class are assumptions)
checkpoint = load_from_hub("Sanjay-Papaiahgari/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
deepdml/whisper-small-eu
|
deepdml
| 2022-12-09T17:26:01Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"eu",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T21:19:49Z |
---
license: apache-2.0
language:
- eu
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 eu
type: mozilla-foundation/common_voice_11_0
config: eu
split: test
args: eu
metrics:
- name: Wer
type: wer
value: 19.766305675433596
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small Basque-Euskera
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4485
- Wer: 19.7663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.048 | 4.04 | 1000 | 0.3402 | 21.7816 |
| 0.0047 | 9.03 | 2000 | 0.3862 | 20.1694 |
| 0.0012 | 14.02 | 3000 | 0.4221 | 19.7419 |
| 0.0008 | 19.02 | 4000 | 0.4411 | 19.7174 |
| 0.0006 | 24.01 | 5000 | 0.4485 | 19.7663 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
dcavadia/nintendo-controllers-model-opt3
|
dcavadia
| 2022-12-09T17:02:24Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-09T16:51:39Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: nintendo-controllers-model-opt3
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5333333611488342
---
# nintendo-controllers-model-opt3
An image-classification model built with Python.

Predictions are produced by a deep-learning model called a Vision Transformer (ViT), which can distinguish between an Xbox controller and a PlayStation controller. In a ViT, the input image is "cut" into equally sized patches, and each patch is passed through a linear embedding so that it becomes a one-dimensional vector. A positional embedding is then added to each of these vectors, which lets the network know where each patch was originally located in the image. These vectors, together with a special classification token, are fed through the transformer encoder blocks, each of which consists of a Layer Normalization (LN), Multi-Head Self-Attention (MSA), a residual connection, a second LN, a Multi-Layer Perceptron (MLP), and another residual connection, stacked one after the other. Finally, a classification MLP head is applied only to the special classification token, which by the end of the whole process carries the global information about the image.

The input data is obtained through an image-search API that downloads and stores images from the web; roughly 150 images are collected per class. The images are then split roughly 75% / 15% for training and validation, respectively. To validate the collected data, a small random sample of the images is inspected to confirm that what the API returned mostly matches the search queries (microsoft xbox controller and sony playstation controller). Once the data is labeled and mapped, examples are batched and fed in random order to a ViT model pre-trained on the ImageNet-21k dataset. Training, validation, and optimization are implemented in PyTorch, with Adam as the optimizer. After validating the predictions against the image labels, the resulting model distinguishes between a PlayStation controller and an Xbox controller with an accuracy of >53%.
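A minimal inference sketch with the Transformers image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub
classifier = pipeline("image-classification", model="dcavadia/nintendo-controllers-model-opt3")

# Classify a local image (placeholder path); returns a list of labels with scores
print(classifier("controller.jpg"))
```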
## Example images
#### microsoft xbox controller

#### sony playstation controller

|
AbyelT/Whisper-models
|
AbyelT
| 2022-12-09T16:41:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-05T20:59:34Z |
---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small - Swedish
results: []
metrics:
- wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Swedish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
parinzee/whisper-small-th-newmm-old
|
parinzee
| 2022-12-09T16:10:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"th",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T15:14:14Z |
---
language:
- th
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Thai Newmm Tokenized - Parinthapat Pengpun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Thai Newmm Tokenized - Parinthapat Pengpun
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2095
- eval_wer: 26.6533
- eval_cer: 8.0405
- eval_runtime: 5652.2819
- eval_samples_per_second: 1.934
- eval_steps_per_second: 0.061
- epoch: 5.06
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ViktorDo/DistilBERT-WIKI_Lifecycle_Finetuned
|
ViktorDo
| 2022-12-09T16:04:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T17:47:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-WIKI_Lifecycle_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-WIKI_Lifecycle_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0839 | 1.0 | 2082 | 0.1088 |
| 0.0681 | 2.0 | 4164 | 0.0931 |
| 0.0432 | 3.0 | 6246 | 0.0978 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
adisomani/distilbert-base-uncased-finetuned-sqaud
|
adisomani
| 2022-12-09T15:45:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-12-09T11:01:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-sqaud
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sqaud
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 14 | 0.9851 |
| No log | 2.0 | 28 | 0.6955 |
| No log | 3.0 | 42 | 0.5781 |
| No log | 4.0 | 56 | 0.4548 |
| No log | 5.0 | 70 | 0.4208 |
| No log | 6.0 | 84 | 0.3592 |
| No log | 7.0 | 98 | 0.3422 |
| No log | 8.0 | 112 | 0.3424 |
| No log | 9.0 | 126 | 0.4046 |
| No log | 10.0 | 140 | 0.3142 |
| No log | 11.0 | 154 | 0.3262 |
| No log | 12.0 | 168 | 0.2879 |
| No log | 13.0 | 182 | 0.3376 |
| No log | 14.0 | 196 | 0.2870 |
| No log | 15.0 | 210 | 0.2984 |
| No log | 16.0 | 224 | 0.2807 |
| No log | 17.0 | 238 | 0.2889 |
| No log | 18.0 | 252 | 0.2877 |
| No log | 19.0 | 266 | 0.2820 |
| No log | 20.0 | 280 | 0.2831 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
burakyldrm/wav2vec2-burak-new-300-v2-9-medium
|
burakyldrm
| 2022-12-09T15:06:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T23:29:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-burak-new-300-v2-9-medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-9-medium
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3098
- Wer: 0.1789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 271
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.2366 | 9.43 | 500 | 0.3980 | 0.4652 |
| 0.5066 | 18.87 | 1000 | 0.2423 | 0.2719 |
| 0.2559 | 28.3 | 1500 | 0.2482 | 0.2443 |
| 0.1869 | 37.74 | 2000 | 0.2537 | 0.2395 |
| 0.1498 | 47.17 | 2500 | 0.2877 | 0.2361 |
| 0.1271 | 56.6 | 3000 | 0.2681 | 0.2237 |
| 0.1145 | 66.04 | 3500 | 0.2788 | 0.2189 |
| 0.1043 | 75.47 | 4000 | 0.2800 | 0.2264 |
| 0.094 | 84.91 | 4500 | 0.2992 | 0.2244 |
| 0.0844 | 94.34 | 5000 | 0.2864 | 0.2209 |
| 0.0776 | 103.77 | 5500 | 0.2758 | 0.2175 |
| 0.0714 | 113.21 | 6000 | 0.2792 | 0.2051 |
| 0.0666 | 122.64 | 6500 | 0.2945 | 0.2175 |
| 0.0601 | 132.08 | 7000 | 0.2865 | 0.2092 |
| 0.0579 | 141.51 | 7500 | 0.3168 | 0.2175 |
| 0.0532 | 150.94 | 8000 | 0.3110 | 0.2292 |
| 0.0474 | 160.38 | 8500 | 0.3070 | 0.2175 |
| 0.0446 | 169.81 | 9000 | 0.3206 | 0.2223 |
| 0.0409 | 179.25 | 9500 | 0.3017 | 0.2106 |
| 0.037 | 188.68 | 10000 | 0.3157 | 0.2092 |
| 0.0344 | 198.11 | 10500 | 0.3222 | 0.2058 |
| 0.0345 | 207.55 | 11000 | 0.3047 | 0.2017 |
| 0.0309 | 216.98 | 11500 | 0.3023 | 0.1913 |
| 0.03 | 226.42 | 12000 | 0.2963 | 0.1920 |
| 0.0268 | 235.85 | 12500 | 0.3036 | 0.1872 |
| 0.0249 | 245.28 | 13000 | 0.2926 | 0.1844 |
| 0.0227 | 254.72 | 13500 | 0.3045 | 0.1865 |
| 0.021 | 264.15 | 14000 | 0.3098 | 0.1789 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kurianbenoy/whisper-ml-first-model
|
kurianbenoy
| 2022-12-09T14:49:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"ml",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-09T13:59:15Z |
---
language:
- ml
license: apache-2.0
tags:
- whisper-event
datasets:
- mozilla-foundation/common_voice_11_0
---
# whisper-ml-first-model
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
benderv/ppo-LunarLander-v2
|
benderv
| 2022-12-09T14:37:50Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-12T12:20:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.79 +/- 13.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with `huggingface_sb3` (the checkpoint filename inside the repo is an assumption; check the repo's files if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
# (the filename below is an assumption)
checkpoint = load_from_hub("benderv/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
klashenrik/ppo-Huggy
|
klashenrik
| 2022-12-09T14:05:55Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-09T14:05:47Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: klashenrik/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mlxen/electra-smallcase-squad
|
mlxen
| 2022-12-09T14:00:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-27T20:07:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-smallcase-squad
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: f1
value: 42.535
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGYwZjExOTg1NGMwMTQxZGVhMjU4NzJiZTBlM2EzZDNmMzBlMGEzNjMwZjkzMTIxOTQzYWFhYjBiZDZhNTAxYSIsInZlcnNpb24iOjF9.PMOlW_iXGS5QjV0XCs4e5AK-ip9LUdXoDKRxFc7-VM_QMhGc0eq7GGLiY6OXCt-WUwRy6RkFhIg2nzid_qMgDw
- type: exact
value: 38.3889
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWIwNGVhYmIyNmUyNzkzMzA1Y2FkMDJmNzE3ZGNlOWNlNjk2Y2IwOTA5MjJkMmEwMmVhNjNkZWU1YTJhN2ViMiIsInZlcnNpb24iOjF9.S6L-PB3ZfllrXwHMfiSMDQm-tLANrBeWrNNekvfX1aZA79hbKdgP-OGPKatMatGJPirs-zPWDYXEIH4pSZeODw
- type: loss
value: 6.50607442855835
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWFlOWNjOTUzMTBkMDc3MDRmYmUwNzk2NjY3MmJjOTNjMWM2NGZmMDY5MTY0MWIwOTIyNWM5ZDkzNmEwNTJkNiIsInZlcnNpb24iOjF9.wxm8AMY3iCdRD3_cZIJ8zLzUh5Cj7C3k4vCoy0raXOExLIYFs4qSRWxFaI21HCJ8NhZ0IirV5ziaTpSRsPlqAw
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-smallcase-squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset.
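A minimal extractive question-answering sketch with the Transformers pipeline (the question and context are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned ELECTRA checkpoint from the Hub
qa = pipeline("question-answering", model="mlxen/electra-smallcase-squad")

# Illustrative question/context pair
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The electra-smallcase-squad model was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```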
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Kuaaangwen/SMM-classifier-1
|
Kuaaangwen
| 2022-12-09T13:54:52Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-09T13:37:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SMM-classifier-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SMM-classifier-1
This model is a fine-tuned version of [Kuaaangwen/bert-base-cased-finetuned-chemistry](https://huggingface.co/Kuaaangwen/bert-base-cased-finetuned-chemistry) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5506
- Accuracy: 0.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.2044 | 0.8333 |
| No log | 2.0 | 14 | 0.3574 | 0.8333 |
| No log | 3.0 | 21 | 0.1551 | 0.8333 |
| No log | 4.0 | 28 | 0.9122 | 0.8333 |
| No log | 5.0 | 35 | 0.9043 | 0.8333 |
| No log | 6.0 | 42 | 0.7262 | 0.8333 |
| No log | 7.0 | 49 | 0.5977 | 0.8333 |
| No log | 8.0 | 56 | 0.5567 | 0.8333 |
| No log | 9.0 | 63 | 0.5484 | 0.8333 |
| No log | 10.0 | 70 | 0.5506 | 0.8333 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
hr16/ira-olympus-4000
|
hr16
| 2022-12-09T13:46:47Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-09T13:43:11Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### DreamBooth concept model /content/Ira_Olympus/CRHTMJX/4000, trained by hr16 with the [Shinja Zero SoTA DreamBooth_Stable_Diffusion](https://colab.research.google.com/drive/1G7qx6M_S1PDDlsWIMdbZXwdZik6sUlEh) notebook <br>
Test the concept with the [Shinja Zero no Notebook](https://colab.research.google.com/drive/1Hp1ZIjPbsZKlCtomJVmt2oX7733W44b0) <br>
Or test it with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample images of the concept: WIP
|
alicjak/q-FrozenLake-v1-4x4-noSlippery
|
alicjak
| 2022-12-09T13:40:02Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T13:39:54Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.75 +/- 0.43
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are the helper functions defined in the
# Hugging Face Deep RL course notebook (they are not part of a published package).
model = load_from_hub(repo_id="alicjak/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
ZDaPlaY/strawmaryarts_style
|
ZDaPlaY
| 2022-12-09T13:32:45Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-12-09T12:55:19Z |
Contains:
strawmaryarts style - a model with an anime art style
Trigger Words: strawmaryarts style

|
nbonaker/ddpm-celeb-face
|
nbonaker
| 2022-12-09T13:26:14Z | 12 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:ddpm-celeb-face",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-12-08T17:21:14Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: ddpm-celeb-face
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-celeb-face
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `ddpm-celeb-face` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
# Minimal sketch (standard diffusers API): load the pipeline from the Hub and sample one image
pipeline = DDPMPipeline.from_pretrained("nbonaker/ddpm-celeb-face")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 50
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/nbonaker/ddpm-celeb-face/tensorboard?#scalars)
|
geninhu/whisper-medium-vi
|
geninhu
| 2022-12-09T13:09:46Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"vi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T05:27:05Z |
---
language:
- vi
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-medium
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 vi
type: mozilla-foundation/common_voice_11_0
config: vi
split: test
args: vi
metrics:
- name: Wer
type: wer
value: 19.92761570519851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7599
- Wer: 19.9276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0001 | 62.0 | 1000 | 0.6531 | 19.3463 |
| 0.0001 | 124.0 | 2000 | 0.6964 | 19.6973 |
| 0.0 | 187.0 | 3000 | 0.7282 | 19.8947 |
| 0.0 | 249.0 | 4000 | 0.7481 | 19.8837 |
| 0.0 | 312.0 | 5000 | 0.7599 | 19.9276 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Gladiator/roberta-large_ner_wikiann
|
Gladiator
| 2022-12-09T12:56:20Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-09T12:15:54Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large_ner_wikiann
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: en
metrics:
- name: Precision
type: precision
value: 0.8462551098177787
- name: Recall
type: recall
value: 0.8634242895518167
- name: F1
type: f1
value: 0.8547534903250638
- name: Accuracy
type: accuracy
value: 0.9382388000397338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large_ner_wikiann
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2783
- Precision: 0.8463
- Recall: 0.8634
- F1: 0.8548
- Accuracy: 0.9382
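A minimal NER inference sketch with the Transformers token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and group sub-word predictions into entities
ner = pipeline(
    "token-classification",
    model="Gladiator/roberta-large_ner_wikiann",
    aggregation_strategy="simple",
)

# Illustrative sentence
print(ner("Ada Lovelace worked with Charles Babbage in London."))
```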
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3395 | 1.0 | 1250 | 0.2652 | 0.8039 | 0.8308 | 0.8171 | 0.9242 |
| 0.2343 | 2.0 | 2500 | 0.2431 | 0.8354 | 0.8503 | 0.8428 | 0.9329 |
| 0.1721 | 3.0 | 3750 | 0.2315 | 0.8330 | 0.8503 | 0.8416 | 0.9352 |
| 0.1156 | 4.0 | 5000 | 0.2709 | 0.8477 | 0.8634 | 0.8554 | 0.9385 |
| 0.1026 | 5.0 | 6250 | 0.2783 | 0.8463 | 0.8634 | 0.8548 | 0.9382 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Gladiator/bert-large-uncased_ner_wikiann
|
Gladiator
| 2022-12-09T12:54:43Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-09T12:12:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-large-uncased_ner_wikiann
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: en
metrics:
- name: Precision
type: precision
value: 0.8383588049015558
- name: Recall
type: recall
value: 0.8608794005372543
- name: F1
type: f1
value: 0.8494698660714285
- name: Accuracy
type: accuracy
value: 0.9379407966623622
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased_ner_wikiann
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3373
- Precision: 0.8384
- Recall: 0.8609
- F1: 0.8495
- Accuracy: 0.9379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3146 | 1.0 | 1250 | 0.2545 | 0.7956 | 0.8372 | 0.8159 | 0.9285 |
| 0.1973 | 2.0 | 2500 | 0.2438 | 0.8267 | 0.8546 | 0.8404 | 0.9349 |
| 0.1181 | 3.0 | 3750 | 0.2637 | 0.8320 | 0.8588 | 0.8452 | 0.9374 |
| 0.0647 | 4.0 | 5000 | 0.3175 | 0.8389 | 0.8627 | 0.8507 | 0.9387 |
| 0.0443 | 5.0 | 6250 | 0.3373 | 0.8384 | 0.8609 | 0.8495 | 0.9379 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
avojarot/ppo-LunarLander-v2
|
avojarot
| 2022-12-09T12:48:10Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T12:47:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.12 +/- 20.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename below is assumed; adjust it to the actual .zip stored in this repo
checkpoint = load_from_hub("avojarot/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
bjelkenhed/whisper-medium-sv
|
bjelkenhed
| 2022-12-09T12:10:03Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"whisper-event",
"sv",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-06T16:49:19Z |
---
language:
- sv
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
- whisper-event
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium Sv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 sv-SE
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: test
args: sv-SE
metrics:
- name: Wer
type: wer
value: 10.712174146734748
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium), trained on the NST Swedish ASR dataset and evaluated on the Common Voice 11 test set. It achieves the following results on the evaluation set:
- Loss: 0.2636
- Wer: 10.7122
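A hedged usage sketch (not from the original card; the audio path is a placeholder):
```python
from transformers import pipeline

# chunk_length_s lets the pipeline transcribe audio longer than 30 seconds
transcriber = pipeline(
    "automatic-speech-recognition",
    model="bjelkenhed/whisper-medium-sv",
    chunk_length_s=30,
)
print(transcriber("sample.wav")["text"])  # "sample.wav" is a placeholder path
```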
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0746 | 0.2 | 1000 | 0.2904 | 13.4695 |
| 0.0564 | 0.4 | 2000 | 0.3121 | 13.2384 |
| 0.0532 | 0.6 | 3000 | 0.2862 | 11.9726 |
| 0.0387 | 0.8 | 4000 | 0.2629 | 11.6931 |
| 0.0279 | 1.14 | 5000 | 0.2636 | 10.7122 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
massimowww/LunarLander-v2
|
massimowww
| 2022-12-09T11:59:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T11:58:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 199.90 +/- 63.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename below is assumed; adjust it to the actual .zip stored in this repo
checkpoint = load_from_hub("massimowww/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
anuragshas/whisper-small-pa
|
anuragshas
| 2022-12-09T11:54:19Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"whisper-event",
"pa",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-06T10:06:37Z |
---
license: apache-2.0
language:
- pa
tags:
- generated_from_trainer
- whisper-event
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Punjabi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 pa-IN
type: mozilla-foundation/common_voice_11_0
config: pa-IN
split: test
args: pa-IN
metrics:
- name: Wer
type: wer
value: 39.04688700999232
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Punjabi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5991
- Wer: 39.0469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4346 | 5.01 | 50 | 0.3902 | 49.6797 |
| 0.0728 | 11.0 | 100 | 0.3811 | 40.7379 |
| 0.009 | 16.02 | 150 | 0.4924 | 39.5081 |
| 0.0028 | 22.0 | 200 | 0.5309 | 38.7394 |
| 0.0008 | 27.02 | 250 | 0.5687 | 38.6369 |
| 0.0006 | 33.01 | 300 | 0.5859 | 39.0213 |
| 0.0005 | 38.02 | 350 | 0.5954 | 39.0981 |
| 0.0005 | 44.01 | 400 | 0.5991 | 39.0469 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
DarkBeam/Leonardostyl
|
DarkBeam
| 2022-12-09T11:17:31Z | 0 | 2 | null |
[
"region:us"
] | null | 2022-12-09T10:43:58Z |
Trained for 6000 steps with the "Dreambooth fast" Colab on 30 miscellaneous, carefully hand-cropped images: a mix of drawings, machines, and paintings.
Unfortunately, Hugging Face does not show the preview images correctly; see the Files section, where the images come complete with their prompts (just add ", leonardostyl style" to each one).
|
NathanaelM/ppo-Huggy
|
NathanaelM
| 2022-12-09T10:43:43Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-09T10:43:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: NathanaelM/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MontaR/ppo-LunarLander-v2-0.4
|
MontaR
| 2022-12-09T10:18:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T10:18:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.78 +/- 18.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename below is assumed; adjust it to the actual .zip stored in this repo
checkpoint = load_from_hub("MontaR/ppo-LunarLander-v2-0.4", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mamilldo/prdgrp-image-search
|
mamilldo
| 2022-12-09T09:57:15Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-09T09:56:57Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: prdgrp-image-search
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9108911156654358
---
# prdgrp-image-search
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Headphones

#### Kitchen hood

#### Laptop

#### Mobile phone

#### Tablet

|
AlexMo/FIFA_WC22_WINNER_LANGUAGE_MODEL
|
AlexMo
| 2022-12-09T09:46:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"nl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-06T09:47:49Z |
---
language:
- nl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: FIFA_WC22_WINNER_LANGUAGE_MODEL
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: 'null'
split: None
args: 'config: nl, split: test'
metrics:
- name: Wer
type: wer
value: 14.261890699371158
---
# whisper-lt-finetune
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2588
- Wer: 14.2619
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0783 | 1.3 | 1000 | 0.2478 | 15.5647 |
| 0.0287 | 2.6 | 2000 | 0.2441 | 14.3765 |
| 0.0087 | 3.9 | 3000 | 0.2516 | 14.3349 |
| 0.0021 | 5.19 | 4000 | 0.2588 | 14.2619 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
urechandro/q-Taxi-v3
|
urechandro
| 2022-12-09T09:42:28Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T09:23:16Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be the helper functions
# defined in the Q-Learning course notebook (they are not library imports)
model = load_from_hub(repo_id="urechandro/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Aman6917/autotrain-fine_tune_tscholak-2392374839
|
Aman6917
| 2022-12-09T09:39:56Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"summarization",
"unk",
"dataset:Aman6917/autotrain-data-fine_tune_tscholak",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-12-09T09:30:41Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Aman6917/autotrain-data-fine_tune_tscholak
co2_eq_emissions:
emissions: 11.023749088725205
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2392374839
- CO2 Emissions (in grams): 11.0237
## Validation Metrics
- Loss: 0.128
- Rouge1: 94.982
- Rouge2: 91.105
- RougeL: 94.629
- RougeLsum: 94.535
- Gen Len: 30.359
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Aman6917/autotrain-fine_tune_tscholak-2392374839
```
|
rapantzikos/nvidia-segformer-b0-finetuned-ade-512-512-finetuned-ISIC17
|
rapantzikos
| 2022-12-09T09:38:25Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-11-16T14:34:30Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: nvidia-segformer-b0-finetuned-ade-512-512-finetuned-ISIC17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nvidia-segformer-b0-finetuned-ade-512-512-finetuned-ISIC17
This model is a fine-tuned version of [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1948
- Mean Iou: 0.8064
- Mean Accuracy: 0.8726
- Overall Accuracy: 0.9381
- Per Category Iou: [0.6841604127643356, 0.9285439643646547]
- Per Category Accuracy: [0.7721651141608432, 0.9729809595315688]
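A minimal inference sketch, assuming the checkpoint loads with the standard Segformer classes (not part of the original card; the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

model_id = "rapantzikos/nvidia-segformer-b0-finetuned-ade-512-512-finetuned-ISIC17"
feature_extractor = SegformerFeatureExtractor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("lesion.jpg")  # placeholder path to a skin-lesion image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)
mask = logits.argmax(dim=1)[0]      # per-pixel class ids
```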
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:-----------------------------------------:|
| 0.481 | 0.16 | 10 | 0.4235 | 0.6191 | 0.6970 | 0.8761 | [0.3719409076673884, 0.8662862424406493] | [0.42270204900152314, 0.9713331864930521] |
| 0.4147 | 0.32 | 20 | 0.3894 | 0.7067 | 0.8502 | 0.8853 | [0.5464942438498753, 0.8668431573745645] | [0.7965579529885418, 0.9038859083170013] |
| 0.356 | 0.48 | 30 | 0.3148 | 0.7467 | 0.8513 | 0.9107 | [0.5963581593534901, 0.897077797385972] | [0.7603709174964982, 0.9422313184595918] |
| 0.3039 | 0.63 | 40 | 0.3024 | 0.7620 | 0.8671 | 0.9162 | [0.6211722830632663, 0.9028139512386881] | [0.7918407335685692, 0.9422883932404167] |
| 0.2545 | 0.79 | 50 | 0.2849 | 0.7766 | 0.8898 | 0.9201 | [0.6468577863419183, 0.9063792530493855] | [0.8432862096150755, 0.9362151542385662] |
| 0.2635 | 0.95 | 60 | 0.2504 | 0.7828 | 0.8644 | 0.9279 | [0.6487213857926865, 0.9168129696986418] | [0.7671470887645524, 0.9616549114054705] |
| 0.2175 | 1.11 | 70 | 0.2497 | 0.7849 | 0.8682 | 0.9283 | [0.6526705030304356, 0.9171225024239068] | [0.7762677096648272, 0.9602225755678137] |
| 0.2025 | 1.27 | 80 | 0.2400 | 0.7840 | 0.8632 | 0.9288 | [0.6501844204669202, 0.9178944798865282] | [0.7627291445016801, 0.9636411137781736] |
| 0.2035 | 1.43 | 90 | 0.2288 | 0.7931 | 0.8749 | 0.9313 | [0.6657367286733036, 0.9203778068784213] | [0.7885027822639286, 0.9612655167036179] |
| 0.2488 | 1.59 | 100 | 0.2110 | 0.7978 | 0.8719 | 0.9341 | [0.6717638717220313, 0.923859975121704] | [0.7766611302038285, 0.9672003292652145] |
| 0.1954 | 1.75 | 110 | 0.2067 | 0.7962 | 0.8597 | 0.9354 | [0.666599427783381, 0.9258672754383861] | [0.7436428904928473, 0.9757231213956472] |
| 0.1806 | 1.9 | 120 | 0.2047 | 0.7926 | 0.8525 | 0.9349 | [0.6596059897565958, 0.925563006736469] | [0.726197674685608, 0.9787940661520825] |
| 0.161 | 2.06 | 130 | 0.2047 | 0.7903 | 0.8505 | 0.9342 | [0.6558737849234609, 0.9247714617107691] | [0.7223974159771602, 0.9786951901233297] |
| 0.1736 | 2.22 | 140 | 0.2023 | 0.7948 | 0.8588 | 0.9349 | [0.6643652721485811, 0.9252950591002775] | [0.742124317828686, 0.9754152391272543] |
| 0.1947 | 2.38 | 150 | 0.2077 | 0.7985 | 0.8656 | 0.9355 | [0.6712414223331253, 0.9257326708494226] | [0.7585178608332249, 0.9726888331181641] |
| 0.1464 | 2.54 | 160 | 0.1960 | 0.8030 | 0.8680 | 0.9373 | [0.678274892507806, 0.9276935390097538] | [0.7620104248788739, 0.9740685958478499] |
| 0.1644 | 2.7 | 170 | 0.1964 | 0.8064 | 0.8751 | 0.9377 | [0.6847175060674714, 0.9279857318627613] | [0.7791196258677832, 0.9710404169835255] |
| 0.1803 | 2.86 | 180 | 0.1948 | 0.8064 | 0.8726 | 0.9381 | [0.6841604127643356, 0.9285439643646547] | [0.7721651141608432, 0.9729809595315688] |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.0+cu116
- Datasets 2.7.0
- Tokenizers 0.12.1
|
mamilldo/mobile-image-search
|
mamilldo
| 2022-12-09T09:36:29Z | 28 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-09T09:36:14Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: mobile-image-search
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6796116232872009
---
# mobile-image-search
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Apple iPhone 6s

#### Apple iPhone X

#### Samsung Galaxy S10

#### Samsung Galaxy S9

#### Television

|
mamilldo/prd-image-search
|
mamilldo
| 2022-12-09T09:20:42Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-09T09:12:51Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: prd-image-search
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7757009267807007
---
# prd-image-search
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Desktop computer

#### Laptop

#### Samsung Galaxy

#### Television

#### iPhone

|
duja1/reidartest10
|
duja1
| 2022-12-09T09:19:51Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-09T09:18:44Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: reidar123s
---
### reidartest10 Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the None base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of reidar123s (use that token in your prompt):

|
ljh1/hello-custom
|
ljh1
| 2022-12-09T09:06:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"emotion",
"endpoints-template",
"en",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-09T07:09:15Z |
---
language:
- en
tags:
- text-classification
- emotion
- endpoints-template
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
---
# Fork of [bhadresh-savani/distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion)
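A minimal query sketch with the `transformers` text-classification pipeline (an assumption based on the upstream model's task; the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ljh1/hello-custom")
print(classifier("I am so happy today!"))  # e.g. [{'label': 'joy', 'score': ...}]
```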
|
Shularp/krirk-finetuned-Helsinki-NLP_opus-mt-en-ar
|
Shularp
| 2022-12-09T09:03:42Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-12-07T07:37:38Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: krirk-finetuned-Helsinki-NLP_opus-mt-en-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# krirk-finetuned-Helsinki-NLP_opus-mt-en-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4468
- Bleu: 26.9281
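A minimal usage sketch with the `transformers` translation pipeline (not part of the original card; the example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="Shularp/krirk-finetuned-Helsinki-NLP_opus-mt-en-ar",
)
print(translator("How are you today?")[0]["translation_text"])
```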
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.6252 | 1.0 | 32 | 1.4686 | 26.1394 |
| 1.4867 | 2.0 | 64 | 1.4496 | 26.8139 |
| 1.4121 | 3.0 | 96 | 1.4468 | 26.9281 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
danielsaggau/bregman_scotus_k8_ep10
|
danielsaggau
| 2022-12-09T08:54:39Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"longformer",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-12-09T08:47:54Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# danielsaggau/bregman_scotus_k8_ep10
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('danielsaggau/bregman_scotus_k8_ep10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('danielsaggau/bregman_scotus_k8_ep10')
model = AutoModel.from_pretrained('danielsaggau/bregman_scotus_k8_ep10')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=danielsaggau/bregman_scotus_k8_ep10)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 187841 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.BregmanRankingLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 5000,
"warmup_steps": 187841,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
leoleung93/ppo-Huggy
|
leoleung93
| 2022-12-09T08:52:32Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-09T08:52:24Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: leoleung93/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
QIANWEI/swin-tiny-patch4-window7-224-finetuned-eurosat
|
QIANWEI
| 2022-12-09T08:42:12Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-07T13:39:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9851851851851852
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [nielsr/swin-tiny-patch4-window7-224-finetuned-eurosat](https://huggingface.co/nielsr/swin-tiny-patch4-window7-224-finetuned-eurosat) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0416
- Accuracy: 0.9852
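A minimal inference sketch with the `transformers` image-classification pipeline (not from the original card; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="QIANWEI/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("satellite_tile.png"))  # placeholder path; returns top labels with scores
```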
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1296 | 1.0 | 190 | 0.0646 | 0.9774 |
| 0.1257 | 2.0 | 380 | 0.0445 | 0.9841 |
| 0.1067 | 3.0 | 570 | 0.0416 | 0.9852 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AleBurzio/bloom-better-riddles
|
AleBurzio
| 2022-12-09T08:33:35Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"generated_from_trainer",
"dataset:pszemraj/riddlesense_plusplus",
"license:bigscience-bloom-rail-1.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-12-09T00:49:55Z |
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
datasets:
- pszemraj/riddlesense_plusplus
metrics:
- accuracy
model-index:
- name: bloom-better-riddles
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: pszemraj/riddlesense_plusplus
type: pszemraj/riddlesense_plusplus
metrics:
- name: Accuracy
type: accuracy
value: 0.6594206731193033
parameters:
min_length: 16
max_length: 96
no_repeat_ngram_size: 1
do_sample: True
pipeline_tag:
text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-better-riddles
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the pszemraj/riddlesense_plusplus dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8107
- Accuracy: 0.6594
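A minimal generation sketch using the widget parameters listed in the metadata above (the riddle prompt is illustrative, not from the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="AleBurzio/bloom-better-riddles")

out = generator(
    "What has keys but can't open locks?",
    min_length=16,
    max_length=96,
    no_repeat_ngram_size=1,
    do_sample=True,
)
print(out[0]["generated_text"])
```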
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
luongtran/test
|
luongtran
| 2022-12-09T08:20:03Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-12-09T08:20:03Z |
---
license: bigscience-bloom-rail-1.0
---
|
tapadipti/tds-huggingpics
|
tapadipti
| 2022-12-09T07:34:02Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-09T07:33:49Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: tds-huggingpics
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
# tds-huggingpics
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### bed

#### chair

#### closet

#### couch

#### table

|
jmsalvi/dqn-SpaceInvadersNoFrameskip-v4
|
jmsalvi
| 2022-12-09T07:29:44Z | 10 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-12T04:59:55Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 457.50 +/- 157.18
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jmsalvi -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jmsalvi -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jmsalvi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 150000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.15),
('frame_stack', 3),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
minhhoque/vit-base-patch16-224-in21k_ft-cifar10test
|
minhhoque
| 2022-12-09T06:59:18Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-08T05:28:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: vit-base-patch16-224-in21k_ft-cifar10test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k_ft-cifar10test
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
shripadbhat/whisper-tiny-mr
|
shripadbhat
| 2022-12-09T06:56:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"mr",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-09T05:13:59Z |
---
language:
- mr
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny Marathi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: mr
split: test
args: mr
metrics:
- name: Wer
type: wer
value: 41.645121785276906
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Marathi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4618
- Wer: 41.6451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6182 | 0.95 | 200 | 0.6224 | 53.6706 |
| 0.4364 | 1.9 | 400 | 0.5200 | 47.2071 |
| 0.3668 | 2.84 | 600 | 0.4830 | 44.4890 |
| 0.294 | 3.79 | 800 | 0.4671 | 42.8562 |
| 0.2729 | 4.74 | 1000 | 0.4642 | 42.1214 |
| 0.2401 | 5.69 | 1200 | 0.4614 | 41.6996 |
| 0.2212 | 6.64 | 1400 | 0.4618 | 41.7778 |
| 0.2093 | 7.58 | 1600 | 0.4618 | 41.6451 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
huam/ppo-LunarLander-v2
|
huam
| 2022-12-09T06:19:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T03:53:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.82 +/- 15.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename below is assumed; adjust it to the actual .zip stored in this repo
checkpoint = load_from_hub("huam/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
OFA-Sys/chinese-clip-vit-large-patch14-336px
|
OFA-Sys
| 2022-12-09T06:10:57Z | 368 | 23 |
transformers
|
[
"transformers",
"pytorch",
"chinese_clip",
"zero-shot-image-classification",
"vision",
"arxiv:2211.01335",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2022-11-09T09:40:25Z |
---
tags:
- vision
widget:
- src: https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16/resolve/main/festival.jpg
candidate_labels: 灯笼, 鞭炮, 对联
example_title: festival
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: 音乐表演, 体育运动
example_title: cat & dog
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
candidate_labels: 梅西, C罗, 马奎尔
example_title: football
---
# Chinese-CLIP-ViT-Large-Patch14-336px
## Introduction
This is the large-version of the Chinese CLIP, with ViT-L/14@336px as the image encoder and RoBERTa-wwm-base as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. For more details, please refer to our technical report https://arxiv.org/abs/2211.01335 and our official github repo https://github.com/OFA-Sys/Chinese-CLIP (Welcome to star! 🔥🔥)
## Use with the official API
We provide a simple code snippet to show how to use the API of Chinese-CLIP to compute the image & text embeddings and similarities.
```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-large-patch14-336px")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-large-patch14-336px")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Squirtle, Bulbasaur, Charmander, Pikachu in English
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]
# compute image feature
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute text features
inputs = processor(text=texts, padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # probs: [[0.0219, 0.0316, 0.0043, 0.9423]]
```
However, if you are not satisfied with only using the API, feel free to check our github repo https://github.com/OFA-Sys/Chinese-CLIP for more details about training and inference.
<br><br>
## Results
**MUGE Text-to-Image Retrieval**:
<table border="1" width="100%">
<tr align="center">
<th>Setup</th><th colspan="4">Zero-shot</th><th colspan="4">Finetune</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MR</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MR</td>
</tr>
<tr align="center">
<td width="120%">Wukong</td><td>42.7</td><td>69.0</td><td>78.0</td><td>63.2</td><td>52.7</td><td>77.9</td><td>85.6</td><td>72.1</td>
</tr>
<tr align="center">
<td width="120%">R2D2</td><td>49.5</td><td>75.7</td><td>83.2</td><td>69.5</td><td>60.1</td><td>82.9</td><td>89.4</td><td>77.5</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP</td><td>63.0</td><td>84.1</td><td>89.2</td><td>78.8</td><td>68.9</td><td>88.7</td><td>93.1</td><td>83.6</td>
</tr>
</table>
<br>
**Flickr30K-CN Retrieval**:
<table border="1" width="120%">
<tr align="center">
<th>Task</th><th colspan="6">Text-to-Image</th><th colspan="6">Image-to-Text</th>
</tr>
<tr align="center">
<th>Setup</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td>
</tr>
<tr align="center">
<td width="120%">Wukong</td><td>51.7</td><td>78.9</td><td>86.3</td><td>77.4</td><td>94.5</td><td>97.0</td><td>76.1</td><td>94.8</td><td>97.5</td><td>92.7</td><td>99.1</td><td>99.6</td>
</tr>
<tr align="center">
<td width="120%">R2D2</td><td>60.9</td><td>86.8</td><td>92.7</td><td>84.4</td><td>96.7</td><td>98.4</td><td>77.6</td><td>96.7</td><td>98.9</td><td>95.6</td><td>99.8</td><td>100.0</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP</td><td>71.2</td><td>91.4</td><td>95.5</td><td>83.8</td><td>96.9</td><td>98.6</td><td>81.6</td><td>97.5</td><td>98.8</td><td>95.3</td><td>99.7</td><td>100.0</td>
</tr>
</table>
<br>
**COCO-CN Retrieval**:
<table border="1" width="100%">
<tr align="center">
<th>Task</th><th colspan="6">Text-to-Image</th><th colspan="6">Image-to-Text</th>
</tr>
<tr align="center">
<th>Setup</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td>
</tr>
<tr align="center">
<td width="120%">Wukong</td><td>53.4</td><td>80.2</td><td>90.1</td><td>74.0</td><td>94.4</td><td>98.1</td><td>55.2</td><td>81.0</td><td>90.6</td><td>73.3</td><td>94.0</td><td>98.0</td>
</tr>
<tr align="center">
<td width="120%">R2D2</td><td>56.4</td><td>85.0</td><td>93.1</td><td>79.1</td><td>96.5</td><td>98.9</td><td>63.3</td><td>89.3</td><td>95.7</td><td>79.3</td><td>97.1</td><td>98.7</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP</td><td>69.2</td><td>89.9</td><td>96.1</td><td>81.5</td><td>96.9</td><td>99.1</td><td>63.0</td><td>86.6</td><td>92.9</td><td>83.5</td><td>97.3</td><td>99.2</td>
</tr>
</table>
<br>
**Zero-shot Image Classification**:
<table border="1" width="100%">
<tr align="center">
<th>Task</th><th>CIFAR10</th><th>CIFAR100</th><th>DTD</th><th>EuroSAT</th><th>FER</th><th>FGVC</th><th>KITTI</th><th>MNIST</th><th>PC</th><th>VOC</th>
</tr>
<tr align="center">
<td width="150%">GIT</td><td>88.5</td><td>61.1</td><td>42.9</td><td>43.4</td><td>41.4</td><td>6.7</td><td>22.1</td><td>68.9</td><td>50.0</td><td>80.2</td>
</tr>
<tr align="center">
<td width="150%">ALIGN</td><td>94.9</td><td>76.8</td><td>66.1</td><td>52.1</td><td>50.8</td><td>25.0</td><td>41.2</td><td>74.0</td><td>55.2</td><td>83.0</td>
</tr>
<tr align="center">
<td width="150%">CLIP</td><td>94.9</td><td>77.0</td><td>56.0</td><td>63.0</td><td>48.3</td><td>33.3</td><td>11.5</td><td>79.0</td><td>62.3</td><td>84.0</td>
</tr>
<tr align="center">
<td width="150%">Wukong</td><td>95.4</td><td>77.1</td><td>40.9</td><td>50.3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td>
</tr>
<tr align="center">
<td width="150%">CN-CLIP</td><td>96.0</td><td>79.7</td><td>51.2</td><td>52.0</td><td>55.1</td><td>26.2</td><td>49.9</td><td>79.4</td><td>63.5</td><td>84.9</td>
</tr>
</table>
<br>
## Citation
If you find Chinese CLIP helpful, feel free to cite our paper. Thanks for your support!
```
@article{chinese-clip,
title={Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese},
author={Yang, An and Pan, Junshu and Lin, Junyang and Men, Rui and Zhang, Yichang and Zhou, Jingren and Zhou, Chang},
journal={arXiv preprint arXiv:2211.01335},
year={2022}
}
```
<br>
|
Hyuk/wav2vec2-korean-v1
|
Hyuk
| 2022-12-09T05:48:00Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-07T00:16:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-korean-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-korean-v1
This model is a fine-tuned version of [Hyuk/wav2vec2-korean-v1](https://huggingface.co/Hyuk/wav2vec2-korean-v1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
birgermoell/whisper-small-sv-fast
|
birgermoell
| 2022-12-09T05:37:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"sv",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T17:22:17Z |
---
language:
- sv
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Swedish Fast
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 sv-SE
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: test
args: sv-SE
metrics:
- name: Wer
type: wer
value: 62.69218363616815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Swedish Fast
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 sv-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8738
- Wer: 62.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 2.0512 | 6.01 | 1000 | 2.5997 | 87.1949 |
| 0.4367 | 12.02 | 2000 | 1.8089 | 68.1271 |
| 0.0806 | 18.03 | 3000 | 1.7969 | 63.5711 |
| 0.0194 | 25.01 | 4000 | 1.8435 | 63.4663 |
| 0.0121 | 31.02 | 5000 | 1.8738 | 62.6922 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
birgermoell/whisper-tiny-sv-fast
|
birgermoell
| 2022-12-09T05:26:28Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"sv",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T14:43:04Z |
---
language:
- sv
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny Swedish Fast
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 sv-SE
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: test
args: sv-SE
metrics:
- name: Wer
type: wer
value: 73.01634232878185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Swedish Fast
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 sv-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4344
- Wer: 73.0163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5547 | 6.01 | 1000 | 1.9244 | 113.4448 |
| 0.7244 | 12.02 | 2000 | 1.4593 | 81.0128 |
| 0.3583 | 18.03 | 3000 | 1.4019 | 74.3415 |
| 0.2157 | 25.01 | 4000 | 1.4249 | 73.8953 |
| 0.1897 | 31.02 | 5000 | 1.4344 | 73.0163 |
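The Wer column above is a percentage word error rate. A small sketch of how such a score can be computed with the `evaluate` library (the transcripts below are made-up placeholders, not taken from the Common Voice test set):

```python
import evaluate

wer_metric = evaluate.load("wer")
# Placeholder reference/prediction pairs for illustration only.
references = ["det här är ett exempel", "hon åker till stockholm"]
predictions = ["det har är ett exempel", "hon åker till stockholm"]
wer = wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {100 * wer:.2f}%")
```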
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Gladiator/albert-large-v2_ner_conll2003
|
Gladiator
| 2022-12-09T05:07:13Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-09T04:42:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: albert-large-v2_ner_conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9396018069265518
- name: Recall
type: recall
value: 0.9451363177381353
- name: F1
type: f1
value: 0.9423609363201612
- name: Accuracy
type: accuracy
value: 0.9874810170943499
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2_ner_conll2003
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0584
- Precision: 0.9396
- Recall: 0.9451
- F1: 0.9424
- Accuracy: 0.9875
## Model description
More information needed
## Intended uses & limitations
More information needed
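Pending details from the author, a minimal usage sketch (assuming the checkpoint is published under this repo id and loads through the token-classification pipeline):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Gladiator/albert-large-v2_ner_conll2003",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```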
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2034 | 1.0 | 878 | 0.0653 | 0.9114 | 0.9278 | 0.9195 | 0.9837 |
| 0.0561 | 2.0 | 1756 | 0.0602 | 0.9316 | 0.9280 | 0.9298 | 0.9845 |
| 0.0303 | 3.0 | 2634 | 0.0536 | 0.9380 | 0.9424 | 0.9402 | 0.9872 |
| 0.0177 | 4.0 | 3512 | 0.0535 | 0.9393 | 0.9456 | 0.9425 | 0.9877 |
| 0.011 | 5.0 | 4390 | 0.0584 | 0.9396 | 0.9451 | 0.9424 | 0.9875 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
schrilax/PPO-LunarLander-v2
|
schrilax
| 2022-12-09T04:48:19Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-09T04:47:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.46 +/- 22.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
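Until the author adds their own snippet, a hedged sketch of loading and evaluating such a checkpoint (the filename inside the repo is an assumption and may differ):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed checkpoint filename; adjust to the actual file in the repo.
checkpoint = load_from_hub(
    repo_id="schrilax/PPO-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```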
|
jibi2906/my-finetuned-distilbert
|
jibi2906
| 2022-12-09T04:38:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-12-09T04:38:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-finetuned-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-finetuned-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6482
- Validation Loss: 1.3103
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
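The serialized optimizer dictionary above corresponds to the `AdamWeightDecay` schedule produced by `transformers.create_optimizer`; a hedged reconstruction (the total step count is an assumption derived from the 1000 warmup and 1500 decay steps, and exact bookkeeping may vary between versions):

```python
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # initial_learning_rate above
    num_train_steps=2500,    # assumption: 1000 warmup + 1500 decay steps
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```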
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.6482 | 1.3103 | 0 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Gladiator/roberta-large_ner_conll2003
|
Gladiator
| 2022-12-09T04:24:37Z | 7,986 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-09T03:45:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large_ner_conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9622389306599833
- name: Recall
type: recall
value: 0.9692022887916526
- name: F1
type: f1
value: 0.9657080573488722
- name: Accuracy
type: accuracy
value: 0.9939449398387913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large_ner_conll2003
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0345
- Precision: 0.9622
- Recall: 0.9692
- F1: 0.9657
- Accuracy: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
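The entity-level precision, recall, and F1 reported above are the usual CoNLL-style scores, typically computed with `seqeval`; a small illustrative sketch with made-up BIO sequences:

```python
from seqeval.metrics import classification_report

# Hypothetical gold and predicted tag sequences for two sentences.
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["B-MISC", "O"]]
print(classification_report(y_true, y_pred))
```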
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1227 | 1.0 | 878 | 0.0431 | 0.9511 | 0.9559 | 0.9535 | 0.9914 |
| 0.0295 | 2.0 | 1756 | 0.0334 | 0.9541 | 0.9657 | 0.9599 | 0.9930 |
| 0.0163 | 3.0 | 2634 | 0.0327 | 0.9616 | 0.9682 | 0.9649 | 0.9938 |
| 0.0073 | 4.0 | 3512 | 0.0342 | 0.9624 | 0.9692 | 0.9658 | 0.9939 |
| 0.0042 | 5.0 | 4390 | 0.0345 | 0.9622 | 0.9692 | 0.9657 | 0.9939 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
flamesbob/skyfireModel
|
flamesbob
| 2022-12-09T02:53:53Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-12-09T01:59:01Z |
---
license: creativeml-openrail-m
---
|
chailatte/steven-universe
|
chailatte
| 2022-12-09T02:53:19Z | 0 | 1 | null |
[
"license:unknown",
"region:us"
] | null | 2022-12-09T02:42:44Z |
---
license: unknown
---
Token: su_mdl
Class: style
Example: 1girl, grin, solo, female focus, smile, sparkling eyes, shiny hair, su_mdl style
I get good results using these negative prompts:
bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
With a CFG Scale of 11.
This was trained on top of Anything.ckpt using 100 screenshots from Steven Universe for 10k steps.
|
YesIfwRONG/Zero
|
YesIfwRONG
| 2022-12-09T02:48:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-12-09T02:48:01Z |
This is a capstone project for training the model and exploring AI implementation.
|
PiyarSquare/sd_asim_simpsons
|
PiyarSquare
| 2022-12-09T01:38:31Z | 0 | 41 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-12-08T22:16:20Z |
---
license: creativeml-openrail-m
---
### 💥🎨 The Simpsons dreambooth model.
This is a fine-tuned Stable Diffusion model based on The Simpsons.
Use **asim style** in your prompts.
The model has some trouble with double pupils and no pupils.
Using "cross-eyed" in the negative prompt appears to help?
### Sample images:
Samples are made with [dynamic prompts](https://github.com/adieyal/sd-dynamic-prompts), Euler 80 steps @ CFG 12. Negative prompts: watermark, text, signature, cross-eyed



For people / characters:
asim style. dramatic beautiful { headshot | portrait } of \_\_person\_\_ {outside { in a garden | in a desert | on a mountain top | at a roman ruin} {at sunrise | at sunset | on an overcast afternoon | in the rain | in the snow | at night} | inside {a fancy living room | on a movie set | a vast empty dark space | a kaleidoscope | an ancient library} with {spotlights | neon lights | soft mood lighting | firefly lights } }. detailed background.

For animals:
asim style. dramatic closeup national geographic image of a \_\_animal\_\_ in its natural habitat. at {sunrise|sunset|night}. detailed background.

asim style. + random prompt from the internet of cool looking structures: steampunk library, tower of babel, tree house, haunted victorian.


biomes:
asim style. a beautiful {summer | autumn | winter | spring } landscape panorama painting of \_\_biome\_\_ {at sunrise | at sunset | on an overcast afternoon | in the rain | in the snow | at night}
famous places:
asim style. a beautiful panorama view of \_\_places\_\_ {at sunrise | at sunset | on a cloudy afternoon | in the rain | covered in snow}.

flowers:
asim style. a beautiful vase of \_\_flower\_\_ flowers. on a balcony table at { sunrise | sunset | night} . nearby a {bottle of {beer | wine} and a half-empty glass | bowl of fruit}.


asim style. + random prompt from the internet. The model mixes well with existing prompts with artists and styles, though not so well with keywords like "photo-realistic."
Based on StableDiffusion 1.5 model (full weights).
### Training
Made with [automatic1111 webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) + [d8ahazard dreambooth extension](https://github.com/d8ahazard/sd_dreambooth_extension) + [nitrosocke guide](https://github.com/nitrosocke/dreambooth-training-guide).
100 hand-cut training images.
About 70% people, 20% landscapes and 10% animals and objects.
Maybe one too many Cletus.
Detailed captions were written for each image such as: "A wide shot of a 40-year-old Caucasian man with glasses and a mustache. Dressed in a fishing hat, pink shirt, an olive fishing vest with pockets and brown trousers, sitting in a canoe on a lake. The man is fishing with a red fishing rod. There are trees and mountains in the background at sunset with a few clouds in the sky."
Learning rate was 1.72e-6 for 10,000 steps without prior preservation.
Useful tips came from the r/StableDiffusion subreddit and the discussions on d8ahazard's extension.
Notes on training on [d8ahazard dreambooth extension discussion](https://github.com/d8ahazard/sd_dreambooth_extension/discussions/443).
I am excited to see what people do with this and I would like to improve the eyes, if anyone has suggestions.
|
ypluit/stt_kr_citrinet1024_PublicCallCenter_1000H
|
ypluit
| 2022-12-09T01:24:30Z | 2 | 0 |
nemo
|
[
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"Citrinet1024",
"NeMo",
"pytorch",
"kr",
"dataset:RealCallData",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-12-09T01:18:30Z |
---
language:
- kr
license: cc-by-4.0
library_name: nemo
datasets:
- RealCallData
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- Citrinet1024
- NeMo
- pytorch
model-index:
- name: stt_kr_citrinet1024_PublicCallCenter_1000H
results: []
---
## Model Overview
Korean automatic speech recognition model (Citrinet-1024, NeMo) trained on Korean call-center telephone speech; it transcribes 16 kHz mono audio to text.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend installing it after you have installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [1], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("ypluit/stt_kr_citrinet1024_PublicCallCenter_1000H")
```
### Transcribing using Python
First, get a sample: any Korean telephone-voice WAV file will do.
Then simply do:
```
asr_model.transcribe(['sample-kr.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="ypluit/stt_kr_citrinet1024_PublicCallCenter_1000H" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000Hz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
See nemo toolkit and reference papers.
## Training
Trained for about 10 days on 2 A6000 GPUs.
### Datasets
Private real call-center data (650 hours)
## Performance
0.13 CER
## Limitations
This model was trained with 650 hours of Korean telephone voice data for customer service in a call center. Performance may be poor for general-purpose dialogue and specific accents.
## References
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
SatCat/ppo-Huggy
|
SatCat
| 2022-12-09T01:14:23Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-09T01:14:17Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: SatCat/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
izumi-lab/electra-small-japanese-discriminator
|
izumi-lab
| 2022-12-09T00:41:39Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"ja",
"dataset:wikipedia",
"arxiv:2003.10555",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---
# ELECTRA small Japanese discriminator
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA implementation](https://github.com/google-research/electra); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese version of Wikipedia, using the Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555) except size; 128 tokens per instance, 128 instances per batch, and 1M training steps.
The size of the generator is the same as that of the discriminator.
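A minimal usage sketch for the discriminator (assumptions: the `fugashi` and `ipadic` packages are needed for the MeCab-based tokenizer, and the example sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_id = "izumi-lab/electra-small-japanese-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

inputs = tokenizer("東京大学で自然言語処理の研究をしています。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Positive logits mark tokens the discriminator judges to be "replaced".
print(torch.sigmoid(logits).round())
```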
## Citation
```
@article{Suzuki-etal-2023-ipm,
title = {Constructing and analyzing domain-specific language model for financial text mining},
author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
journal = {Information Processing & Management},
volume = {60},
number = {2},
pages = {103194},
year = {2023},
doi = {10.1016/j.ipm.2022.103194}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
izumi-lab/bert-small-japanese-fin
|
izumi-lab
| 2022-12-09T00:41:24Z | 7,535 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"finance",
"ja",
"arxiv:2003.10555",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
tags:
- finance
widget:
- text: 流動[MASK]は、1億円となりました。
---
# BERT small Japanese finance
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as BERT small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on Wikipedia corpus and financial corpus.
The Wikipedia corpus is generated from the Japanese Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
The financial corpus consists of 2 corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file is 5.2GB, consisting of approximately 27M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as BERT small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1.45M training steps.
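A minimal masked-language-modelling sketch, reusing the widget example above (assumption: the `fugashi` and `ipadic` packages are required for the MeCab-based tokenizer):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="izumi-lab/bert-small-japanese-fin")
print(fill_mask("流動[MASK]は、1億円となりました。"))
```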
## Citation
```
@article{Suzuki-etal-2023-ipm,
title = {Constructing and analyzing domain-specific language model for financial text mining},
author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
journal = {Information Processing & Management},
volume = {60},
number = {2},
pages = {103194},
year = {2023},
doi = {10.1016/j.ipm.2022.103194}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
izumi-lab/bert-small-japanese
|
izumi-lab
| 2022-12-09T00:40:57Z | 1,069 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"arxiv:2003.10555",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---
# BERT small Japanese
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as BERT small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese version of Wikipedia, using the Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as BERT small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1.45M training steps.
## Citation
```
@article{Suzuki-etal-2023-ipm,
title = {Constructing and analyzing domain-specific language model for financial text mining},
author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
journal = {Information Processing & Management},
volume = {60},
number = {2},
pages = {103194},
year = {2023},
doi = {10.1016/j.ipm.2022.103194}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
AleBurzio/bloom-560M-riddles
|
AleBurzio
| 2022-12-09T00:39:35Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"generated_from_trainer",
"dataset:pszemraj/riddlesense_plusplus",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-12-06T00:26:33Z |
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
datasets:
- pszemraj/riddlesense_plusplus
model-index:
- name: tst-modeling
results: []
parameters:
min_length: 16
max_length: 96
no_repeat_ngram_size: 1
do_sample: True
pipeline_tag:
text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-modeling
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the pszemraj/riddlesense_plusplus dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
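Pending details from the author, a minimal generation sketch that reuses the sampling parameters declared in the card metadata (the prompt is a made-up riddle, not taken from the training data):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="AleBurzio/bloom-560M-riddles")
outputs = generator(
    "What has keys but can't open locks?",
    min_length=16,
    max_length=96,
    no_repeat_ngram_size=1,
    do_sample=True,
)
print(outputs[0]["generated_text"])
```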
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
izumi-lab/electra-small-paper-japanese-fin-discriminator
|
izumi-lab
| 2022-12-09T00:39:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"finance",
"ja",
"arxiv:2003.10555",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
tags:
- finance
widget:
- text: 流動[MASK]は1億円となりました。
---
# ELECTRA small Japanese finance discriminator
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Wikipedia corpus and a financial corpus.
The Wikipedia corpus is generated from the Japanese version of Wikipedia, using the Wikipedia dump file as of June 1, 2021.
The Wikipedia corpus file is 2.9GB, consisting of approximately 20M sentences.
The financial corpus consists of 2 corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file is 5.2GB, consisting of approximately 27M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1M training steps.
## Citation
```
@article{Suzuki-etal-2023-ipm,
title = {Constructing and analyzing domain-specific language model for financial text mining},
author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
journal = {Information Processing & Management},
volume = {60},
number = {2},
pages = {103194},
year = {2023},
doi = {10.1016/j.ipm.2022.103194}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
izumi-lab/electra-small-paper-japanese-discriminator
|
izumi-lab
| 2022-12-09T00:38:44Z | 1 | 1 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"ja",
"dataset:wikipedia",
"arxiv:2003.10555",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---
# ELECTRA small Japanese discriminator
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese version of Wikipedia, using the Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1M training steps.
The size of the generator is 1/4 of the size of the discriminator.
## Citation
```
@article{Suzuki-etal-2023-ipm,
title = {Constructing and analyzing domain-specific language model for financial text mining},
author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
journal = {Information Processing & Management},
volume = {60},
number = {2},
pages = {103194},
year = {2023},
doi = {10.1016/j.ipm.2022.103194}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
teddy322/wav2vec2-large-xls-r-300m-kor-11385-2
|
teddy322
| 2022-12-09T00:30:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:zeroth_korean_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-05T08:41:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
model-index:
- name: wav2vec2-large-xls-r-300m-kor-11385-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kor-11385-2
This model is a fine-tuned version of [teddy322/wav2vec2-large-xls-r-300m-kor-11385](https://huggingface.co/teddy322/wav2vec2-large-xls-r-300m-kor-11385) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2059
- eval_wer: 0.1471
- eval_runtime: 136.7247
- eval_samples_per_second: 3.342
- eval_steps_per_second: 0.424
- epoch: 6.47
- step: 4400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 12
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
wenjalan/starbot-transformers
|
wenjalan
| 2022-12-09T00:30:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-12-06T01:31:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: starbot-transformers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starbot-transformers
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3942 | 1.0 | 2992 | 3.3385 |
| 3.2566 | 2.0 | 5984 | 3.2760 |
| 3.4112 | 3.0 | 8976 | 3.4710 |
| 3.4887 | 4.0 | 11968 | 3.5264 |
| 3.4856 | 5.0 | 14960 | 3.5181 |
| 3.4359 | 6.0 | 17952 | 3.5079 |
| 3.4115 | 7.0 | 20944 | 3.4954 |
| 3.3657 | 8.0 | 23936 | 3.4482 |
| 3.3018 | 9.0 | 26928 | 3.4207 |
| 3.2435 | 10.0 | 29920 | 3.4079 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
matttrent/sd-image-variations-diffusers
|
matttrent
| 2022-12-09T00:18:55Z | 4 | 6 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"image-to-image",
"dataset:ChristophSchuhmann/improved_aesthetics_6plus",
"license:other",
"diffusers:StableDiffusionImageVariationEmbedsPipeline",
"region:us"
] |
image-to-image
| 2022-12-03T18:50:46Z |
---
thumbnail: >-
https://repository-images.githubusercontent.com/523487884/fdb03a69-8353-4387-b5fc-0d85f888a63f
datasets:
- ChristophSchuhmann/improved_aesthetics_6plus
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- image-to-image
duplicated_from: lambdalabs/sd-image-variations-diffusers
---
# Stable Diffusion Image Variations Model Card
This version of Stable Diffusion has been fine-tuned from [CompVis/stable-diffusion-v1-3-original](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original) to accept CLIP image embeddings rather than text embeddings. This allows the creation of "image variations" similar to DALLE-2 using Stable Diffusion. This version of the weights has been ported to Hugging Face Diffusers; using it with the Diffusers library requires the [Lambda Diffusers repo](https://github.com/LambdaLabsML/lambda-diffusers).

## Example
First clone [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers) and install any requirements (in a virtual environment in the example below):
```bash
git clone https://github.com/LambdaLabsML/lambda-diffusers.git
cd lambda-diffusers
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
Then run the following python code:
```python
from pathlib import Path
from lambda_diffusers import StableDiffusionImageEmbedPipeline
from PIL import Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionImageEmbedPipeline.from_pretrained("lambdalabs/sd-image-variations-diffusers")
pipe = pipe.to(device)
im = Image.open("your/input/image/here.jpg")
num_samples = 4
image = pipe(num_samples*[im], guidance_scale=3.0)
image = image["sample"]
base_path = Path("outputs/im2im")
base_path.mkdir(exist_ok=True, parents=True)
for idx, im in enumerate(image):
im.save(base_path/f"{idx:06}.jpg")
```
# Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
This model is fine-tuned from Stable Diffusion v1-3, with the text encoder replaced by an image encoder. The training procedure is the same as for Stable Diffusion, except that images are encoded through a ViT-L/14 image encoder, including the final projection layer to the CLIP shared embedding space.
- **Hardware:** 4 x A6000 GPUs (provided by [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud))
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Steps**: 87,000
- **Batch:** 6 x 4 = 24
- **Learning rate:** warmup to 0.0001 for 1,000 steps and then kept constant
Training was done using a [modified version of the original Stable Diffusion training code](https://github.com/justinpinkney/stable-diffusion); the original version of the weights is [here](https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned).
# Uses
_The following section is adapted from the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4)_
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
*This model card was written by: Justin Pinkney and is based on the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4).*
|
mtlulka/ppo-Huggy_unit1_bonus
|
mtlulka
| 2022-12-09T00:12:48Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-09T00:12:40Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: maciekov01/ppo-Huggy_unit1_bonus
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
log0/ppo-LunarLander-v2
|
log0
| 2022-12-08T23:50:27Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-08T21:27:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 182.69 +/- 91.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
qiaokuoyuan/autotrain-medical-2387774761
|
qiaokuoyuan
| 2022-12-08T23:15:46Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"zh",
"dataset:qiaokuoyuan/autotrain-data-medical-cfa966ee",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-08T23:15:13Z |
---
tags:
- autotrain
- token-classification
language:
- zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- qiaokuoyuan/autotrain-data-medical-cfa966ee
co2_eq_emissions:
emissions: 0.7237073793849912
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2387774761
- CO2 Emissions (in grams): 0.7237
## Validation Metrics
- Loss: 0.032
- Accuracy: 0.990
- Precision: 0.000
- Recall: 0.000
- F1: 0.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/qiaokuoyuan/autotrain-medical-2387774761
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("qiaokuoyuan/autotrain-medical-2387774761", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("qiaokuoyuan/autotrain-medical-2387774761", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
mnavas/ppo-doggy
|
mnavas
| 2022-12-08T23:03:56Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-08T23:03:48Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: mnavas/ppo-doggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HideOnBush/BERTModified-fullsize-finetuned-wikitext-test
|
HideOnBush
| 2022-12-08T22:43:41Z | 0 | 0 | null |
[
"pytorch",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2022-12-08T19:49:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERTModified-fullsize-finetuned-wikitext-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTModified-fullsize-finetuned-wikitext-test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7813
- Precision: 0.1094
- Recall: 0.1094
- F1: 0.1094
- Accuracy: 0.1094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 9.2391 | 1.0 | 4382 | 8.1610 | 0.0373 | 0.0373 | 0.0373 | 0.0373 |
| 7.9147 | 2.0 | 8764 | 7.6870 | 0.0635 | 0.0635 | 0.0635 | 0.0635 |
| 7.5164 | 3.0 | 13146 | 7.4388 | 0.0727 | 0.0727 | 0.0727 | 0.0727 |
| 7.2439 | 4.0 | 17528 | 7.2088 | 0.0930 | 0.0930 | 0.0930 | 0.0930 |
| 7.1068 | 5.0 | 21910 | 7.0455 | 0.0943 | 0.0943 | 0.0943 | 0.0943 |
| 6.9711 | 6.0 | 26292 | 6.9976 | 0.1054 | 0.1054 | 0.1054 | 0.1054 |
| 6.8486 | 7.0 | 30674 | 6.8850 | 0.1054 | 0.1054 | 0.1054 | 0.1054 |
| 6.78 | 8.0 | 35056 | 6.7990 | 0.1153 | 0.1153 | 0.1153 | 0.1153 |
| 6.73 | 9.0 | 39438 | 6.8041 | 0.1074 | 0.1074 | 0.1074 | 0.1074 |
| 6.6921 | 10.0 | 43820 | 6.7412 | 0.1251 | 0.1251 | 0.1251 | 0.1251 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.2
|
jinghua2tang/ppo-Huggy
|
jinghua2tang
| 2022-12-08T22:38:13Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-08T22:38:06Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: jinghua2tang/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mtlulka/ppo-LunarLander_unit1_base
|
mtlulka
| 2022-12-08T22:35:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-08T22:35:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO_MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.01 +/- 13.07
name: mean_reward
verified: false
---
# **PPO_MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **PPO_MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
evageon/whisper-tiny-ar
|
evageon
| 2022-12-08T22:34:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T15:41:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-ar
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8394
- Wer: 86.0500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.0265 | 1.0 | 122 | 1.0110 | 98.4608 |
| 0.9208 | 2.0 | 244 | 0.9148 | 88.3812 |
| 0.8169 | 3.0 | 366 | 0.8394 | 86.0500 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
burakyldrm/wav2vec2-burak-new-300-v2-8-medium
|
burakyldrm
| 2022-12-08T22:30:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T13:12:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-burak-new-300-v2-8-medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-8-medium
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4311
- Wer: 0.2602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 151
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.7459 | 9.43 | 500 | 0.3843 | 0.4597 |
| 0.579 | 18.87 | 1000 | 0.3006 | 0.3668 |
| 0.2662 | 28.3 | 1500 | 0.3760 | 0.3503 |
| 0.1936 | 37.74 | 2000 | 0.3631 | 0.3214 |
| 0.157 | 47.17 | 2500 | 0.3838 | 0.3063 |
| 0.1307 | 56.6 | 3000 | 0.3671 | 0.3056 |
| 0.1138 | 66.04 | 3500 | 0.3700 | 0.2959 |
| 0.1002 | 75.47 | 4000 | 0.4164 | 0.3014 |
| 0.0874 | 84.91 | 4500 | 0.4001 | 0.2973 |
| 0.0791 | 94.34 | 5000 | 0.3883 | 0.2911 |
| 0.0667 | 103.77 | 5500 | 0.4220 | 0.2780 |
| 0.0581 | 113.21 | 6000 | 0.4163 | 0.2670 |
| 0.0506 | 122.64 | 6500 | 0.4065 | 0.2753 |
| 0.043 | 132.08 | 7000 | 0.4279 | 0.2643 |
| 0.0386 | 141.51 | 7500 | 0.4284 | 0.2650 |
| 0.0341 | 150.94 | 8000 | 0.4311 | 0.2602 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ksaml/ppo-Huggy
|
ksaml
| 2022-12-08T22:26:22Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-08T22:26:13Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial that walks through training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: ksaml/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
reyrobs/whisper-small-hi-2000-temp
|
reyrobs
| 2022-12-08T22:15:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T21:59:54Z |
---
tags:
- generated_from_trainer
model-index:
- name: whisper-small-hi-2000-temp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi-2000-temp
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000599
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
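Since this is a Whisper checkpoint trained with a fixed step budget, the list above corresponds roughly to a `Seq2SeqTrainingArguments` configuration like the sketch below; the output directory is assumed and the data/Trainer setup is omitted.
```python
from transformers import Seq2SeqTrainingArguments

# Step-based schedule matching the values listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-hi-2000-temp",  # assumed
    learning_rate=5.99e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2000,  # "training_steps: 2000"
    fp16=True,  # Native AMP
)
```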
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
mnavas/hf-rl-landing
|
mnavas
| 2022-12-08T21:53:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-08T21:52:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.76 +/- 22.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is not stated here, so it is left as a placeholder):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Replace the placeholder filename with the .zip checkpoint stored in this repository.
checkpoint = load_from_hub(repo_id="mnavas/hf-rl-landing", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
jegormeister/setfit-model
|
jegormeister
| 2022-12-08T21:51:42Z | 2 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-12-08T21:45:27Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jegormeister/setfit-model
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jegormeister/setfit-model')
embeddings = model.encode(sentences)
print(embeddings)
```
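Because the model is intended for sentence similarity, a common next step is to score pairs of sentences with cosine similarity; a brief sketch using the utilities bundled with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jegormeister/setfit-model')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```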
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jegormeister/setfit-model')
model = AutoModel.from_pretrained('jegormeister/setfit-model')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jegormeister/setfit-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 188 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 188,
"warmup_steps": 19,
"weight_decay": 0.01
}
```
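For orientation, the DataLoader, loss, and `fit()` parameters above correspond to a call along the following lines; the actual training pairs are not published in this card, so the example data and the starting checkpoint are placeholders.
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Placeholder data: the real training pairs/labels behind this model are not included in the card.
train_examples = [InputExample(texts=["sentence A", "sentence B"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

model = SentenceTransformer('jegormeister/setfit-model')  # illustrative; base checkpoint not stated
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=19,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```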
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
dim/rl_course_1
|
dim
| 2022-12-08T21:46:50Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-08T21:46:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.31 +/- 14.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is not stated here, so it is left as a placeholder):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Replace the placeholder filename with the .zip checkpoint stored in this repository.
checkpoint = load_from_hub(repo_id="dim/rl_course_1", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
marik0/ppo-LunarLander-v2
|
marik0
| 2022-12-08T21:34:43Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-06T14:04:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.72 +/- 18.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is not stated here, so it is left as a placeholder):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Replace the placeholder filename with the .zip checkpoint stored in this repository.
checkpoint = load_from_hub(repo_id="marik0/ppo-LunarLander-v2", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
ruzarx/ppo-Huggy
|
ruzarx
| 2022-12-08T21:18:24Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-08T21:09:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial that walks through training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: ruzarx/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
matthh/ppo-Huggy
|
matthh
| 2022-12-08T21:08:49Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-08T21:08:43Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial that walks through training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: matthh/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
aychang/distilbert-base-cased-trec-coarse
|
aychang
| 2022-12-08T20:36:13Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:trec",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: mit
tags:
- text-classification
datasets:
- trec
model-index:
- name: aychang/distilbert-base-cased-trec-coarse
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: trec
type: trec
config: default
split: test
metrics:
- type: accuracy
value: 0.97
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGNmZTQ1Mjk3YTQ0NTdiZmY2NGM2NDM2Yzc2OTI4NGNiZDg4MmViN2I0ZGZiYWJlMTg1ZDU0MTc2ZTg1NjcwZiIsInZlcnNpb24iOjF9.4x_Ze9S5MbAeIHZ4p1EFmWev8RLkAIYWKqouAzYOxTNqdfFN0HnqULiM19EMP42v658vl_fR3-Ig0xG45DioCA
- type: precision
value: 0.9742915631870833
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjA2MWVjMDc3MDYyY2M3NzY4NGNhY2JlNzJjMGQzZDUzZjE3ZWI1MjVmMzc4ODM2ZTQ4YmRhOTVkZDU0MzJiNiIsInZlcnNpb24iOjF9.EfmXJ6w5_7dK6ys03hpADP9h_sWuPAHgxpltUtCkJP4Ys_Gh8Ak4pGS149zt5AdP_zkvsWlXwAvx5BDMEoB2AA
- type: precision
value: 0.97
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDVjOGFjM2RkMDMxZTFiMzE1ZDM4OTRjMzkwOWE2NTJmMmUwMDdiZDg5ZjExYmFmZjg2Y2Y5NzcxZWVkODkwZSIsInZlcnNpb24iOjF9.BtO7DqJsUhSXE-_tJZJOPPd421VmZ3KR9-KkrhJkLNenoV2Xd6Pu6i5y6HZQhFB-9WfEhU9cCsIPQ1ioZ7dyDA
- type: precision
value: 0.9699546283251607
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGQ0Mzc2MTE2YjkwNGY1MDEzNWQwYmNlZDMzZjBmNWM0ODExYjM1OTQyZGJkNjI2OTA5MDczZjFmOGM5MmMzMyIsInZlcnNpb24iOjF9.fGi2qNpOjWd1ci3p_E1p80nOqabiKiQqpQIxtk5aWxe_Nzqh3XiOCBF8vswCRvX8qTKdCc2ZEJ4s8dZMeltfCA
- type: recall
value: 0.972626762268805
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQwMWZiYjIyMGVhN2M1ZDE5M2EzZmQ1ODRlYzE0MzJhZmU3ZTM1MmIyNTg5ZjBlMDcyMmQ0NmYzZjFmMmM4NSIsInZlcnNpb24iOjF9.SYDxsRw0xoQuQhei0YBdUbBxG891gqLafVFLdPMCJtQIktqCTrPW0sMKtis7GA-FEbNQVu8lp92znvlryNiFCw
- type: recall
value: 0.97
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQ0MjczYjFhZDdiMjdkMWVlZTAzYWU0ODVhNjkxN2I1N2Y1Y2IyOTNlYWQxM2UxODIyNDZhZDM3MWIwMTgzZCIsInZlcnNpb24iOjF9.C5cfDTz_H4Y7nEO4Eq_XFy92CSbo3IBuL5n8wBKkTuB6hSgctTHOdOJzV8gWyMJ9gRcNqxp_yVU4BEB_I_0KAA
- type: recall
value: 0.97
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDZmYWM3OWExZWI1ZjRiZjczYWQwOWI5NWQzNDNkODcyMjBhMmVkYjY0MGZjYzlhNWQ0Y2MyMjc3OWEyZjY4NCIsInZlcnNpb24iOjF9.65WM5ihNfbKOCNZ6apX7iVAC2Ge_cwz9Xwa5oJHFq3Ci97eBFqK-qtADdB_SFRcSQUoNodaBeIhNfe0hVddxCA
- type: f1
value: 0.9729834427867218
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWQyZGZmYjU4NjE4M2YzMTUxOWVkYjU0YTFmYzE3MmQ2NjhmNDY1MGRmNGQ1MWZjYjM1Mzg5Y2RmNTk5YmZiMSIsInZlcnNpb24iOjF9.WIF-fmV0SZ6-lcg3Rz6TjbVl7nLvy_ftDi8PPhDIP1V61jgR1AcjLFeEgeZLxSFMdmU9yqG2DWYubF0luK0jCg
- type: f1
value: 0.97
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM0NDY0YzI2ZTBjYWVmZmVkOTI4ODkzM2RhNWM2ZjkwYTU3N2FjNjA4NjUwYWVjODNhMGEwMzdhYmE2YmIwYyIsInZlcnNpb24iOjF9.sihEhcsOeg8dvpuGgC-KCp1PsRNyguAif2uTBv5ELtRnM5KmMaHzRqpdpdc88Dj_DeuY6Y6qPQJt_dGk2q1rDQ
- type: f1
value: 0.9694196751375908
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTQ5ZjdiM2NiNDNkZTY5ZjNjNWUzZmI1MzgwMjhhNDEzMTEzZjFiNDhmZDllYmI0NjIwYjY0ZjcxM2M0ODE3NSIsInZlcnNpb24iOjF9.x4oR_PL0ALHYl-s4S7cPNPm4asSX3s3h30m-TKe7wpyZs0x6jwOqF-Tb1kgd4IMLl23pzsezmh72e_PmBFpRCg
- type: loss
value: 0.14272506535053253
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODU3NGFiMzIxYWI4NzYxMzUxZGE5ZTZkYTlkN2U5MTI1NzA5NTBiNGM3Y2Q5YmVmZjU0MmU5MjJlZThkZTllMCIsInZlcnNpb24iOjF9.3QeWbECpJ0MHV5gC0_ES6PpwplLsCHPKuToErB1MSG69xNWVyMjKu1-1YEWZOU6dGfwKGh_HvwucY5kC9qwWBQ
---
# TREC 6-class Task: distilbert-base-cased
## Model description
A distilbert-base-cased model fine-tuned on the coarse labels of the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/distilbert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Or use a pipeline for end-to-end classification
from transformers import pipeline

nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/distilbert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]

classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
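The dataset can be pulled directly from the Hub with the `datasets` library; a minimal loading sketch:
```python
from datasets import load_dataset

# Load the TREC question-classification dataset (this model uses the coarse labels).
dataset = load_dataset("trec")
print(dataset["train"][0])
```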
## Training procedure
The hardware and hyperparameters used for training are summarized below.
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=500,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.97,
'eval_f1': array([0.98220641, 0.91620112, 1. , 0.97709924, 0.98678414,
0.97560976]),
'eval_loss': 0.14275787770748138,
'eval_precision': array([0.96503497, 0.96470588, 1. , 0.96969697, 0.98245614,
0.96385542]),
'eval_recall': array([1. , 0.87234043, 1. , 0.98461538, 0.99115044,
0.98765432]),
'eval_runtime': 0.9731,
'eval_samples_per_second': 513.798}
```
|