| modelId (string, lengths 5–139) | author (string, lengths 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-12 18:33:19) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 555 classes) | tags (list, lengths 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-12 18:33:14) | card (string, lengths 11–1.01M) |
---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-10_sixties-0_s366
|
jonatasgrosman
| 2022-12-11T17:33:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T17:32:55Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-10_sixties-0_s366
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
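As a usage illustration (not part of the original card), here is a minimal transcription sketch with HuggingSound's `SpeechRecognitionModel`; the audio paths are placeholders:
```python
# Sketch: transcribe 16 kHz audio files with HuggingSound.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-10_sixties-0_s366")
# Placeholder paths; replace with your own 16 kHz WAV files.
transcriptions = model.transcribe(["sample1.wav", "sample2.wav"])
print(transcriptions[0]["transcription"])
```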
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s539
|
jonatasgrosman
| 2022-12-11T17:24:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T17:24:43Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s539
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Yanjie24/t5-samsung-5e
|
Yanjie24
| 2022-12-11T17:24:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-12-11T16:52:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: t5-samsung-5e
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: train
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 43.1484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-samsung-5e
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7108
- Rouge1: 43.1484
- Rouge2: 20.4563
- Rougel: 36.6379
- Rougelsum: 40.196
- Gen Len: 16.7677
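As an illustration (not part of the original card), the model can be loaded with the standard `transformers` summarization pipeline; the dialogue below is a made-up SAMSum-style input:
```python
# Sketch: summarize a dialogue with the fine-tuned T5 model.
from transformers import pipeline

summarizer = pipeline("summarization", model="Yanjie24/t5-samsung-5e")
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=30)[0]["summary_text"])
```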
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.873 | 1.0 | 1841 | 1.7460 | 41.7428 | 19.2191 | 35.2428 | 38.8578 | 16.7286 |
| 1.8627 | 2.0 | 3682 | 1.7268 | 42.4494 | 19.8301 | 36.1459 | 39.5271 | 16.6039 |
| 1.8293 | 3.0 | 5523 | 1.7223 | 42.8908 | 19.9782 | 36.1848 | 39.8482 | 16.7164 |
| 1.8163 | 4.0 | 7364 | 1.7101 | 43.2291 | 20.3177 | 36.6418 | 40.2878 | 16.8472 |
| 1.8174 | 5.0 | 9205 | 1.7108 | 43.1484 | 20.4563 | 36.6379 | 40.196 | 16.7677 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s227
|
jonatasgrosman
| 2022-12-11T17:22:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T17:22:13Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s227
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
RajMoodley/ppo-Huggy
|
RajMoodley
| 2022-12-11T17:21:18Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-11T17:21:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: RajMoodley/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s682
|
jonatasgrosman
| 2022-12-11T17:19:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T17:19:46Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s682
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s197
|
jonatasgrosman
| 2022-12-11T17:14:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T17:14:12Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s197
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
PakanunNoa/ppo-Huggy
|
PakanunNoa
| 2022-12-11T17:02:56Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-11T17:02:48Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: PakanunNoa/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s510
|
jonatasgrosman
| 2022-12-11T17:00:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T17:00:23Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s510
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
nemanjar/ppo-LunarLander-v2
|
nemanjar
| 2022-12-11T16:57:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-07T20:28:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.80 +/- 16.52
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
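A minimal sketch of that usage, under assumptions: the checkpoint filename `ppo-LunarLander-v2.zip` follows the usual huggingface_sb3 convention and is not confirmed by the card:
```python
# Sketch: download the checkpoint from the Hub and evaluate it locally.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; check the repository's file list for the actual name.
checkpoint = load_from_hub(repo_id="nemanjar/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```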
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s273
|
jonatasgrosman
| 2022-12-11T16:56:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T16:56:35Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s273
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s693
|
jonatasgrosman
| 2022-12-11T16:53:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T16:53:08Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s693
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s362
|
jonatasgrosman
| 2022-12-11T16:50:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T16:49:52Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s362
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jjj-hf123/ppo-LunarLander-v2
|
jjj-hf123
| 2022-12-11T16:44:05Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T16:43:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.10 +/- 24.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s304
|
jonatasgrosman
| 2022-12-11T16:39:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T16:38:44Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s304
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s872
|
jonatasgrosman
| 2022-12-11T16:32:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T16:32:23Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s872
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s852
|
jonatasgrosman
| 2022-12-11T16:29:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T16:28:56Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s852
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s102
|
jonatasgrosman
| 2022-12-11T16:24:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T16:24:17Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s102
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
enyaelvis/Goody
|
enyaelvis
| 2022-12-11T16:08:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-12-11T16:08:04Z |
```
git lfs install
git clone https://huggingface.co/enyaelvis/Goody
```
|
EffyLi/bert-base-uncased-finetuned-ner
|
EffyLi
| 2022-12-11T16:08:02Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-11T16:00:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9144678979771328
- name: Recall
type: recall
value: 0.9305291419621882
- name: F1
type: f1
value: 0.9224286110341003
- name: Accuracy
type: accuracy
value: 0.9825726404753206
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0618
- Precision: 0.9145
- Recall: 0.9305
- F1: 0.9224
- Accuracy: 0.9826
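As an illustration (not in the original card), the model can be used with the `transformers` token-classification pipeline; the input sentence is lowercase to match the uncased base model:
```python
# Sketch: run NER with the fine-tuned BERT model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="EffyLi/bert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("hugging face is based in new york city"))
```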
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 220 | 0.0809 | 0.8923 | 0.9051 | 0.8987 | 0.9784 |
| No log | 2.0 | 440 | 0.0643 | 0.9108 | 0.9262 | 0.9184 | 0.9817 |
| 0.1657 | 3.0 | 660 | 0.0618 | 0.9145 | 0.9305 | 0.9224 | 0.9826 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0
- Datasets 2.7.1
- Tokenizers 0.11.0
|
ntinosmg/ppo-Huggy
|
ntinosmg
| 2022-12-11T16:02:27Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-11T16:02:16Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ntinosmg/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
kjul/ppo-LunarLander-v2
|
kjul
| 2022-12-11T16:00:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T15:52:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -166.90 +/- 40.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
parinzee/whisper-base-th-newmm
|
parinzee
| 2022-12-11T15:46:59Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"th",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-10T09:28:58Z |
---
language:
- th
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
model-index:
- name: Whisper Base Thai Newmm Tokenized - Parinthapat Pengpun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Thai Newmm Tokenized - Parinthapat Pengpun
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 11.0 and the FLEURS datasets.
It achieves the following results on the evaluation set:
- eval_loss: 0.5888
- eval_wer: 67.3381
- eval_cer: 32.4281
- eval_runtime: 6393.9778
- eval_samples_per_second: 1.709
- eval_steps_per_second: 0.214
- epoch: 1.0
- step: 2000
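As an illustration (not in the original card), a minimal transcription sketch with the `transformers` ASR pipeline; `audio.wav` is a placeholder for a local 16 kHz recording:
```python
# Sketch: transcribe Thai speech with the fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="parinzee/whisper-base-th-newmm")
print(asr("audio.wav")["text"])  # placeholder audio path
```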
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 7500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
khaled5321/PPO-LunarLander-v2
|
khaled5321
| 2022-12-11T15:25:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-10T19:54:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 296.58 +/- 16.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
tomthefreak/Mud-Forest
|
tomthefreak
| 2022-12-11T15:04:02Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-12-11T14:38:29Z |
---
license: creativeml-openrail-m
---
3D Fantasy Horror textual embedding for Stable Diffusion 2.1.
This embedding was trained on 63 images generated via SD 2.1. The generations used previous embeddings, "Macro Terror" and "Verdict Rubicon", among others. Stylistic influences and prompting terminology draw on Beksinski, Giger, and NBC's Hannibal TV show.
Training images were generated through img2img diffusion on images of near-black noise in order to bias the resulting exposures of the generations. These images were color-graded and then captioned manually prior to training.
Example generations:

_Prompt: Mud Forest, Steps: 10, Sampler: DPM++ SDE Karras, CFG scale: 3, Seed: 1244879260, Size: 768x768, Model hash: 4bdfc29c_

_Prompt: Mud Forest, Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 5, Seed: 2168042904, Size: 768x768, Model hash: 4bdfc29c_

_Prompt: Mud Forest, Steps: 10, Sampler: DPM++ SDE Karras, CFG scale: 3, Seed: 2168042915, Size: 768x768, Model hash: 4bdfc29c_
|
AI-MeisterBin/ko-sentence-bert-MeisterBin
|
AI-MeisterBin
| 2022-12-11T14:52:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"roberta",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-12-11T10:19:49Z |
This is a BERT model for building Meari (메아리), a psychological counseling chatbot.
Chatbot:
https://ai-meisterbin-project-chatbot-main-chatbot-qj3hxl.streamlit.app/
GitHub:
https://github.com/AI-MeisterBin/project_chatbot
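As an illustration (not in the original card), a minimal embedding sketch with `transformers`; mean pooling over the last hidden state is an assumption, since the card does not document the intended pooling:
```python
# Sketch: extract a sentence embedding from the Korean BERT model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AI-MeisterBin/ko-sentence-bert-MeisterBin")
model = AutoModel.from_pretrained("AI-MeisterBin/ko-sentence-bert-MeisterBin")

inputs = tokenizer("안녕하세요", return_tensors="pt")
with torch.no_grad():
    # Assumed pooling strategy: mean over token embeddings.
    embedding = model(**inputs).last_hidden_state.mean(dim=1)
print(embedding.shape)
```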
|
ScrappyCoco666/ppo-Huggy-1
|
ScrappyCoco666
| 2022-12-11T14:25:43Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-11T14:25:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ScrappyCoco666/ppo-Huggy-1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sohm/ppo-LunarLander-v2
|
sohm
| 2022-12-11T14:04:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-10T22:54:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.39 +/- 18.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
polejowska/convnext-tiny-224-eurosat
|
polejowska
| 2022-12-11T14:00:13Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-11T13:48:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9537037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-eurosat
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3153
- Accuracy: 0.9537
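As an illustration (not in the original card), the model can be used with the `transformers` image-classification pipeline; the image path is a placeholder:
```python
# Sketch: classify a satellite image with the fine-tuned ConvNeXt model.
from transformers import pipeline

classifier = pipeline("image-classification", model="polejowska/convnext-tiny-224-eurosat")
print(classifier("satellite_image.png"))  # placeholder image path
```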
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.863 | 0.98 | 33 | 1.5775 | 0.7619 |
| 1.039 | 1.98 | 66 | 0.8142 | 0.9008 |
| 0.5825 | 2.98 | 99 | 0.4442 | 0.9339 |
| 0.3228 | 3.98 | 132 | 0.3153 | 0.9537 |
| 0.2641 | 4.98 | 165 | 0.2868 | 0.9524 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
paulkm/autotrain-lottery_v2-2420075389
|
paulkm
| 2022-12-11T13:36:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"zh",
"dataset:paulkm/autotrain-data-lottery_v2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T13:31:07Z |
---
tags:
- autotrain
- text-classification
language:
- zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- paulkm/autotrain-data-lottery_v2
co2_eq_emissions:
emissions: 0.06047934032845949
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2420075389
- CO2 Emissions (in grams): 0.0605
## Validation Metrics
- Loss: 0.122
- Accuracy: 0.965
- Precision: 0.976
- Recall: 0.946
- AUC: 0.988
- F1: 0.961
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/paulkm/autotrain-lottery_v2-2420075389
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("paulkm/autotrain-lottery_v2-2420075389", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("paulkm/autotrain-lottery_v2-2420075389", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
gyronee/ppo-LunarLander-V2
|
gyronee
| 2022-12-11T13:30:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T13:30:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.84 +/- 14.15
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Eilons/ppo-LunarLander-v2
|
Eilons
| 2022-12-11T12:07:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T12:06:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.38 +/- 22.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
janzw/ppo-lunar-lander-v2_r5
|
janzw
| 2022-12-11T12:03:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T12:03:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.49 +/- 16.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ahmetfirat/ppo-LunarLander-v2
|
ahmetfirat
| 2022-12-11T12:02:27Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T11:30:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.93 +/- 12.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sanchit-gandhi/whisper-small-sl-1k-steps
|
sanchit-gandhi
| 2022-12-11T11:22:31Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"sl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T10:15:40Z |
---
language:
- sl
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Slovenian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 sl
type: mozilla-foundation/common_voice_11_0
config: sl
split: test
args: sl
metrics:
- name: Wer
type: wer
value: 26.588921282798832
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Slovenian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 sl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4625
- Wer: 26.5889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0027 | 13.01 | 1000 | 0.4625 | 26.5889 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221210+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
harryrudolph/ppo-Huggy
|
harryrudolph
| 2022-12-11T11:07:00Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-11T11:06:45Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: harryrudolph/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
vantezzen/pankocat
|
vantezzen
| 2022-12-11T10:55:24Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-11T10:44:50Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Pnkct1 Dreambooth model trained by vantezzen with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
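A minimal local inference sketch (an assumption, not from the original card); `pnkct1` is taken from the card title as the trained concept token:
```python
# Sketch: generate an image from the Dreambooth concept with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("vantezzen/pankocat", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "pnkct1" is assumed to be the concept token, based on the card title.
image = pipe("a photo of pnkct1 cat").images[0]
image.save("pankocat.png")
```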
Sample pictures of this concept:
|
polejowska/convnext-tiny-224-finetuned-eurosat-vitconfig-test-1
|
polejowska
| 2022-12-11T10:12:45Z | 28 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-11T09:59:58Z |
---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: convnext-tiny-224-finetuned-eurosat-vitconfig-test-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-vitconfig-test-1
This model is a fine-tuned version of [](https://huggingface.co/) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Alan1999/ppo-LunarLander-v2
|
Alan1999
| 2022-12-11T09:24:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T09:23:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.83 +/- 15.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bnriiitb/whisper-small-te
|
bnriiitb
| 2022-12-11T09:11:11Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"te",
"dataset:Chai_Bisket_Stories_16-08-2021_14-17",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-21T19:28:59Z |
---
language:
- te
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- Chai_Bisket_Stories_16-08-2021_14-17
metrics:
- wer
model-index:
- name: Whisper Small Telugu - Naga Budigam
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Chai_Bisket_Stories_16-08-2021_14-17
type: Chai_Bisket_Stories_16-08-2021_14-17
config: None
split: None
args: 'config: te, split: test'
metrics:
- name: Wer
type: wer
value: 77.48711850971065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Telugu - Naga Budigam
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chai_Bisket_Stories_16-08-2021_14-17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7063
- Wer: 77.4871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2933 | 2.62 | 500 | 0.3849 | 86.6429 |
| 0.0692 | 5.24 | 1000 | 0.3943 | 82.7190 |
| 0.0251 | 7.85 | 1500 | 0.4720 | 82.4415 |
| 0.0098 | 10.47 | 2000 | 0.5359 | 81.6092 |
| 0.0061 | 13.09 | 2500 | 0.5868 | 75.9413 |
| 0.0025 | 15.71 | 3000 | 0.6235 | 76.6944 |
| 0.0009 | 18.32 | 3500 | 0.6634 | 78.3987 |
| 0.0005 | 20.94 | 4000 | 0.6776 | 77.1700 |
| 0.0002 | 23.56 | 4500 | 0.6995 | 78.2798 |
| 0.0001 | 26.18 | 5000 | 0.7063 | 77.4871 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SerdarHelli/SDF-StyleGAN-3D
|
SerdarHelli
| 2022-12-11T09:01:38Z | 0 | 4 | null |
[
"Shape modeling",
"Volumetric models",
"dataset:shapenet",
"arxiv:2206.12055",
"license:other",
"region:us"
] | null | 2022-12-08T07:19:24Z |
---
license: other
tags:
- Shape modeling
- Volumetric models
datasets:
- shapenet
---
### Model Description
- SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation
- Zheng, Xin-Yang and Liu, Yang and Wang, Peng-Shuai and Tong, Xin, 2022
SDF-StyleGAN is a deep learning model for 3D shape generation based on signed distance fields (SDFs) and built on StyleGAN2. The goal of this approach is to minimize the visual and geometric differences between the generated shapes and a collection of existing shapes.
### Documents
- [GitHub Repo](https://github.com/Zhengxinyang/SDF-StyleGAN)
- [Paper - SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation](https://arxiv.org/pdf/2206.12055.pdf)
### Datasets
ShapeNet is a comprehensive 3D shape dataset created for research in computer graphics, computer vision, robotics, and related disciplines.
- [Official Dataset of ShapeNet](https://shapenet.org/)
- [Author's data preparation script](https://github.com/Zhengxinyang/SDF-StyleGAN)
- [Author's training data](https://pan.baidu.com/s/1nVS7wlcOz62nYBgjp_M8Yg?pwd=oj1b)
### How to use
Training snippets are published under the official GitHub repository above.
### BibTeX Entry and Citation Info
```
@inproceedings{zheng2022sdfstylegan,
title = {SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation},
author = {Zheng, Xin-Yang and Liu, Yang and Wang, Peng-Shuai and Tong, Xin},
booktitle = {Comput. Graph. Forum (SGP)},
year = {2022},
}
```
|
CarpetCleaningofFriscoTX/CarpetCleaningofFriscoTX
|
CarpetCleaningofFriscoTX
| 2022-12-11T08:54:26Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:53:52Z |
---
license: other
---
Carpet Cleaning of Frisco TX
http://carpetcleaningoffrisco.com/
972-674-8941
Our truck-mounted carpet cleaning services in Frisco, TX are exactly what you need when your carpets require a heavy-duty cleaning. Our mobile technicians' trucks are outfitted with extra equipment that lets them perform an extremely powerful deep sanitization of your flooring. Pet stain and odor removal is easily handled when you have Carpet Cleaning of Frisco TX on your side. Don't stress over a little mess your puppies made. Our cleaners will make quick work of it and remove the stain and smell in no time. All you have to do is make a quick call.
|
CarpetCleaningLewisvilleTX/UpholsteryCleaningLewisvilleTX
|
CarpetCleaningLewisvilleTX
| 2022-12-11T08:51:06Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:50:33Z |
---
license: other
---
Upholstery Cleaning Lewisville TX
https://carpetcleaninglewisville.com/upholstery-cleaning.html
972-338-5376
We have all been in situations where someone accidentally spills wine on your couch during a family get-together or where children misbehave and throw food all over your furniture. You can't go back to these times. However, Carpet Cleaning Lewisville, TX can get rid of all of these unsightly stains and give your upholstery a new look and scent.
|
CarpetCleaningLewisvilleTX/RugCleaningLewisvilleTX
|
CarpetCleaningLewisvilleTX
| 2022-12-11T08:48:53Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:48:23Z |
---
license: other
---
Rug Cleaning Lewisville TX
https://carpetcleaninglewisville.com/rug-cleaning.html
972-338-5376
If you're looking for a rug cleaning company in Lewisville, Texas, we have the right one for you. The best rug cleaning services at the most affordable prices are available from Carpet Cleaning Lewisville, TX. Simply pick up the phone and give us a call to receive exceptional service.
|
CarpetCleaningLewisvilleTX/AirDuctCleaningLewisvilleTX
|
CarpetCleaningLewisvilleTX
| 2022-12-11T08:48:00Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:47:23Z |
---
license: other
---
Air Duct Cleaning Lewisville TX
https://carpetcleaninglewisville.com/air-duct.html
972-338-5376
To ensure that your home's air is clean, air ducts need to be cleaned on a regular basis. Because you have Carpet Cleaning Lewisville, TX, it won't cost you as much as it did before. In addition to professional and thorough cleaning, we will offer you the best deals on air duct cleaning. For more information, contact us right away.
|
CultivatorX/Chinese-Digital-Art
|
CultivatorX
| 2022-12-11T08:44:04Z | 0 | 25 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-12-11T06:48:37Z |
---
language:
- en
thumbnail: "https://s3.amazonaws.com/moonup/production/uploads/1670742434498-633a20a88f27255b6b56290b.png"
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# Chinese Digital Art Diffusion
**Trigger Words: CNDigitalArt Style**
This is a fine-tuned Stable Diffusion model trained on the **Chinese digital art** style commonly used on Chinese interactive reading (visual novel) platforms such as **Orange Light** [66rpg.com](https://66rpg.com) or the **NetEase Interactive Reading Platform** [avg.163.com](https://avg.163.com/).
_If you don't know what that is, don't worry; it's just one of those really big things in China that the majority of Westerners have no clue about._

Use the tokens **_CNDigitalArt Style_** in your prompts to test and experiment with it yourself.
**EXAMPLES:**
_These results were tested on the 2000-step model [**CNDigitalArt_2000.ckpt**](https://huggingface.co/CultivatorX/Chinese-Digital-Art/blob/main/CNDigitalArt_2000.ckpt)._
I just ran 20 batches with random (-1) seeds for each prompt; most of the results aren't that good, but there are some really good ones.
Prompt: **a portrait of Megan Fox in CNDigitalArt Style**
Negative prompt: _lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, two faces, two heads_
Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 593563256, Face restoration: GFPGAN, Size: 512x512, Model hash: 2258c119

Prompt: **a portrait of Scarlett Johansson in CNDigitalArt Style**
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, two faces, two heads
Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 4272335413, Face restoration: GFPGAN, Size: 512x512, Model hash: 2258c119
Prompt: **a portrait of Emma Watson in CNDigitalArt Style**
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, two faces, two heads
Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 3813059825, Face restoration: GFPGAN, Size: 512x512, Model hash: 2258c119

Prompt: **a portrait of Zendaya in CNDigitalArt Style**
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, two faces, two heads
Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 962052606, Face restoration: GFPGAN, Size: 512x512, Model hash: 2258c119
|
RichardsonTXCarpetCleaning/AirDuctCleaningRichardsonTX
|
RichardsonTXCarpetCleaning
| 2022-12-11T08:38:59Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:38:34Z |
---
license: other
---
Air Duct Cleaning Richardson TX
https://carpetcleaning-richardson.com/air-duct-cleaning.html
(972) 454-9815
Do you require a cleaning service from professionals with years of experience? If so, contact us right away. We have been working to improve customers' homes' climates for a long time and can also assist you. Because our equipment can reach far to remove all harmful material from your ducts, we do not leave any area unclean.
|
RichardsonTXCarpetCleaning/TileandGroutCleaningRichardsonTX
|
RichardsonTXCarpetCleaning
| 2022-12-11T08:36:55Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:36:05Z |
---
license: other
---
Tile and Grout Cleaning Richardson TX
https://carpetcleaning-richardson.com/tile-and-grout-cleaning.html
(972) 454-9815
We have a Cheap Tile Cleaning service that brightens your floor and gives your home a clean look if you've been putting off cleaning your tiles because of the cost. Carpet cleaning in Richardson, Texas, doesn't just clean carpets. We cover everything when it comes to cleaning your home, from your ducts and vents to your tile and grout.
|
RichardsonTXCarpetCleaning/UpholsteryCleaningRichardsonTX
|
RichardsonTXCarpetCleaning
| 2022-12-11T08:34:22Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:28:17Z |
---
license: other
---
Upholstery Cleaning Richardson TX
https://carpetcleaning-richardson.com/upholstery-cleaning.html
(972) 454-9815
Your furniture is among the most expensive items in your home, along with your jewelry, electronics, cars, and other possessions. It's possible that some of this furniture was passed down through generations. You want to take care of it so that future generations can continue to enjoy it. Call Richardson TX Carpet Cleaning right away if you require steam cleaning for your upholstery!
|
luigisaetta/whisper-medium-it
|
luigisaetta
| 2022-12-11T08:19:08Z | 18 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"whisper-event",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T18:00:42Z |
---
language:
- it
license: apache-2.0
tags:
- generated_from_trainer
- whisper-event
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: luigisaetta/whisper-medium-it
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 it
type: mozilla-foundation/common_voice_11_0
config: it
split: test
args: it
metrics:
- name: Wer
type: wer
value: 5.7191
---
# luigisaetta/whisper-medium-it
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1452
- Wer: 5.7191
## Model description
This model is a fine-tuning of the OpenAI Whisper Medium model, on the specified dataset.
## Intended uses & limitations
This model has been developed as part of the Hugging Face Whisper Fine Tuning sprint, December 2022.
It is meant to spread knowledge of how these models are built and can be used to develop solutions
where ASR for the Italian language is needed.
It has not been extensively tested; it is possible that accuracy will be lower on other datasets.
Please test it before using it.
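For a quick test, transcription with the `transformers` pipeline might look like this minimal sketch (the audio path is a placeholder; 16 kHz mono input is expected):
```python
from transformers import pipeline

# Load the fine-tuned Italian checkpoint
asr = pipeline("automatic-speech-recognition", model="luigisaetta/whisper-medium-it")

# "sample_it.wav" is a placeholder path to a 16 kHz mono recording
print(asr("sample_it.wav")["text"])
```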
## Training and evaluation data
Trained and tested on Mozilla Common Voice, version 11.
## Training procedure
The script **run.sh** and the Python file used for training are saved in the repository.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1216 | 0.2 | 1000 | 0.2289 | 10.0594 |
| 0.1801 | 0.4 | 2000 | 0.1851 | 7.6593 |
| 0.1763 | 0.6 | 3000 | 0.1615 | 6.5258 |
| 0.1337 | 0.8 | 4000 | 0.1506 | 6.0427 |
| 0.0742 | 1.05 | 5000 | 0.1452 | 5.7191 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
GreenCarpetCleaningGarland/GreenCarpetCleaningGarland
|
GreenCarpetCleaningGarland
| 2022-12-11T08:12:46Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:12:22Z |
---
license: other
---
Green Carpet Cleaning Garland
http://garlandcarpetcleaner.com/
(972) 256-8544
One of the methods we follow in carpet cleaning is our steam cleaning service, which relies on minimal hot water and more steam, focusing the steam so it penetrates deep into spots and stains to dissolve all of them, even the toughest ones, and removes all pollutants from your rug. Then our effective green products come in to clear away all of these elements, leaving your carpet sparkling and bright. Finally, we use our excellent drying machines, so your rug will be fully dry in no time. We have specialized carpet steam cleaners, so they know how to maintain high professionalism while protecting your rug from any damage.
|
polixonrio/whisper-small-fy-NL
|
polixonrio
| 2022-12-11T08:09:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"fy",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-10T17:27:53Z |
---
language:
- fy
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Western Frisian (Netherlands)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 fy-NL
type: mozilla-foundation/common_voice_11_0
config: fy-NL
split: test
args: fy-NL
metrics:
- name: Wer
type: wer
value: 22.29686271707282
---
# Whisper Small Western Frisian (Netherlands)
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 fy-NL dataset.
This is an attempt at cross-lingual transfer from Dutch to Frisian, since Whisper doesn't support Frisian.
It achieves the following results on the evaluation set:
- Loss: 0.5443
- Wer: 22.2969
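Because Whisper has no Frisian language token, one way to exercise this cross-lingual setup is to force the Dutch decoder prompt; below is a sketch under that assumption (the silent waveform is only a placeholder input):
```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("polixonrio/whisper-small-fy-NL")
model = WhisperForConditionalGeneration.from_pretrained("polixonrio/whisper-small-fy-NL")

# Whisper has no Frisian token, so reuse the Dutch one (assumption based on this card)
forced_ids = processor.get_decoder_prompt_ids(language="dutch", task="transcribe")

# Placeholder input: one second of silence at 16 kHz; replace with real audio
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```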
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0067 | 10.01 | 1000 | 0.4810 | 23.0115 |
| 0.0008 | 21.0 | 2000 | 0.5200 | 22.3576 |
| 0.0004 | 31.01 | 3000 | 0.5443 | 22.2969 |
| 0.0003 | 42.0 | 4000 | 0.5610 | 22.3719 |
| 0.0002 | 52.01 | 5000 | 0.5674 | 22.3898 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
CarpetCleaningMesquiteTX/AirDuctCleaningMesquiteTX
|
CarpetCleaningMesquiteTX
| 2022-12-11T08:00:43Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:00:17Z |
---
license: other
---
Air Duct Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/air-duct-cleaning.html
(469) 213-8132
Cleaning the air ducts is very important. We ensure that your carpets, tile flooring, and rugs are kept clean and in good condition. We can deal with a variety of heater and air conditioner cleaning issues in addition to cleaning air ducts. Your air ducts can be cleaned quickly and inexpensively of dust and debris. No matter how big or small the job is, our team of certified and professionally trained technicians will complete it correctly.
|
CarpetCleaningMesquiteTX/RugCleaningMesquiteTX
|
CarpetCleaningMesquiteTX
| 2022-12-11T07:58:08Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:57:46Z |
---
license: other
---
Rug Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/rug-cleaning.html
(469) 213-8132
Carpet and area rug manufacturers recommend the hot water extraction system that Our Rug Cleaning offers for free. Carpet Cleaning Mesquite TX can also clean some area rugs at a lower temperature, depending on their fiber content; these rugs need to be cleaned with cool-water routines. Using a highly controlled cleaning process that leaves no residue, we remove all dirt, sand, grit, and grime from area rugs.
|
CarpetCleaningMesquiteTX/CarpetCleaningMesquiteTX
|
CarpetCleaningMesquiteTX
| 2022-12-11T07:57:15Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:56:56Z |
---
license: other
---
Carpet Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/
(469) 213-8132
The best way to get rid of these bugs is expert steam cleaning with a truck mount. Carpet Cleaning Mesquite TX will give you the complete cleaning service that you expect from truly capable operators. Our cleaners guarantee to always provide complete, effective, high-grade carpet service and cleaning all over Mesquite TX and its surrounding area. We have excellent cleaning consultants who are available throughout the day for cleaning services nearby.
|
CarpetCleaningMckinneyTX/CarpetCleaningMckinneyTX
|
CarpetCleaningMckinneyTX
| 2022-12-11T07:53:59Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:53:36Z |
---
license: other
---
Carpet Cleaning Mckinney TX
https://carpetcleaningmckinneytx.com/
(469) 702-1202
People look for first-rate services to keep their homes tidy and up to date. We are confident in what we do because we combine our years of experience with modern equipment, producing the ideal outcome. For example, our steam carpet cleaning technique guarantees that the oil stains on your rug are permanently washed out with little water. Your rug will have minimal drying time and be back on the floor sooner than expected.
|
FortWorthCarpetCleaning/UpholsteryCleaningFortWorthTX
|
FortWorthCarpetCleaning
| 2022-12-11T07:51:04Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:50:42Z |
---
license: other
---
Upholstery Cleaning Fort Worth TX
https://txfortworthcarpetcleaning.com/upholstery-cleaning.html
(817) 523-1237
When you sit on your upholstery, you inhale allergens, dirt, and dust that are trapped in its fibers. Therefore, if you want to keep your upholstery safe, especially if you have children or pets, you need to hire experts in upholstery cleaning in Fort Worth, Texas. We have the best upholstery cleaners, who will come to your house and do an excellent job. Understanding the various fibers of your furniture is important to our technicians because it helps them choose effective and safe cleaning methods. When you hire us, we promise to give your furniture careful attention, and we won't start cleaning your upholstery until we make sure the products we use are safe for the kind of fabric it is made of.
|
FortWorthCarpetCleaning/RugCleaningFortWorthTX
|
FortWorthCarpetCleaning
| 2022-12-11T07:49:51Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:49:30Z |
---
license: other
---
Rug Cleaning Fort Worth TX
https://txfortworthcarpetcleaning.com/rug-cleaning.html
(817) 523-1237
Carpet cleaning Fort Worth TX is nearby and able to provide you with professional cleaning services if you require an efficient and high-quality rug cleaning service. Simply contact our professionals, and your rug will regain its vibrant color and stunning appearance. We use products and equipment that enable us to provide you with the best results, such as rug shampooing, which restores your rug's beautiful appearance and leaves an amazing scent throughout your home. Call us for $20 off these services if you need them.
|
FortWorthCarpetCleaning/CarpetCleaningFortWorthTX
|
FortWorthCarpetCleaning
| 2022-12-11T07:49:00Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:48:41Z |
---
license: other
---
Carpet Cleaning Fort Worth TX
https://txfortworthcarpetcleaning.com/carpet-cleaning.html
(817) 523-1237
Carpet cleaning Fort Worth TX always focuses on making your home look beautiful, particularly when that beauty depends on the appearance of your carpets, furniture, rugs, tiles, and ducts. We are the business that works to make your life at home better; with our help, you can have a healthy and beautiful home. Call us if your current carpet has numerous stains and odors, you can no longer use it because of its poor appearance, and you are considering purchasing a new one.
|
CarpetCleaningArlingtonTX/CarpetCleaningArlingtonTX
|
CarpetCleaningArlingtonTX
| 2022-12-11T07:39:36Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:39:07Z |
---
license: other
---
Carpet Cleaning Arlington TX
https://carpetcleaning-arlington-tx.com/
(817) 381-5072
At Rug Cleaning Plano in TX we also have a truck-mounted carpet cleaning system. These mobile vehicles carry a powerhouse of equipment; they always have it on board and can finish any job properly. Whether it is a small home, a large house, or a huge industrial complex, the task is never too big or too tough.
|
CarpetCleaningPlanoTX/AirVentCleaningPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:34:27Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:34:07Z |
---
license: other
---
Air Vent Cleaning Plano TX
https://carpetcleaningplanotx.com/air-vent-cleaning.html
(469) 444-1903
Cleaning air vents need not be difficult. Carpet Cleaning Plano in Texas is a team of experienced air vent cleaners who know how to do the job right. Our team of technicians is made up of certified professionals, who will arrive in our cutting-edge mobile cleaning units.
|
CarpetCleaningPlanoTX/AirDuctCleaningPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:33:31Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:33:09Z |
---
license: other
---
Air Duct Cleaning Plano TX
https://carpetcleaningplanotx.com/air-duct-cleaning.html
(469) 444-1903
Studies and other health research have long shown that airborne irritants are bad for your health. Mold, pollen, and dust are examples. These seriously impact your ability to breathe and bring on allergies and other respiratory issues. Occasionally they can even trigger attacks that are fatal. What is the most important way to keep the air in your home or place of business clean? Air duct cleaning.
|
CarpetCleaningPlanoTX/UpholsteryCleaningPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:31:41Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:31:20Z |
---
license: other
---
Upholstery Cleaning Plano TX
https://carpetcleaningplanotx.com/upholstery-cleaning.html
(469) 444-1903
We remove stains from sofas. When you have a nice, comfortable sofa in your home, spills are common. On that new couch, game day weekends can be difficult. When they are excited about who is winning on the playing field, friends, family, and pets can cause havoc. After a party, upholstery cleaning is not a problem. We can arrive with our mobile unit, which simplifies the task.
|
CarpetCleaningPlanoTX/RugCleaningPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:30:50Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:30:22Z |
---
license: other
---
Rug Cleaning Plano TX
https://carpetcleaningplanotx.com/rug-cleaning.html
(469) 444-1903
Don't put your carpets, rugs, and other cleaning needs at risk. In particular, avoid immersing them in hazardous and wasteful chemical processes. At carpet cleaning Plano, Texas, we use cutting-edge green rug cleaning services that no one else in Texas can match. Rug cleaning is safe and good for the environment thanks to our cutting-edge washing technology, which will not harm your property or put your friends, family, or pets in danger.
|
muhtasham/medium-mlm-tweet-target-tweet
|
muhtasham
| 2022-12-11T07:30:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T07:25:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: medium-mlm-tweet-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7593582887700535
- name: F1
type: f1
value: 0.7637254221785755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-tweet-target-tweet
This model is a fine-tuned version of [muhtasham/medium-mlm-tweet](https://huggingface.co/muhtasham/medium-mlm-tweet) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9066
- Accuracy: 0.7594
- F1: 0.7637
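A minimal sketch of querying the classifier (the example tweet is arbitrary; label names come from the tweet_eval emotion config):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub
classifier = pipeline("text-classification", model="muhtasham/medium-mlm-tweet-target-tweet")
print(classifier("I can't believe we finally won the match!"))
```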
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4702 | 4.9 | 500 | 0.8711 | 0.7540 | 0.7532 |
| 0.0629 | 9.8 | 1000 | 1.2918 | 0.7701 | 0.7668 |
| 0.0227 | 14.71 | 1500 | 1.4801 | 0.7727 | 0.7696 |
| 0.0181 | 19.61 | 2000 | 1.5118 | 0.7888 | 0.7870 |
| 0.0114 | 24.51 | 2500 | 1.6747 | 0.7754 | 0.7745 |
| 0.0141 | 29.41 | 3000 | 1.8765 | 0.7674 | 0.7628 |
| 0.0177 | 34.31 | 3500 | 1.9066 | 0.7594 | 0.7637 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CarpetCleaningPlanoTX/CarpetStainRemovalPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:29:56Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:29:29Z |
---
license: other
---
Carpet Stain Removal Plano TX
https://carpetcleaningplanotx.com/carpet-stain-removal.html
(469) 444-1903
Carpet Cleaning Plano in Texas is the company of choice for the majority of customers when it comes to stain removal. We have the best-trained staff and professional technology. We will get rid of even the worst stain, whether it is on your upholstery, fabrics, curtains, or carpets. Try us out today, and you'll see why the majority of people prefer us to everyone else.
|
CandyCarpetCleaningIrving/DryerVentCleaningIrvingTX
|
CandyCarpetCleaningIrving
| 2022-12-11T07:22:36Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:21:49Z |
---
license: other
---
Dryer Vent Cleaning Irving TX
(214) 744-3341
https://carpetcleaninginirving.com/dryer-vent.html
We can assist you if you need Lint Buildup Removal in Irving, Texas. Our cleaning technicians have a great deal of knowledge and experience to help you. Your dryer won't dry your clothes as well as it used to when a lot of this material has built up inside it.
|
CandyCarpetCleaningIrving/AirDuctCleaningIrvingTX
|
CandyCarpetCleaningIrving
| 2022-12-11T07:19:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-12-11T07:18:37Z |
Air Duct Cleaning Irving TX
https://carpetcleaninginirving.com/air-duct.html
(214) 744-3341
We offer a service for cleaning your home's ducts that gets rid of harmful substances that could make you sick. It's likely that you've been sneezing a lot at home when the air conditioner or heater is on. If that is the case, your ducts most likely contain mold, pollen, or dirt.
|
CandyCarpetCleaningIrving/TileGroutCleaningIrvingTX
|
CandyCarpetCleaningIrving
| 2022-12-11T07:18:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-12-11T07:17:20Z |
Tile Grout Cleaning Irving TX
https://carpetcleaninginirving.com/tile-grout.html
(214) 744-3341
We are available to assist you at any time if you require tile and grout cleaners in Irving, Texas, who treat this occupation as a career and invest significantly in understanding the most effective ways to serve their customers. It's possible that the household cleaners you use are actually making your tile dirty. This includes your mop, which occasionally mixes grease, spills, and dirt into the grout.
|
muhtasham/base-mlm-imdb-target-tweet
|
muhtasham
| 2022-12-11T07:16:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T07:11:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: base-mlm-imdb-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7754010695187166
- name: F1
type: f1
value: 0.77889743305892
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/base-mlm-imdb](https://huggingface.co/muhtasham/base-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7516
- Accuracy: 0.7754
- F1: 0.7789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
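As a rough illustration, the hyperparameters above map onto a `TrainingArguments` object like this (a sketch, not the exact training script used):
```python
from transformers import TrainingArguments

# Mirrors the listed hyperparameters; output_dir is an arbitrary choice
args = TrainingArguments(
    output_dir="base-mlm-imdb-target-tweet",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
)
```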
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3412 | 4.9 | 500 | 1.0525 | 0.7888 | 0.7891 |
| 0.0365 | 9.8 | 1000 | 1.4590 | 0.7540 | 0.7572 |
| 0.0127 | 14.71 | 1500 | 1.4788 | 0.7888 | 0.7890 |
| 0.0137 | 19.61 | 2000 | 1.7516 | 0.7754 | 0.7789 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CandyCarpetCleaningIrving/RugCleaningIrvingTX
|
CandyCarpetCleaningIrving
| 2022-12-11T07:15:12Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:12:39Z |
---
license: other
---
Rug Cleaning Irving TX
https://carpetcleaninginirving.com/rug.html
(214) 744-3341
We can help you with Area Rug Cleaning in Irving, Texas, if you need it. We have developed superior cleaning techniques that can bring out the beauty of this home accent, especially if it hasn't been cleaned in a while.
|
EmadSalem/SpeakToChatGPT
|
EmadSalem
| 2022-12-11T07:08:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-12-10T13:09:48Z |
---
title: SpeakToChatGPT
emoji: 📊
colorFrom: blue
colorTo: blue
sdk: gradio
sdk_version: 3.12.0
app_file: app.py
pinned: false
duplicated_from: yizhangliu/chatGPT
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Sanjay-Papaiahgari/ppo-Huggy
|
Sanjay-Papaiahgari
| 2022-12-11T07:06:57Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-11T07:06:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Sanjay-Papaiahgari/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CleaningCarpetDallas/DryerVentCleaningDallasTX
|
CleaningCarpetDallas
| 2022-12-11T07:04:43Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:04:23Z |
---
license: other
---
http://cleaningcarpetdallas.com/dryer-vent-cleaning.html
(972) 643-8799
Another skill that our Dallas technicians have mastered is cleaning dryer vents. Do you feel that your drying machine is performing below its normal, typical level? Please let us know if you think there may be clogged ducts and vents so we can assist you.
|
muhtasham/mini-mlm-imdb-target-tweet
|
muhtasham
| 2022-12-11T07:03:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T07:00:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: mini-mlm-imdb-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.767379679144385
- name: F1
type: f1
value: 0.7668830990510893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/mini-mlm-imdb](https://huggingface.co/muhtasham/mini-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3042
- Accuracy: 0.7674
- F1: 0.7669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8543 | 4.9 | 500 | 0.6920 | 0.7674 | 0.7571 |
| 0.3797 | 9.8 | 1000 | 0.7231 | 0.7727 | 0.7709 |
| 0.1668 | 14.71 | 1500 | 0.9171 | 0.7594 | 0.7583 |
| 0.068 | 19.61 | 2000 | 1.1558 | 0.7647 | 0.7642 |
| 0.0409 | 24.51 | 2500 | 1.3042 | 0.7674 | 0.7669 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Shiry/Whisper_hebrew_medium
|
Shiry
| 2022-12-11T07:00:26Z | 35 | 1 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"he",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-03T15:11:25Z |
---
language:
- he
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Medium Hebrew
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs he_il
type: google/fleurs
config: he_il
split: test
args: he_il
metrics:
- name: Wer
type: wer
value: 34
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Hebrew
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the google/fleurs he_il dataset.
It achieves the following results on the evaluation set:
- Wer: 34
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
CleaningCarpetDallas/UpholsteryCleaningDallasTX
|
CleaningCarpetDallas
| 2022-12-11T06:58:59Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T06:58:36Z |
---
license: other
---
http://cleaningcarpetdallas.com/upholstery-cleaning.html
(972) 643-8799
Spots and stains on your microfiber sofa, couch, or loveseat can seriously ruin the appearance of your living room. You won't stand out with your gourmet and designer rugs, grandfather clocks, and artwork; instead, you'll give your friends something to laugh about.
|
muhtasham/base-vanilla-target-tweet
|
muhtasham
| 2022-12-11T06:56:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T06:46:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: base-vanilla-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7780748663101604
- name: F1
type: f1
value: 0.7772664883136655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8380
- Accuracy: 0.7781
- F1: 0.7773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3831 | 4.9 | 500 | 0.9800 | 0.7807 | 0.7785 |
| 0.0414 | 9.8 | 1000 | 1.4175 | 0.7754 | 0.7765 |
| 0.015 | 14.71 | 1500 | 1.6411 | 0.7754 | 0.7708 |
| 0.0166 | 19.61 | 2000 | 1.5930 | 0.7941 | 0.7938 |
| 0.0175 | 24.51 | 2500 | 1.3934 | 0.7888 | 0.7852 |
| 0.0191 | 29.41 | 3000 | 1.9407 | 0.7647 | 0.7658 |
| 0.0137 | 34.31 | 3500 | 1.8380 | 0.7781 | 0.7773 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/small-vanilla-target-tweet
|
muhtasham
| 2022-12-11T06:40:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T06:37:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: small-vanilla-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7540106951871658
- name: F1
type: f1
value: 0.7525253900501888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8718
- Accuracy: 0.7540
- F1: 0.7525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5858 | 4.9 | 500 | 0.8189 | 0.7380 | 0.7364 |
| 0.1039 | 9.8 | 1000 | 1.1965 | 0.7594 | 0.7568 |
| 0.0264 | 14.71 | 1500 | 1.5387 | 0.7433 | 0.7460 |
| 0.0142 | 19.61 | 2000 | 1.6758 | 0.7620 | 0.7551 |
| 0.0113 | 24.51 | 2500 | 1.8718 | 0.7540 | 0.7525 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/mini-vanilla-target-tweet
|
muhtasham
| 2022-12-11T06:37:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T06:33:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: mini-vanilla-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7540106951871658
- name: F1
type: f1
value: 0.7568814825340653
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5603
- Accuracy: 0.7540
- F1: 0.7569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9285 | 4.9 | 500 | 0.7493 | 0.7273 | 0.7207 |
| 0.4468 | 9.8 | 1000 | 0.7630 | 0.7460 | 0.7437 |
| 0.2194 | 14.71 | 1500 | 0.8997 | 0.7406 | 0.7455 |
| 0.1062 | 19.61 | 2000 | 1.0822 | 0.7433 | 0.7435 |
| 0.0568 | 24.51 | 2500 | 1.2225 | 0.7620 | 0.7622 |
| 0.0439 | 29.41 | 3000 | 1.3475 | 0.7513 | 0.7527 |
| 0.0304 | 34.31 | 3500 | 1.4999 | 0.7433 | 0.7399 |
| 0.0247 | 39.22 | 4000 | 1.5603 | 0.7540 | 0.7569 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/medium-mlm-tweet-target-imdb
|
muhtasham
| 2022-12-11T05:41:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T05:08:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: medium-mlm-tweet-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93632
- name: F1
type: f1
value: 0.9671128739051397
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-tweet-target-imdb
This model is a fine-tuned version of [muhtasham/medium-mlm-tweet](https://huggingface.co/muhtasham/medium-mlm-tweet) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3691
- Accuracy: 0.9363
- F1: 0.9671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3135 | 0.64 | 500 | 0.2323 | 0.9056 | 0.9505 |
| 0.2094 | 1.28 | 1000 | 0.2166 | 0.9187 | 0.9576 |
| 0.1622 | 1.92 | 1500 | 0.2011 | 0.9206 | 0.9587 |
| 0.112 | 2.56 | 2000 | 0.3647 | 0.9032 | 0.9491 |
| 0.093 | 3.2 | 2500 | 0.5445 | 0.8788 | 0.9355 |
| 0.0692 | 3.84 | 3000 | 0.2071 | 0.9452 | 0.9718 |
| 0.0545 | 4.48 | 3500 | 0.2308 | 0.9548 | 0.9769 |
| 0.0482 | 5.12 | 4000 | 0.3297 | 0.9373 | 0.9676 |
| 0.0464 | 5.75 | 4500 | 0.3698 | 0.926 | 0.9616 |
| 0.0308 | 6.39 | 5000 | 0.3691 | 0.9363 | 0.9671 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
aungmyatv8/ppo-LunarLander-v2
|
aungmyatv8
| 2022-12-11T05:23:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T05:04:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.93 +/- 21.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption about how the model was saved):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="aungmyatv8/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
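To sanity-check the loaded policy, you could evaluate it over a few episodes (a sketch; `model` comes from the block above):
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out the policy for 10 deterministic episodes and report the mean return
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```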
|
sagawa/ZINC-t5-v2
|
sagawa
| 2022-12-11T05:11:31Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"dataset:sagawa/ZINC-canonicalized",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-06T01:33:39Z |
---
license: mit
datasets:
- sagawa/ZINC-canonicalized
metrics:
- accuracy
model-index:
- name: ZINC-deberta
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: sagawa/ZINC-canonicalized
type: sagawa/ZINC-canonicalized
metrics:
- name: Accuracy
type: accuracy
value: 0.9475839734077454
---
# ZINC-t5
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/ZINC-canonicalized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1228
- Accuracy: 0.9476
## Model description
We trained T5 on SMILES from ZINC using masked-language modeling (MLM). Compared to ZINC-t5, ZINC-t5-v2 uses a character-level tokenizer; it was likewise trained on ZINC.
## Intended uses & limitations
This model can be used to predict molecules' properties, reactions, or interactions with proteins by changing the fine-tuning setup.
As an example, we fine-tuned this model to predict products. The model is [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can use the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5).
Using its encoder, we trained a regression model to predict reaction yields. You can use that demo [here](https://huggingface.co/spaces/sagawa/predictyield-t5).
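As a quick smoke test of the MLM head, one could fill a masked span with a T5 sentinel token; the sketch below assumes the character-level tokenizer keeps T5's sentinel tokens, and the SMILES string is arbitrary:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("sagawa/ZINC-t5-v2")
model = T5ForConditionalGeneration.from_pretrained("sagawa/ZINC-t5-v2")

# Mask part of an arbitrary SMILES string with a sentinel token
smiles = "CC(=O)O<extra_id_0>c1ccccc1"
inputs = tokenizer(smiles, return_tensors="pt")
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```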
## Training and evaluation data
We downloaded [ZINC data](https://drive.google.com/drive/folders/1lSPCqh31zxTVEhuiPde7W3rZG8kPgp-z) and canonicalized them using RDKit. Then, we dropped duplicates. The total number of data is 22992522, and they were randomly split into train:validation=10:1.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-03
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Step | Accuracy | Validation Loss |
|:-------------:|:------:|:--------:|:---------------:|
| 0.2090 | 100000 | 0.9264 | 0.1860 |
| 0.1628 | 200000 | 0.9349 | 0.1613 |
| 0.1632 | 300000 | 0.9395 | 0.1467 |
| 0.1451 | 400000 | 0.9435 | 0.1345 |
| 0.1311 | 500000 | 0.9465 | 0.1261 |
|
muhtasham/base-mlm-imdb-target-imdb
|
muhtasham
| 2022-12-11T04:41:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T04:02:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: base-mlm-imdb-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.89184
- name: F1
type: f1
value: 0.942828146143437
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-mlm-imdb-target-imdb
This model is a fine-tuned version of [muhtasham/base-mlm-imdb](https://huggingface.co/muhtasham/base-mlm-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4659
- Accuracy: 0.8918
- F1: 0.9428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2453 | 0.64 | 500 | 0.1892 | 0.9334 | 0.9656 |
| 0.1764 | 1.28 | 1000 | 0.1267 | 0.9581 | 0.9786 |
| 0.117 | 1.92 | 1500 | 0.1926 | 0.9290 | 0.9632 |
| 0.0727 | 2.56 | 2000 | 0.3109 | 0.9182 | 0.9574 |
| 0.0665 | 3.2 | 2500 | 0.4659 | 0.8918 | 0.9428 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/medium-mlm-imdb-target-imdb
|
muhtasham
| 2022-12-11T04:00:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T03:44:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: medium-mlm-imdb-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9064
- name: F1
type: f1
value: 0.9509022240872849
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-imdb-target-imdb
This model is a fine-tuned version of [muhtasham/medium-mlm-imdb](https://huggingface.co/muhtasham/medium-mlm-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3883
- Accuracy: 0.9064
- F1: 0.9509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2923 | 0.64 | 500 | 0.1860 | 0.9310 | 0.9642 |
| 0.2049 | 1.28 | 1000 | 0.0830 | 0.9708 | 0.9852 |
| 0.1569 | 1.92 | 1500 | 0.1258 | 0.9547 | 0.9768 |
| 0.1067 | 2.56 | 2000 | 0.5306 | 0.8643 | 0.9272 |
| 0.0837 | 3.2 | 2500 | 0.3883 | 0.9064 | 0.9509 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
redevaaa/fin3
|
redevaaa
| 2022-12-11T03:59:45Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:fin",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-11T03:32:16Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- fin
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: fin3
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: fin
type: fin
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.944
- name: Recall
type: recall
value: 0.9402390438247012
- name: F1
type: f1
value: 0.9421157684630739
- name: Accuracy
type: accuracy
value: 0.9921209540034072
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin3
This model is a fine-tuned version of [nlpaueb/sec-bert-base](https://huggingface.co/nlpaueb/sec-bert-base) on the fin dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0748
- Precision: 0.944
- Recall: 0.9402
- F1: 0.9421
- Accuracy: 0.9921
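A minimal sketch of running the tagger through the token-classification pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

# Aggregate sub-word predictions into whole-entity spans
ner = pipeline("token-classification", model="redevaaa/fin3", aggregation_strategy="simple")
print(ner("Shares of Acme Corp rose 5% after the earnings call."))
```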
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 129 | 0.0669 | 0.8821 | 0.9243 | 0.9027 | 0.9883 |
| No log | 2.0 | 258 | 0.0568 | 0.9289 | 0.9363 | 0.9325 | 0.9913 |
| No log | 3.0 | 387 | 0.0565 | 0.9141 | 0.9323 | 0.9231 | 0.9904 |
| 0.0556 | 4.0 | 516 | 0.0617 | 0.9237 | 0.9163 | 0.92 | 0.9904 |
| 0.0556 | 5.0 | 645 | 0.0658 | 0.9243 | 0.9243 | 0.9243 | 0.9904 |
| 0.0556 | 6.0 | 774 | 0.0695 | 0.944 | 0.9402 | 0.9421 | 0.9921 |
| 0.0556 | 7.0 | 903 | 0.0731 | 0.932 | 0.9283 | 0.9301 | 0.9917 |
| 0.0016 | 8.0 | 1032 | 0.0750 | 0.9283 | 0.9283 | 0.9283 | 0.9917 |
| 0.0016 | 9.0 | 1161 | 0.0737 | 0.944 | 0.9402 | 0.9421 | 0.9921 |
| 0.0016 | 10.0 | 1290 | 0.0748 | 0.944 | 0.9402 | 0.9421 | 0.9921 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/small-mlm-imdb-target-imdb
|
muhtasham
| 2022-12-11T03:43:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T03:31:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: small-mlm-imdb-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.91736
- name: F1
type: f1
value: 0.9568990695539701
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-imdb-target-imdb
This model is a fine-tuned version of [muhtasham/small-mlm-imdb](https://huggingface.co/muhtasham/small-mlm-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3145
- Accuracy: 0.9174
- F1: 0.9569
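A sketch of scoring a review directly with the tokenizer and model (the label order is an assumption; check `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("muhtasham/small-mlm-imdb-target-imdb")
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/small-mlm-imdb-target-imdb")

# Tokenize an arbitrary review and convert logits to class probabilities
inputs = tokenizer("A quiet, moving film that rewards patience.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # see model.config.id2label for the label order
```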
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.315 | 0.64 | 500 | 0.1711 | 0.9310 | 0.9642 |
| 0.2248 | 1.28 | 1000 | 0.1385 | 0.9471 | 0.9728 |
| 0.1824 | 1.92 | 1500 | 0.1044 | 0.9610 | 0.9801 |
| 0.1326 | 2.56 | 2000 | 0.2382 | 0.9294 | 0.9634 |
| 0.1056 | 3.2 | 2500 | 0.5074 | 0.8698 | 0.9304 |
| 0.0804 | 3.84 | 3000 | 0.3145 | 0.9174 | 0.9569 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/mini-mlm-imdb-target-imdb
|
muhtasham
| 2022-12-11T03:30:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T03:23:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: mini-mlm-imdb-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.95016
- name: F1
type: f1
value: 0.9744431226155804
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-mlm-imdb-target-imdb
This model is a fine-tuned version of [muhtasham/mini-mlm-imdb](https://huggingface.co/muhtasham/mini-mlm-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1353
- Accuracy: 0.9502
- F1: 0.9744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3856 | 0.64 | 500 | 0.1902 | 0.9298 | 0.9636 |
| 0.2794 | 1.28 | 1000 | 0.2200 | 0.9127 | 0.9544 |
| 0.2369 | 1.92 | 1500 | 0.1269 | 0.9539 | 0.9764 |
| 0.1963 | 2.56 | 2000 | 0.2422 | 0.9079 | 0.9517 |
| 0.1765 | 3.2 | 2500 | 0.3789 | 0.8644 | 0.9273 |
| 0.1486 | 3.84 | 3000 | 0.1353 | 0.9502 | 0.9744 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-imdb-target-imdb
|
muhtasham
| 2022-12-11T03:22:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T03:18:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: tiny-mlm-imdb-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88952
- name: F1
type: f1
value: 0.9415301240526694
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-imdb-target-imdb
This model is a fine-tuned version of [muhtasham/tiny-mlm-imdb](https://huggingface.co/muhtasham/tiny-mlm-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2699
- Accuracy: 0.8895
- F1: 0.9415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5432 | 0.64 | 500 | 0.3567 | 0.8578 | 0.9235 |
| 0.366 | 1.28 | 1000 | 0.3687 | 0.8414 | 0.9138 |
| 0.32 | 1.92 | 1500 | 0.2648 | 0.8922 | 0.9430 |
| 0.2868 | 2.56 | 2000 | 0.3868 | 0.8314 | 0.9079 |
| 0.2671 | 3.2 | 2500 | 0.3092 | 0.8774 | 0.9347 |
| 0.248 | 3.84 | 3000 | 0.2699 | 0.8895 | 0.9415 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/medium-vanilla-target-imdb
|
muhtasham
| 2022-12-11T02:36:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T02:20:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: medium-vanilla-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8964
- name: F1
type: f1
value: 0.945370175068551
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-vanilla-target-imdb
This model is a fine-tuned version of [google/bert_uncased_L-8_H-512_A-8](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4330
- Accuracy: 0.8964
- F1: 0.9454
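For lower-level control, the checkpoint also loads through the standard auto classes; the snippet below is a minimal sketch of scoring a single review.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: score a review with the raw model instead of the pipeline helper.
tokenizer = AutoTokenizer.from_pretrained("muhtasham/medium-vanilla-target-imdb")
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/medium-vanilla-target-imdb")

inputs = tokenizer("A surprisingly tender and funny film.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```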
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3068 | 0.64 | 500 | 0.2373 | 0.9061 | 0.9507 |
| 0.2143 | 1.28 | 1000 | 0.1204 | 0.9534 | 0.9761 |
| 0.1655 | 1.92 | 1500 | 0.1557 | 0.942 | 0.9701 |
| 0.1107 | 2.56 | 2000 | 0.2791 | 0.9268 | 0.9620 |
| 0.0905 | 3.2 | 2500 | 0.4330 | 0.8964 | 0.9454 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Alex2135/ppo-LunarLander-v2
|
Alex2135
| 2022-12-11T02:21:23Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T01:51:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.03 +/- 40.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the .zip filename is an assumption.
checkpoint = load_from_hub("Alex2135/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
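And a short rollout sketch to watch the loaded policy act, using the classic Gym step API that matches SB3 releases from this period:
```python
import gym

# Roll out one episode with the loaded policy (classic 4-tuple step API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```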
|
muhtasham/small-vanilla-target-imdb
|
muhtasham
| 2022-12-11T02:19:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T02:09:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: small-vanilla-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.81456
- name: F1
type: f1
value: 0.8978044264174235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-vanilla-target-imdb
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7710
- Accuracy: 0.8146
- F1: 0.8978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3417 | 0.64 | 500 | 0.1678 | 0.9286 | 0.9630 |
| 0.2401 | 1.28 | 1000 | 0.1262 | 0.9525 | 0.9757 |
| 0.1907 | 1.92 | 1500 | 0.2724 | 0.8963 | 0.9453 |
| 0.1397 | 2.56 | 2000 | 0.2378 | 0.9247 | 0.9609 |
| 0.11 | 3.2 | 2500 | 0.7710 | 0.8146 | 0.8978 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ScrappyCoco666/ppo-LunarLander-v2-5
|
ScrappyCoco666
| 2022-12-11T02:14:08Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T02:13:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 302.61 +/- 18.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the .zip filename is an assumption.
checkpoint = load_from_hub("ScrappyCoco666/ppo-LunarLander-v2-5", "ppo-LunarLander-v2-5.zip")
model = PPO.load(checkpoint)
```
|
redevaaa/fin1
|
redevaaa
| 2022-12-11T02:12:04Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:fin",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-11T01:38:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fin
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: fin1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: fin
type: fin
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.8315412186379928
- name: Recall
type: recall
value: 0.9243027888446215
- name: F1
type: f1
value: 0.8754716981132076
- name: Accuracy
type: accuracy
value: 0.985175455057234
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the fin dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
- Precision: 0.8315
- Recall: 0.9243
- F1: 0.8755
- Accuracy: 0.9852
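As a rough usage sketch, the model can be served through the token-classification pipeline; `aggregation_strategy="simple"` merges word pieces into whole entity spans.
```python
from transformers import pipeline

# Minimal sketch: extract named entities from a financial sentence.
ner = pipeline("token-classification", model="redevaaa/fin1", aggregation_strategy="simple")
print(ner("Shares of Acme Corp rose 3% after the Frankfurt listing."))
```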
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 129 | 0.0860 | 0.8535 | 0.9283 | 0.8893 | 0.9904 |
| No log | 2.0 | 258 | 0.1513 | 0.7993 | 0.9203 | 0.8556 | 0.9799 |
| No log | 3.0 | 387 | 0.0977 | 0.8221 | 0.9203 | 0.8684 | 0.9831 |
| 0.0017 | 4.0 | 516 | 0.0783 | 0.8286 | 0.9243 | 0.8738 | 0.9848 |
| 0.0017 | 5.0 | 645 | 0.0778 | 0.8315 | 0.9243 | 0.8755 | 0.9852 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sd-concepts-library/pokemon-rgby-sprite
|
sd-concepts-library
| 2022-12-11T02:10:06Z | 0 | 7 | null |
[
"license:mit",
"region:us"
] | null | 2022-12-11T02:02:35Z |
---
license: mit
---
### Pokemon RGBY sprite on Stable Diffusion
Pokémon Red/Green/Blue/Yellow battle sprite concept (GameBoy 56x56 upscaled to 512x512)
This is the `<pkmn-rgby>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
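As a minimal sketch (assuming a `diffusers` release that ships `load_textual_inversion`), the learned embedding can also be attached to a standard Stable Diffusion pipeline and invoked via the placeholder token:
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load a base model, attach the learned <pkmn-rgby> embedding,
# then use the placeholder token in the prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/pokemon-rgby-sprite")

image = pipe("a dragon in the style of <pkmn-rgby>").images[0]
image.save("dragon-rgby.png")
```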
Here is the new concept you will be able to use as a `style`:





































































































































































































































































































































































































































































|
muhtasham/mini-vanilla-target-imdb
|
muhtasham
| 2022-12-11T02:08:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T01:57:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: mini-vanilla-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87528
- name: F1
type: f1
value: 0.9334925984386332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-vanilla-target-imdb
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4773
- Accuracy: 0.8753
- F1: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4272 | 0.64 | 500 | 0.2066 | 0.92 | 0.9583 |
| 0.299 | 1.28 | 1000 | 0.2608 | 0.8906 | 0.9422 |
| 0.2533 | 1.92 | 1500 | 0.1706 | 0.9337 | 0.9657 |
| 0.2126 | 2.56 | 2000 | 0.3601 | 0.8576 | 0.9233 |
| 0.1913 | 3.2 | 2500 | 0.3955 | 0.8594 | 0.9244 |
| 0.1541 | 3.84 | 3000 | 0.1432 | 0.9484 | 0.9735 |
| 0.1432 | 4.48 | 3500 | 0.2027 | 0.9346 | 0.9662 |
| 0.1256 | 5.12 | 4000 | 0.3797 | 0.8898 | 0.9417 |
| 0.1026 | 5.75 | 4500 | 0.4773 | 0.8753 | 0.9335 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/tiny-vanilla-target-imdb
|
muhtasham
| 2022-12-11T01:56:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T01:49:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: tiny-vanilla-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.83488
- name: F1
type: f1
value: 0.9100104638995464
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-vanilla-target-imdb
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4589
- Accuracy: 0.8349
- F1: 0.9100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5912 | 0.64 | 500 | 0.4160 | 0.8295 | 0.9068 |
| 0.3949 | 1.28 | 1000 | 0.4095 | 0.8228 | 0.9028 |
| 0.3386 | 1.92 | 1500 | 0.2948 | 0.8804 | 0.9364 |
| 0.2993 | 2.56 | 2000 | 0.4798 | 0.7868 | 0.8807 |
| 0.2791 | 3.2 | 2500 | 0.4555 | 0.8205 | 0.9014 |
| 0.2585 | 3.84 | 3000 | 0.2815 | 0.8859 | 0.9395 |
| 0.2371 | 4.48 | 3500 | 0.4446 | 0.8316 | 0.9081 |
| 0.2189 | 5.12 | 4000 | 0.6102 | 0.7693 | 0.8696 |
| 0.1989 | 5.75 | 4500 | 0.4589 | 0.8349 | 0.9100 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
jlondonobo/whisper-large-v2-pt
|
jlondonobo
| 2022-12-11T01:36:51Z | 7 | 11 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-09T01:47:30Z |
---
language:
- pt
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large v2 Portuguese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 pt
type: mozilla-foundation/common_voice_11_0
config: pt
split: test
args: pt
metrics:
- name: Wer
type: wer
value: 5.590020342630419
---
# Whisper Large V2 Portuguese 🇧🇷🇵🇹
Welcome to **whisper large-v2** for Portuguese transcription 👋🏻
Transcribe Portuguese audio to text with the lowest WER among the comparable models listed below. On the evaluation set it achieves:
- Loss: 0.282
- Wer: 5.590
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) dataset. If you want a lighter model, you may be interested in [jlondonobo/whisper-medium-pt](https://huggingface.co/jlondonobo/whisper-medium-pt). It achieves faster inference with almost no difference in WER.
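A minimal transcription sketch with the `transformers` pipeline (the audio filename is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: transcribe a local Portuguese audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="jlondonobo/whisper-large-v2-pt",
    chunk_length_s=30,  # chunking only matters for audio longer than 30 s
)
print(asr("audio.mp3")["text"])
```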
### Comparable models
Reported **WER** is based on the evaluation subset of Common Voice.
| Model | WER | # Parameters |
|--------------------------------------------------|:--------:|:------------:|
| [jlondonobo/whisper-large-v2-pt](https://huggingface.co/jlondonobo/whisper-large-v2-pt) | **5.590** 🤗 | 1550M |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 6.300 | 1550M |
| [jlondonobo/whisper-medium-pt](https://huggingface.co/jlondonobo/whisper-medium-pt) | 6.579 | 769M |
| [jonatasgrosman/wav2vec2-large-xlsr-53-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-portuguese) | 11.310 | 317M |
| [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) | 20.080 | 317M |
### Training hyperparameters
We used the following hyperparameters for training:
- `learning_rate`: 1e-05
- `train_batch_size`: 16
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 2
- `total_train_batch_size`: 32
- `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 500
- `training_steps`: 5000
- `mixed_precision_training`: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0828 | 1.09 | 1000 | 0.1868 | 6.778 |
| 0.0241 | 3.07 | 2000 | 0.2057 | 6.109 |
| 0.0084 | 5.06 | 3000 | 0.2367 | 6.029 |
| 0.0015 | 7.04 | 4000 | 0.2469 | 5.709 |
| 0.0009 | 9.02 | 5000 | 0.2821 | 5.590 🤗|
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
eublefar/bigbird-dialogue-score
|
eublefar
| 2022-12-11T01:18:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"big_bird",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-10T13:26:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bigbird-dialogue-score
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-dialogue-score
This model is a fine-tuned version of [google/bigbird-roberta-large](https://huggingface.co/google/bigbird-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2129
- eval_f1: 0.9290
- eval_precision: 0.9173
- eval_recall: 0.9410
- eval_runtime: 311.0516
- eval_samples_per_second: 49.304
- eval_steps_per_second: 6.163
- epoch: 1.0
- step: 5432
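The card does not document the expected input format, so the snippet below is only a rough sketch: it assumes the model scores a dialogue context/response pair encoded as a sequence pair.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Rough sketch: pairing context and candidate response is an assumption,
# since the training data and input format are not documented.
tokenizer = AutoTokenizer.from_pretrained("eublefar/bigbird-dialogue-score")
model = AutoModelForSequenceClassification.from_pretrained("eublefar/bigbird-dialogue-score")

inputs = tokenizer("How was your weekend?", "Pretty relaxing, I went hiking.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```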
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 6
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
odiaz1066/PPO-LunarLander
|
odiaz1066
| 2022-12-11T00:58:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T00:58:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.66 +/- 14.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the .zip filename is an assumption.
checkpoint = load_from_hub("odiaz1066/PPO-LunarLander", "PPO-LunarLander.zip")
model = PPO.load(checkpoint)
```
|