modelId (string, length 5-139) | author (string, length 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-27 18:28:06) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 523 classes) | tags (list, length 1-4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-27 18:27:40) | card (string, length 11-1.01M)
---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-2_sixties-8_s869 | jonatasgrosman | 2022-12-11T18:32:58Z | 5 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T18:32:47Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_vp-100k_age_teens-2_sixties-8_s869
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
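As a usage sketch (not part of the original card), the checkpoint can be loaded through HuggingSound itself; the audio paths below are placeholders.
```python
from huggingsound import SpeechRecognitionModel

# Load this checkpoint with HuggingSound (the tool used for fine-tuning)
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-2_sixties-8_s869")

# Placeholder paths; input audio should be sampled at 16 kHz
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)  # list of dicts with a "transcription" key
```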
|
amitkayal/whisper-tiny-hi | amitkayal | 2022-12-11T18:31:44Z | 30 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-05T14:00:20Z |
---
language:
- hi
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-tiny-hi
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: hi
      split: test
      args: hi
    metrics:
    - name: Wer
      type: wer
      value: 43.88685085406397
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-hi
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7990
- Wer: 43.8869
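As a usage sketch (assumed, not from the original card), the checkpoint can be run through the Transformers ASR pipeline; the audio path is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; Whisper resamples input audio to 16 kHz internally
asr = pipeline("automatic-speech-recognition", model="amitkayal/whisper-tiny-hi")

print(asr("sample_hindi.wav")["text"])  # placeholder audio path
```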
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1747 | 7.02 | 1000 | 0.5674 | 41.6800 |
| 0.0466 | 14.03 | 2000 | 0.7042 | 43.7378 |
| 0.0174 | 22.0 | 3000 | 0.7990 | 43.8869 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.10.0
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-2_sixties-8_s481 | jonatasgrosman | 2022-12-11T18:29:47Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T18:29:37Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_vp-100k_age_teens-2_sixties-8_s481
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
SamoaJon/ppo-LunarLander-v2-TEST | SamoaJon | 2022-12-11T18:27:15Z | 4 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-11T18:26:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 272.44 +/- 17.07
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's file list for the actual .zip name
checkpoint = load_from_hub("SamoaJon/ppo-LunarLander-v2-TEST", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-2_sixties-8_s468 | jonatasgrosman | 2022-12-11T18:21:32Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T18:21:08Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_vp-100k_age_teens-2_sixties-8_s468
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Conflictx/AnimeScreencap | Conflictx | 2022-12-11T18:18:49Z | 0 | 91 | null | ["text-to-image", "v2.0", "Embedding", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2022-12-06T18:45:55Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- v2.0
- Embedding
---
Textual Inversion embedding by ConflictX for SD 2.x, trained on 768x768 images from anime sources.
Install by downloading the embedding and putting it in the \embeddings folder.
A beautiful art style, this one focused on warm environments, with an emphasis on movie-stylized anime.
This one has a bit more difficulty getting faces right, but it is possible.
Use keyword: AnimeScreenCap
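For instance, an illustrative prompt (my example, not from the original card):
```
a quiet village street at sunset, warm lighting, detailed background, AnimeScreenCap
```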
Images:





Mixes with my other embeddings:
Vikingpunk:

Chempunk:

Kipaki:

Candypunk:

|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-10_sixties-0_s613 | jonatasgrosman | 2022-12-11T18:18:48Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T18:18:33Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_vp-100k_age_teens-10_sixties-0_s613
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Conflictx/VikingPunk | Conflictx | 2022-12-11T18:18:26Z | 0 | 96 | null | ["text-to-image", "v2.0", "Embedding", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2022-12-02T20:59:02Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- v2.0
- Embedding
---
Textual Inversion embedding by ConflictX for SD 2.x, trained on 768x768 images from Midjourney.
Install by downloading the embedding and putting it in the \embeddings folder.
Similar to the Egyptian-styled one, this one is more focused on cooler environments and Viking + cyberpunk themes. It works fine for space environments as well, like Alien.
Use keyword: VikingPunk








|
Conflictx/Chempunk | Conflictx | 2022-12-11T18:18:02Z | 0 | 60 | null | ["text-to-image", "v2.0", "Embedding", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2022-12-02T23:28:09Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- v2.0
- Embedding
---
Textual Inversion embedding by ConflictX for SD 2.x, trained on 768x768 images from Midjourney and other sources.
Install by downloading the embedding and putting it in the \embeddings folder.
Another themed one, this one more focused on toxic environments and dystopian + dieselpunk themes.
Use keyword: ChemPunk







|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-10_sixties-0_s261 | jonatasgrosman | 2022-12-11T18:13:38Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T18:13:26Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_vp-100k_age_teens-10_sixties-0_s261
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-0_sixties-10_s464 | jonatasgrosman | 2022-12-11T18:08:11Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T18:08:00Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_vp-100k_age_teens-0_sixties-10_s464
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
rherrmann/ppo-LunarLander-v2 | rherrmann | 2022-12-11T18:07:19Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-11T18:05:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 269.35 +/- 14.72
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's file list for the actual .zip name
checkpoint = load_from_hub("rherrmann/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Hossein-Bodaghi/CA_Market | Hossein-Bodaghi | 2022-12-11T18:05:14Z | 0 | 0 | null | ["license:cc-by-nc-sa-4.0", "region:us"] | null | 2022-12-11T18:05:12Z |
---
license: cc-by-nc-sa-4.0
---
|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-5_sixties-5_s169 | jonatasgrosman | 2022-12-11T18:00:03Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:59:48Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_vp-100k_age_teens-5_sixties-5_s169
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Glen/ppo-LunarLander-v2 | Glen | 2022-12-11T17:58:19Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-11T17:57:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 264.00 +/- 20.12
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's file list for the actual .zip name
checkpoint = load_from_hub("Glen/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-5_sixties-5_s12 | jonatasgrosman | 2022-12-11T17:57:35Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:57:24Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_vp-100k_age_teens-5_sixties-5_s12
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Kwaku/gpt2-finetuned-banking77 | Kwaku | 2022-12-11T17:54:30Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "eng", "dataset:banking77", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-07-20T01:06:19Z |
---
language: eng
datasets:
- banking77
---
# GPT2 Fine-Tuned Banking 77
This is a fine-tuned version of the GPT2 model, best suited for text generation.
## Model Description
Kwaku/gpt2-finetuned-banking77 was fine-tuned on the [banking77](https://huggingface.co/datasets/banking77) dataset, which is "composed of online banking queries annotated with their corresponding intents."
## Intended Uses and Limitations
Given the size of the [Microsoft DialoGPT-large](https://huggingface.co/microsoft/DialoGPT-large) model, the author opted to fine-tune the GPT2 model to create a chatbot. The intent was for the chatbot to emulate a banking customer agent, hence the use of the banking77 dataset. However, when the fine-tuned model was deployed in the chatbot, the results were undesirable: its responses were inappropriate and unnecessarily long, and the last word of a response was often repeated many times over, a major glitch. The model performs better at plain text generation, though it tends to generate banking-related text because of the corpus it was trained on.
### How to use
You can use this model directly with a pipeline for text generation:
```python
>>> from transformers import pipeline
>>> model_name = "Kwaku/gpt2-finetuned-banking77"
>>> generator = pipeline("text-generation", model=model_name)
>>> result = generator("My money is", max_length=15, num_return_sequences=2)
>>> print(result)
[{'generated_text': 'My money is stuck in ATM pending. Please cancel this transaction and refund it'}, {'generated_text': 'My money is missing. How do I get a second card, and how'}]
```
### Limitations and bias
For users who want a diverse text-generator, this model's tendency to generate mostly bank-related text will be a drawback. It also inherits [the biases of its parent model, the GPT2](https://huggingface.co/gpt2#limitations-and-bias).
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s741 | jonatasgrosman | 2022-12-11T17:54:29Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:54:19Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s741
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s71 | jonatasgrosman | 2022-12-11T17:50:49Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:50:38Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s71
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s500 | jonatasgrosman | 2022-12-11T17:47:53Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:47:24Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s500
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-2_sixties-8_s304 | jonatasgrosman | 2022-12-11T17:41:37Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:41:18Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-2_sixties-8_s304
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-10_sixties-0_s232 | jonatasgrosman | 2022-12-11T17:30:13Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:30:02Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-10_sixties-0_s232
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s649 | jonatasgrosman | 2022-12-11T17:17:12Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:17:01Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s649
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s197 | jonatasgrosman | 2022-12-11T17:14:23Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:14:12Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s197
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-8_sixties-2_s786 | jonatasgrosman | 2022-12-11T17:09:21Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:09:09Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-8_sixties-2_s786
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Kwaku/social_media_sa | Kwaku | 2022-12-11T17:05:23Z | 4 | 1 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "eng", "dataset:banking77", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-21T23:20:43Z |
---
language: eng
datasets:
- banking77
---
# Social Media Sentiment Analysis Model
This is a fine-tuned version of the DistilBERT model, best suited for sentiment analysis.
## Model Description
The Social Media Sentiment Analysis Model was trained on a [dataset of tweets](https://www.kaggle.com/code/mohamednabill7/sentiment-analysis-of-twitter-data/data) obtained from Kaggle.
## Intended Uses and Limitations
This model is meant for sentiment analysis. Because it was trained on a corpus of tweets, it is familiar with social media jargon.
### How to use
You can use this model directly with a pipeline for sentiment analysis:
```python
>>> from transformers import pipeline
>>> model_name = "Kwaku/social_media_sa"
>>> generator = pipeline("sentiment-analysis", model=model_name)
>>> result = generator("I like this model")
>>> print(result)
Generated output: [{'label': 'positive', 'score': 0.9494990110397339}]
```
### Limitations and bias
This model inherits the biases of its parent, [DistilBERT](https://huggingface.co/models?other=distilbert).
Besides that, it was trained on only 1,000 randomly selected sequences, so its prediction confidence is limited.
It does fairly well nonetheless.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s877 | jonatasgrosman | 2022-12-11T17:03:42Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:03:31Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s877
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s510 | jonatasgrosman | 2022-12-11T17:00:35Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T17:00:23Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s510
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
nemanjar/ppo-LunarLander-v2 | nemanjar | 2022-12-11T16:57:50Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-07T20:28:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 287.80 +/- 16.52
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's file list for the actual .zip name
checkpoint = load_from_hub("nemanjar/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s273 | jonatasgrosman | 2022-12-11T16:56:47Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T16:56:35Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s273
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s304 | jonatasgrosman | 2022-12-11T16:39:01Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T16:38:44Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s304
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s872 | jonatasgrosman | 2022-12-11T16:32:46Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T16:32:23Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s872
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
pgfeldman/model_explorer_hello_world | pgfeldman | 2022-12-11T16:28:54Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-12-11T16:17:26Z |
---
license: cc-by-4.0
---
This model is a GPT-2 model fine-tuned on a small corpus of tweets about Paxlovid and Ivermectin. It is designed to be a "hello world" model to be used in conjunction with the "ModelExplorer" app that is part of the GitHub [KeywordExplorer](https://github.com/pgfeldman/KeywordExplorer) repository.
The key feature of this model is that it has been trained to use "Meta Wrapping", which adds additional information to the corpus that the model is then trained on. Examples are shown below:
[[text: RT @Andygetout: Sehr geehrter @Karl_Lauterbach,gestern und heute musste ich mit Schrecken feststellen, wie und warum Paxlovid NICHT bei d… || created: 2022-09-04 07:10:25 || location: Kaiserslautern, Germany || probability: twenty]]
[[text: RT @axios: There's growing concern about the link between Pfizer's antiviral pill and COVID rebound, in which patients test positive or hav… || created: 2022-09-03 02:40:34 || location: Bendigo, Victoria. Australia || probability: thirty]]
In this case a tweet (everything after "text:" and before the first "||") has been embedded in *Meta Wrapping*, which adds information like date, location, and an arbitrary "probability" tag that will be "ten", "twenty", "thirty", or "forty". When generating text, these tags will reflect the meta information as well as the text. For example, a well-trained model will emit "probability: ten" close to 10% of the time.
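A minimal sketch of producing such a wrapped line (the helper and the field values are hypothetical, for illustration only):
```python
def meta_wrap(text: str, created: str, location: str, probability: str) -> str:
    # Hypothetical helper: joins tweet text and metadata fields
    # using the "[[text: ... || key: value || ...]]" layout described above
    return f"[[text: {text} || created: {created} || location: {location} || probability: {probability}]]"

print(meta_wrap("Example tweet about Paxlovid", "2022-09-04 07:10:25", "Berlin, Germany", "twenty"))
```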
|
EffyLi/bert-base-uncased-finetuned-ner-finetuned-ner | EffyLi | 2022-12-11T16:18:36Z | 12 | 0 | transformers | ["transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-12-11T16:17:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-base-uncased-finetuned-ner-finetuned-ner
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner-finetuned-ner
This model is a fine-tuned version of [EffyLi/bert-base-uncased-finetuned-ner](https://huggingface.co/EffyLi/bert-base-uncased-finetuned-ner) on the conll2003 dataset.
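A usage sketch (assumed, not from the original card) with the Transformers token-classification pipeline:
```python
from transformers import pipeline

# aggregation_strategy="simple" groups sub-word pieces into whole entity spans
ner = pipeline("token-classification",
               model="EffyLi/bert-base-uncased-finetuned-ner-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```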
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0
- Datasets 2.7.1
- Tokenizers 0.11.0
|
ntinosmg/ppo-Huggy | ntinosmg | 2022-12-11T16:02:27Z | 1 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2022-12-11T16:02:16Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ntinosmg/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on "Watch the agent play" 👀
|
teddy322/wav2vec2-large-xls-r-300m-kor-lr-5e-4 | teddy322 | 2022-12-11T15:48:33Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:zeroth_korean_asr", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-07T16:02:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
model-index:
- name: wav2vec2-large-xls-r-300m-kor-lr-5e-4
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kor-lr-5e-4
This model is a fine-tuned version of [teddy322/wav2vec2-large-xls-r-300m-kor-lr-5e-4](https://huggingface.co/teddy322/wav2vec2-large-xls-r-300m-kor-lr-5e-4) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6605
- eval_wer: 0.4005
- eval_runtime: 150.1937
- eval_samples_per_second: 3.043
- eval_steps_per_second: 0.386
- epoch: 7.87
- step: 2800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
EffyLi/bert-base-NER-finetuned-ner | EffyLi | 2022-12-11T15:27:03Z | 12 | 0 | transformers | ["transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-12-08T10:52:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-base-NER-finetuned-ner
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-NER-finetuned-ner
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0
- Datasets 2.7.1
- Tokenizers 0.11.0
|
AI-MeisterBin/ko-sentence-bert-MeisterBin | AI-MeisterBin | 2022-12-11T14:52:37Z | 4 | 0 | transformers | ["transformers", "pytorch", "tf", "roberta", "feature-extraction", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-12-11T10:19:49Z |
This is a BERT model for building Meari, a psychological counseling chatbot.
Chatbot: https://ai-meisterbin-project-chatbot-main-chatbot-qj3hxl.streamlit.app/
GitHub: https://github.com/AI-MeisterBin/project_chatbot
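A feature-extraction sketch (assumed usage, not from the original card; mean pooling is one common way to get a sentence embedding):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AI-MeisterBin/ko-sentence-bert-MeisterBin")
model = AutoModel.from_pretrained("AI-MeisterBin/ko-sentence-bert-MeisterBin")

inputs = tokenizer("오늘 기분이 우울해요", return_tensors="pt")  # "I feel down today"
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
embedding = hidden.mean(dim=1)                  # mean-pooled sentence embedding
```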
|
sanchit-gandhi/whisper-small-kab-1k-steps | sanchit-gandhi | 2022-12-11T14:43:28Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ka", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T11:22:40Z |
---
language:
- ka
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Georgian
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0 kab
      type: mozilla-foundation/common_voice_11_0
      config: kab
      split: test
      args: kab
    metrics:
    - name: Wer
      type: wer
      value: 53.84203447245193
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Georgian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 kab dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6125
- Wer: 53.8420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5555 | 1.06 | 1000 | 0.6125 | 53.8420 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221210+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
anuragshas/whisper-large-v2-hi-v2 | anuragshas | 2022-12-11T14:33:10Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-11T06:34:14Z |
---
language:
- hi
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large-v2 Hindi
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0 hi
      type: mozilla-foundation/common_voice_11_0
      config: hi
      split: test
      args: hi
    metrics:
    - name: Wer
      type: wer
      value: 12.457650398315174
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Hindi
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 hi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1870
- Wer: 12.4577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2097 | 0.37 | 100 | 0.2616 | 17.6701 |
| 0.1578 | 0.73 | 200 | 0.2108 | 14.0990 |
| 0.0806 | 1.1 | 300 | 0.1870 | 12.4577 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
ScrappyCoco666/ppo-Huggy-1 | ScrappyCoco666 | 2022-12-11T14:25:43Z | 8 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2022-12-11T14:25:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ScrappyCoco666/ppo-Huggy-1
3. Select your *.nn or *.onnx file
4. Click on "Watch the agent play" 👀
|
Yanjie24/bart-samsung-test | Yanjie24 | 2022-12-11T14:09:07Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:samsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-12-11T13:40:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: bart-samsung-test
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: train
      args: samsum
    metrics:
    - name: Rouge1
      type: rouge
      value: 46.7195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-samsung-test
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5511
- Rouge1: 46.7195
- Rouge2: 23.3711
- Rougel: 39.5121
- Rougelsum: 43.2091
- Gen Len: 17.7738
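A summarization usage sketch (assumed, not from the original card; the dialogue is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Yanjie24/bart-samsung-test")

dialogue = "Anna: Are we still on for lunch today?\nBen: Yes, 12:30 at the usual place."
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```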
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6838 | 1.0 | 1841 | 1.5511 | 46.7195 | 23.3711 | 39.5121 | 43.2091 | 17.7738 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sohm/ppo-LunarLander-v2 | sohm | 2022-12-11T14:04:32Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-10T22:54:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 249.39 +/- 18.24
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's file list for the actual .zip name
checkpoint = load_from_hub("sohm/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
polejowska/convnext-tiny-224-eurosat | polejowska | 2022-12-11T14:00:13Z | 26 | 0 | transformers | ["transformers", "pytorch", "convnext", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-12-11T13:48:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-eurosat
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9537037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-eurosat
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3153
- Accuracy: 0.9537
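An inference sketch (assumed usage, not from the original card; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="polejowska/convnext-tiny-224-eurosat")

# Placeholder path to an EuroSAT-style RGB satellite patch
print(classifier("satellite_patch.png"))
```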
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.863 | 0.98 | 33 | 1.5775 | 0.7619 |
| 1.039 | 1.98 | 66 | 0.8142 | 0.9008 |
| 0.5825 | 2.98 | 99 | 0.4442 | 0.9339 |
| 0.3228 | 3.98 | 132 | 0.3153 | 0.9537 |
| 0.2641 | 4.98 | 165 | 0.2868 | 0.9524 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ignamonte/ppo-Huggy | ignamonte | 2022-12-11T13:48:48Z | 6 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2022-12-11T13:48:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ignamonte/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on "Watch the agent play" 👀
|
paulkm/autotrain-lottery_v2-2420075389 | paulkm | 2022-12-11T13:36:25Z | 5 | 0 | transformers | ["transformers", "pytorch", "autotrain", "text-classification", "zh", "dataset:paulkm/autotrain-data-lottery_v2", "co2_eq_emissions", "endpoints_compatible", "region:us"] | text-classification | 2022-12-11T13:31:07Z |
---
tags:
- autotrain
- text-classification
language:
- zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- paulkm/autotrain-data-lottery_v2
co2_eq_emissions:
  emissions: 0.06047934032845949
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2420075389
- CO2 Emissions (in grams): 0.0605
## Validation Metrics
- Loss: 0.122
- Accuracy: 0.965
- Precision: 0.976
- Recall: 0.946
- AUC: 0.988
- F1: 0.961
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/paulkm/autotrain-lottery_v2-2420075389
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("paulkm/autotrain-lottery_v2-2420075389", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("paulkm/autotrain-lottery_v2-2420075389", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
paulkm/autotrain-lottery_v2-2420075390 | paulkm | 2022-12-11T13:32:25Z | 4 | 0 | transformers | ["transformers", "pytorch", "autotrain", "text-classification", "zh", "dataset:paulkm/autotrain-data-lottery_v2", "co2_eq_emissions", "endpoints_compatible", "region:us"] | text-classification | 2022-12-11T13:30:55Z |
---
tags:
- autotrain
- text-classification
language:
- zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- paulkm/autotrain-data-lottery_v2
co2_eq_emissions:
  emissions: 0.013953144730323944
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2420075390
- CO2 Emissions (in grams): 0.0140
## Validation Metrics
- Loss: 0.117
- Accuracy: 0.966
- Precision: 0.965
- Recall: 0.960
- AUC: 0.990
- F1: 0.962
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/paulkm/autotrain-lottery_v2-2420075390
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("paulkm/autotrain-lottery_v2-2420075390", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("paulkm/autotrain-lottery_v2-2420075390", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
pranay-j/whisper-small-hindi | pranay-j | 2022-12-11T13:31:10Z | 16 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-10T01:56:13Z |
---
language:
- hi
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small hi- HYDDCSEZ
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 18.798644812746083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small hi- HYDDCSEZ
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6357
- Wer: 18.7986
## Model description
More information needed
## Intended uses & limitations
More information needed
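A minimal inference sketch (not part of the original card; the audio path and chunking settings are assumptions):
```python
from transformers import pipeline

# chunk_length_s lets the pipeline transcribe audio longer than 30 seconds
asr = pipeline("automatic-speech-recognition", model="pranay-j/whisper-small-hindi", chunk_length_s=30)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio path
```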
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0037 | 14.01 | 1000 | 0.4715 | 19.1786 |
| 0.0001 | 28.01 | 2000 | 0.5589 | 18.5377 |
| 0.0001 | 43.01 | 3000 | 0.6008 | 18.5903 |
| 0.0 | 57.01 | 4000 | 0.6234 | 18.7735 |
| 0.0 | 72.01 | 5000 | 0.6357 | 18.7986 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
mokryak/ppo-LunarLander-v2
|
mokryak
| 2022-12-11T12:41:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T10:25:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.82 +/- 17.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("mokryak/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
hanq0212/RL_course_unit2
|
hanq0212
| 2022-12-11T12:07:11Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T11:29:41Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 859.00 +/- 348.69
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hanq0212 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hanq0212 -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hanq0212
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
janzw/ppo-lunar-lander-v2_r5
|
janzw
| 2022-12-11T12:03:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T12:03:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.49 +/- 16.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("janzw/ppo-lunar-lander-v2_r5", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ahmetfirat/ppo-LunarLander-v2
|
ahmetfirat
| 2022-12-11T12:02:27Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T11:30:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.93 +/- 12.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("ahmetfirat/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sanchit-gandhi/whisper-small-sl-1k-steps
|
sanchit-gandhi
| 2022-12-11T11:22:31Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"sl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T10:15:40Z |
---
language:
- sl
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Slovenian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 sl
type: mozilla-foundation/common_voice_11_0
config: sl
split: test
args: sl
metrics:
- name: Wer
type: wer
value: 26.588921282798832
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Slovenian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 sl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4625
- Wer: 26.5889
## Model description
More information needed
## Intended uses & limitations
More information needed
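A minimal inference sketch (not part of the original card; the audio path and chunking settings are assumptions):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; chunk_length_s handles audio longer than 30 seconds
asr = pipeline("automatic-speech-recognition", model="sanchit-gandhi/whisper-small-sl-1k-steps", chunk_length_s=30)
print(asr("clip.wav")["text"])  # "clip.wav" is a placeholder audio path
```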
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0027 | 13.01 | 1000 | 0.4625 | 26.5889 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221210+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
harryrudolph/ppo-Huggy
|
harryrudolph
| 2022-12-11T11:07:00Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-11T11:06:45Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: harryrudolph/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vantezzen/pankocat
|
vantezzen
| 2022-12-11T10:55:24Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-11T10:44:50Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Pnkct1 Dreambooth model trained by vantezzen with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
polejowska/convnext-tiny-224-finetuned-eurosat-vitconfig-test-1
|
polejowska
| 2022-12-11T10:12:45Z | 28 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-11T09:59:58Z |
---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: convnext-tiny-224-finetuned-eurosat-vitconfig-test-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-vitconfig-test-1
This model is a fine-tuned version of [](https://huggingface.co/) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
polejowska/convnext-tiny-224-finetuned-eurosat-vitconfig-test
|
polejowska
| 2022-12-11T09:47:22Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-11T09:25:43Z |
---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: convnext-tiny-224-finetuned-eurosat-vitconfig-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-vitconfig-test
This model is a fine-tuned version of [](https://huggingface.co/) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Alan1999/ppo-LunarLander-v2
|
Alan1999
| 2022-12-11T09:24:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T09:23:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.83 +/- 15.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("Alan1999/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
SerdarHelli/SDF-StyleGAN-3D
|
SerdarHelli
| 2022-12-11T09:01:38Z | 0 | 4 | null |
[
"Shape modeling",
"Volumetric models",
"dataset:shapenet",
"arxiv:2206.12055",
"license:other",
"region:us"
] | null | 2022-12-08T07:19:24Z |
---
license: other
tags:
- Shape modeling
- Volumetric models
datasets:
- shapenet
---
### Model Description
- SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation
- Zheng, Xin-Yang and Liu, Yang and Wang, Peng-Shuai and Tong, Xin, 2022
SDF-StyleGAN is a deep learning model for 3D shape generation that is based on StyleGAN2 and operates on signed distance fields (SDFs). The goal of this approach is to minimize the visual and geometric differences between the generated shapes and a collection of existing shapes.
### Documents
- [GitHub Repo](https://github.com/Zhengxinyang/SDF-StyleGAN)
- [Paper - SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation](https://arxiv.org/pdf/2206.12055.pdf)
### Datasets
ShapeNet is a comprehensive 3D shape dataset created for research in computer graphics, computer vision, robotics, and related disciplines.
- [Official ShapeNet dataset](https://shapenet.org/)
- [author's data preparation script](https://github.com/Zhengxinyang/SDF-StyleGAN)
- [author's training data](https://pan.baidu.com/s/1nVS7wlcOz62nYBgjp_M8Yg?pwd=oj1b)
### How to use
Training snippets are published under the official GitHub repository above.
### BibTeX Entry and Citation Info
```
@inproceedings{zheng2022sdfstylegan,
title = {SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation},
author = {Zheng, Xin-Yang and Liu, Yang and Wang, Peng-Shuai and Tong, Xin},
booktitle = {Comput. Graph. Forum (SGP)},
year = {2022},
}
```
|
polejowska/convnext-tiny-224-finetuned-eurosat-att-auto
|
polejowska
| 2022-12-11T09:01:21Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-11T08:25:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-eurosat-att-auto
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9506172839506173
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-att-auto
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5076
- Accuracy: 0.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
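A minimal inference sketch (not part of the original card; the image path is an assumption):
```python
from transformers import pipeline

# ConvNeXt classifier fine-tuned on an EuroSAT-style imagefolder dataset
classifier = pipeline("image-classification", model="polejowska/convnext-tiny-224-finetuned-eurosat-att-auto")
print(classifier("satellite_tile.png"))  # placeholder image path
```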
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5583 | 0.97 | 23 | 1.6008 | 0.7160 |
| 1.2953 | 1.97 | 46 | 1.2957 | 0.7531 |
| 0.9488 | 2.97 | 69 | 1.0720 | 0.8148 |
| 0.7036 | 3.97 | 92 | 0.8965 | 0.8642 |
| 0.5446 | 4.97 | 115 | 0.7574 | 0.9383 |
| 0.4113 | 5.97 | 138 | 0.6522 | 0.9383 |
| 0.2259 | 6.97 | 161 | 0.5720 | 0.9383 |
| 0.1863 | 7.97 | 184 | 0.5076 | 0.9506 |
| 0.1443 | 8.97 | 207 | 0.4795 | 0.9383 |
| 0.1289 | 9.97 | 230 | 0.4685 | 0.9383 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CarpetCleaningLewisvilleTX/CarpetCleaningLewisvilleTX
|
CarpetCleaningLewisvilleTX
| 2022-12-11T08:46:58Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:46:39Z |
---
license: other
---
Carpet Cleaning Lewisville TX
https://carpetcleaninglewisville.com/
972-338-5376
Are you searching for efficient and affordable carpet cleaning? You should know that there is just one place for you to call: Carpet Cleaning Lewisville, TX. Enjoy top-quality cleaning that is also eco-friendly from professional cleaners today. You simply need to call our number and book your visit. Pet steam cleaning is the most effective way to handle pet stain removal, as well as spot removal, stain removal, wine stain removal, and even odor removal. Steam cleaning has proven to be far more effective than chemical techniques, which not only wear out your carpets over time but also harm your skin and take a lot of effort. Steam cleaning, on the other hand, is an eco-friendly green cleaning method that efficiently reaches the deep spots in your carpets and completely removes any stain. It is also safe and inexpensive, and you won't need to put in any effort. Carpet Cleaning Lewisville, TX, will take care of everything for you.
|
CoppellCarpetCleaning/CoppellCarpetCleaning
|
CoppellCarpetCleaning
| 2022-12-11T08:44:00Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:43:06Z |
---
license: other
---
Coppell Carpet Cleaning
https://coppellcarpetcleaning.com/
(972) 914-8246
Our green carpet cleaners use the most advanced and refined methods to perform all of your home's cleaning. Our clients remark how pleased they are that we only use materials and cleaning products that are safe for their children, pets, and other family members. They always appreciate that we take it upon ourselves to make their homes completely safe.
|
RichardsonTXCarpetCleaning/DryerVentCleaningRichardsonTX
|
RichardsonTXCarpetCleaning
| 2022-12-11T08:40:02Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:39:31Z |
---
license: other
---
Dryer Vent Cleaning Richardson TX
https://carpetcleaning-richardson.com/dryer-vent-cleaning.html
(972) 454-9815
Additionally, if your vents are clogged, we can assist you in preventing dryer fires. If your clothes come out too hot or your dryer itself runs hot, this means that the hot-air vents are blocked. Once we remove the accumulated lint from the vents, we will be able to resolve this issue quickly. When customers need their dryers reconditioned or all of the lint that has built up in their vents removed, our skilled team is there to help.
|
RichardsonTXCarpetCleaning/TileandGroutCleaningRichardsonTX
|
RichardsonTXCarpetCleaning
| 2022-12-11T08:36:55Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:36:05Z |
---
license: other
---
Tile and Grout Cleaning Richardson TX
https://carpetcleaning-richardson.com/tile-and-grout-cleaning.html
(972) 454-9815
If you've been putting off cleaning your tiles because of the cost, we have a cheap tile cleaning service that brightens your floor and gives your home a clean look. Carpet Cleaning in Richardson, Texas, doesn't just clean carpets. We cover everything when it comes to cleaning your home, from your ducts and vents to your tile and grout.
|
RichardsonTXCarpetCleaning/AreaRugCleaningRichardsonTX
|
RichardsonTXCarpetCleaning
| 2022-12-11T08:27:50Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:27:16Z |
---
license: other
---
Area Rug Cleaning Richardson TX
https://carpetcleaning-richardson.com/area-rug-cleaning.html
(972) 454-9815
Do you need the best rug shampooing services in town? Do you want to bring back the natural beauty of your rugs after they have lost their original appearance? By simply calling our professionals, Richardson TX Carpet Cleaning will be able to properly clean them for you, leaving them looking good and brightening up your home at any time.
|
luigisaetta/whisper-medium-it
|
luigisaetta
| 2022-12-11T08:19:08Z | 18 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"whisper-event",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T18:00:42Z |
---
language:
- it
license: apache-2.0
tags:
- generated_from_trainer
- whisper-event
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: luigisaetta/whisper-medium-it
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 it
type: mozilla-foundation/common_voice_11_0
config: it
split: test
args: it
metrics:
- name: Wer
type: wer
value: 5.7191
---
# luigisaetta/whisper-medium-it
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1452
- Wer: 5.7191
## Model description
This model is a fine-tuning of the OpenAI Whisper Medium model on the specified dataset.
## Intended uses & limitations
This model has been developed as part of the Hugging Face Whisper Fine-Tuning sprint, December 2022.
It is meant to spread knowledge of how these models are built, and it can be used to develop solutions
that need ASR for the Italian language.
It has not been extensively tested; it is possible that accuracy will be lower on other datasets.
Please test it before using it.
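As a quick sanity check, here is a minimal inference sketch (the audio path and chunking settings are assumptions):
```python
from transformers import pipeline

# chunk_length_s lets the pipeline transcribe audio longer than 30 seconds
asr = pipeline("automatic-speech-recognition", model="luigisaetta/whisper-medium-it", chunk_length_s=30)
print(asr("audio_it.wav")["text"])  # placeholder audio path
```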
## Training and evaluation data
Trained and tested on Mozilla Common Voice, version 11.
## Training procedure
The script **run.sh** and the Python file used for training are saved in the repository.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1216 | 0.2 | 1000 | 0.2289 | 10.0594 |
| 0.1801 | 0.4 | 2000 | 0.1851 | 7.6593 |
| 0.1763 | 0.6 | 3000 | 0.1615 | 6.5258 |
| 0.1337 | 0.8 | 4000 | 0.1506 | 6.0427 |
| 0.0742 | 1.05 | 5000 | 0.1452 | 5.7191 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
polixonrio/whisper-small-fy-NL
|
polixonrio
| 2022-12-11T08:09:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"fy",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-10T17:27:53Z |
---
language:
- fy
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Western Frisian (Netherlands)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 fy-NL
type: mozilla-foundation/common_voice_11_0
config: fy-NL
split: test
args: fy-NL
metrics:
- name: Wer
type: wer
value: 22.29686271707282
---
# Whisper Small Western Frisian (Netherlands)
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 fy-NL dataset.
This is an attempt at cross-lingual transfer from Dutch to Frisian, since Whisper does not support Frisian.
It achieves the following results on the evaluation set:
- Loss: 0.5443
- Wer: 22.2969
## Model description
More information needed
## Intended uses & limitations
More information needed
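A minimal inference sketch (not part of the original card; the audio path and chunking settings are assumptions):
```python
from transformers import pipeline

# chunk_length_s lets the pipeline transcribe audio longer than 30 seconds
asr = pipeline("automatic-speech-recognition", model="polixonrio/whisper-small-fy-NL", chunk_length_s=30)
print(asr("frisian_sample.wav")["text"])  # placeholder audio path
```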
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0067 | 10.01 | 1000 | 0.4810 | 23.0115 |
| 0.0008 | 21.0 | 2000 | 0.5200 | 22.3576 |
| 0.0004 | 31.01 | 3000 | 0.5443 | 22.2969 |
| 0.0003 | 42.0 | 4000 | 0.5610 | 22.3719 |
| 0.0002 | 52.01 | 5000 | 0.5674 | 22.3898 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
lukechoi76/ppo-LunarLander-v4
|
lukechoi76
| 2022-12-11T08:04:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T08:03:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.18 +/- 20.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("lukechoi76/ppo-LunarLander-v4", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CarpetCleaningMesquiteTX/DryerVentCleaningMesquiteTX
|
CarpetCleaningMesquiteTX
| 2022-12-11T08:01:27Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:01:08Z |
---
license: other
---
Dryer Vent Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/dryer-vent-cleaning.html
(469) 213-8132
When you wash a lot each week, your dryer often works very hard to dry your clothes. It is safe to assume that your dryer uses a lot of the electricity in your home because it is used constantly.
|
CarpetCleaningMesquiteTX/AirDuctCleaningMesquiteTX
|
CarpetCleaningMesquiteTX
| 2022-12-11T08:00:43Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T08:00:17Z |
---
license: other
---
Air Duct Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/air-duct-cleaning.html
(469) 213-8132
Cleaning the air ducts is very important. We ensure that your carpets, tile flooring, and rugs are kept clean and in good condition. In addition to cleaning air ducts, we can deal with a variety of heater and air conditioner cleaning issues. Your air ducts can be cleared of dust and debris quickly and inexpensively. No matter how big or small the job is, our team of certified and professionally trained technicians will complete it correctly.
|
CarpetCleaningMesquiteTX/CarpetCleaningMesquiteTX
|
CarpetCleaningMesquiteTX
| 2022-12-11T07:57:15Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:56:56Z |
---
license: other
---
Carpet Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/
(469) 213-8132
The best way to get rid of these bugs is expert steam cleaning with a truck mount. Carpet Cleaning Mesquite TX will give you the complete cleaning service you expect from truly capable operators. Our cleaners guarantee to always provide thorough, effective, high-grade carpet service and cleaning all over Mesquite TX and its district. We have excellent cleaning consultants who are available to take on cleaning jobs throughout the day in your area.
|
CarpetCleaningMckinneyTX/CarpetCleaningMckinneyTX
|
CarpetCleaningMckinneyTX
| 2022-12-11T07:53:59Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:53:36Z |
---
license: other
---
Carpet Cleaning Mckinney TX
https://carpetcleaningmckinneytx.com/
(469) 702-1202
People look for first-class services to keep their homes tidy and up to date. We are confident in what we do because we combine our years of experience with modern equipment, bringing out the ideal result. For example, our steam-clean carpet technique guarantees the oil stains on your rug are permanently washed out with little water. Your rug will have minimal drying time and be back on the floor sooner than expected.
|
FortWorthCarpetCleaning/UpholsteryCleaningFortWorthTX
|
FortWorthCarpetCleaning
| 2022-12-11T07:51:04Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:50:42Z |
---
license: other
---
Upholstery Cleaning Fort Worth TX
https://txfortworthcarpetcleaning.com/upholstery-cleaning.html
(817) 523-1237
When you sit on your upholstery, you inhale allergens, dirt, and dust that are trapped in its fibers. Therefore, if you want to ensure the safety of your upholstery, especially if you have children or pets, you need to hire upholstery cleaning experts in Fort Worth, Texas. We have the best upholstery cleaners, who will come to your house and do an excellent job of cleaning it. Understanding the various fibers of your furniture is important to our technicians because it helps them choose effective and safe cleaning methods. When you hire us, we promise to give you attentive care, and we won't start cleaning your upholstery until we make sure the products we use are safe for the kind of fabric it is made of.
|
FortWorthCarpetCleaning/CarpetCleaningFortWorthTX
|
FortWorthCarpetCleaning
| 2022-12-11T07:49:00Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:48:41Z |
---
license: other
---
Carpet Cleaning Fort Worth TX
https://txfortworthcarpetcleaning.com/carpet-cleaning.html
(817) 523-1237
Carpet Cleaning Fort Worth TX always focuses on making your home look beautiful, particularly when that beauty depends on the appearance of your carpets, furniture, rugs, tiles, and ducts. We are the business that works to make your life at home better; with our help, you can have a healthy and beautiful home. Call us if your current carpet has numerous stains and odors, looks too poor to use again, and you are considering purchasing a new one.
|
GreenCarpetCleaningGrandPrairie/GreenCarpetCleaningGrandPrairie
|
GreenCarpetCleaningGrandPrairie
| 2022-12-11T07:44:13Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:43:51Z |
---
license: other
---
Green Carpet Cleaning Grand Prairie
https://grandprairiecarpetcleaningtx.com/
(214) 301-3659
We provide carpet stain removal that uses environmentally friendly products. We lead the way when it comes to caring for the environment. All of our products are organic and are good not only for the environment but also for your pets and children.
|
seastar105/whisper-small-ko-zeroth
|
seastar105
| 2022-12-11T07:42:51Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"whisper-event",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-11T00:49:45Z |
---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
- whisper-event
datasets:
- kresnik/zeroth_korean
metrics:
- wer
model-index:
- name: Whisper Small Korean
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Zeroth Korean
type: kresnik/zeroth_korean
config: clean
split: test
args: 'split: test'
metrics:
- name: Wer
type: wer
value: 6.761029965366662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Korean
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Zeroth Korean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0899
- Wer: 6.7610
## Model description
More information needed
## Intended uses & limitations
More information needed
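A minimal inference sketch (not part of the original card; the audio path and chunking settings are assumptions):
```python
from transformers import pipeline

# chunk_length_s lets the pipeline transcribe audio longer than 30 seconds
asr = pipeline("automatic-speech-recognition", model="seastar105/whisper-small-ko-zeroth", chunk_length_s=30)
print(asr("korean_sample.wav")["text"])  # placeholder audio path
```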
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1277 | 0.72 | 1000 | 0.1489 | 12.2271 |
| 0.0379 | 1.44 | 2000 | 0.1053 | 6.7159 |
| 0.0138 | 2.16 | 3000 | 0.0918 | 6.0382 |
| 0.0141 | 2.87 | 4000 | 0.0899 | 6.7610 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0a0+d0d6b1f
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CarpetCleaningArlingtonTX/CarpetCleaningArlingtonTX
|
CarpetCleaningArlingtonTX
| 2022-12-11T07:39:36Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:39:07Z |
---
license: other
---
Carpet Cleaning Arlington TX
https://carpetcleaning-arlington-tx.com/
(817) 381-5072
At Rug Cleaning Plano in TX we also have a truck-mounted carpet cleaning system. These mobile vehicles carry a powerhouse of equipment and always have it on hand, so they can finish any job properly. Whether it is a small home, a large house, or a huge industrial complex, no task is ever too big or too tough.
|
CarpetCleaningPlanoTX/AirDuctCleaningPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:33:31Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:33:09Z |
---
license: other
---
Air Duct Cleaning Plano TX
https://carpetcleaningplanotx.com/air-duct-cleaning.html
(469) 444-1903
Studies and health research have long shown that airborne irritants such as mold, pollen, and dust are bad for your health. They seriously impact your ability to breathe and bring on allergies and other respiratory issues, which can occasionally trigger attacks that are fatal. What is the most important way to keep the air in your home or place of business clean? Air duct cleaning.
|
CarpetCleaningPlanoTX/TileandGroutCleaningPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:32:44Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:32:11Z |
---
license: other
---
Tile and Grout Cleaning Plano TX
https://carpetcleaningplanotx.com/tile-and-grout-cleaning.html
(469) 444-1903
Cleaning tile grout used to take all day on your knees. But no longer. Our cleaning method is sophisticated yet gentle. Even the most complex and time-sensitive orders are handled quickly and easily by us.
|
CarpetCleaningPlanoTX/RugCleaningPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:30:50Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:30:22Z |
---
license: other
---
Rug Cleaning Plano TX
https://carpetcleaningplanotx.com/rug-cleaning.html
(469) 444-1903
Don't put your carpets, rugs, and other cleaning needs at risk. In particular, avoid immersing them in hazardous and wasteful chemical processes. At Carpet Cleaning Plano, Texas, we use cutting-edge green rug cleaning services that are unmatched in Texas. Our advanced washing technology makes rug cleaning safe and good for the environment; it will not harm your property or put your friends, family, or pets in danger.
|
muhtasham/medium-mlm-tweet-target-tweet
|
muhtasham
| 2022-12-11T07:30:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T07:25:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: medium-mlm-tweet-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7593582887700535
- name: F1
type: f1
value: 0.7637254221785755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-tweet-target-tweet
This model is a fine-tuned version of [muhtasham/medium-mlm-tweet](https://huggingface.co/muhtasham/medium-mlm-tweet) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9066
- Accuracy: 0.7594
- F1: 0.7637
## Model description
More information needed
## Intended uses & limitations
More information needed
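A minimal usage sketch (not part of the original card; the example tweet is an assumption):
```python
from transformers import pipeline

# Emotion classification head fine-tuned on tweet_eval
classifier = pipeline("text-classification", model="muhtasham/medium-mlm-tweet-target-tweet")
print(classifier("I can't believe we finally won the match!"))
```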
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4702 | 4.9 | 500 | 0.8711 | 0.7540 | 0.7532 |
| 0.0629 | 9.8 | 1000 | 1.2918 | 0.7701 | 0.7668 |
| 0.0227 | 14.71 | 1500 | 1.4801 | 0.7727 | 0.7696 |
| 0.0181 | 19.61 | 2000 | 1.5118 | 0.7888 | 0.7870 |
| 0.0114 | 24.51 | 2500 | 1.6747 | 0.7754 | 0.7745 |
| 0.0141 | 29.41 | 3000 | 1.8765 | 0.7674 | 0.7628 |
| 0.0177 | 34.31 | 3500 | 1.9066 | 0.7594 | 0.7637 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CarpetCleaningPlanoTX/CarpetCleaningPlanoTX
|
CarpetCleaningPlanoTX
| 2022-12-11T07:28:50Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:28:23Z |
---
license: other
---
Carpet Cleaning Plano TX
https://carpetcleaningplanotx.com/
(469) 444-1903
At Rug Cleaning Plano in TX we also have a truck-mounted carpet cleaning system. These mobile vehicles carry a powerhouse of equipment and always have it on hand, so they can finish any job properly. Whether it is a small home, a large house, or a huge industrial complex, no task is ever too big or too tough.
|
MaviBogaz/ppo-LunarLander-v2
|
MaviBogaz
| 2022-12-11T07:27:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T07:26:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.84 +/- 20.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("MaviBogaz/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CandyCarpetCleaningIrving/AirVentCleaningIrvingTX
|
CandyCarpetCleaningIrving
| 2022-12-11T07:20:41Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:20:17Z |
---
license: other
---
Air Vent Cleaning Irving TX
https://carpetcleaninginirving.com/air-vent.html
(214) 744-3341
Our focus on client satisfaction is one of the ways we outperform our rivals. Every time we provide services to our customers, we take the time to do it right. We plan our appointments so that our cleaners never have to rush to serve you because a line of customers is waiting for them.
|
CandyCarpetCleaningIrving/UpholsteryCleaningIrvingTX
|
CandyCarpetCleaningIrving
| 2022-12-11T07:16:00Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:15:38Z |
---
license: other
---
Upholstery Cleaning Irving TX
https://carpetcleaninginirving.com/upholstery.html
(214) 744-3341
Our Furniture Steam Cleaners in Irving, Texas, are well-prepared and highly skilled to assist you in cleaning your upholstery and deliver the kind of service you would expect from a market leader.
|
bjelkenhed/whisper-large-sv
|
bjelkenhed
| 2022-12-11T07:13:11Z | 7 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"sv",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-08T11:48:33Z |
---
language:
- sv
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large Swedish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 sv-SE
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: test
args: sv-SE
metrics:
- name: Wer
type: wer
value: 9.220639613007256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Swedish
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2), trained on NST Swedish ASR and evaluated on the Common Voice 11 test set.
It achieves the following results on the evaluation set:
- Loss: 0.2337
- Wer: 9.2206
## Model description
openai/whisper-large-v2 had a WER of 10.6 on the Common Voice 9 test set.
## Intended uses & limitations
More information needed
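A minimal inference sketch (not part of the original card; the audio path and chunking settings are assumptions):
```python
from transformers import pipeline

# chunk_length_s lets the pipeline transcribe audio longer than 30 seconds
asr = pipeline("automatic-speech-recognition", model="bjelkenhed/whisper-large-sv", chunk_length_s=30)
print(asr("svenska.wav")["text"])  # placeholder audio path
```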
## Training and evaluation data
The training dataset contains 276,000 examples; with a batch size of 64 and 5,000 training steps, training covers 1.14 epochs.
More training data or more epochs would probably improve the result even further.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0695 | 0.2 | 1000 | 0.2695 | 12.4671 |
| 0.0524 | 0.4 | 2000 | 0.2659 | 11.6367 |
| 0.046 | 0.6 | 3000 | 0.2402 | 10.6557 |
| 0.0342 | 0.8 | 4000 | 0.2339 | 10.1774 |
| 0.0224 | 1.14 | 5000 | 0.2337 | 9.2206 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
CandyCarpetCleaningIrving/CandyCarpetCleaningIrving
|
CandyCarpetCleaningIrving
| 2022-12-11T07:11:02Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:10:41Z |
---
license: other
---
Candy Carpet Cleaning Irving
https://carpetcleaninginirving.com/
(214) 744-3341
We use powerful cleaning procedures and exceptionally modern, advanced equipment to remove every stain from your carpet while protecting the colors and the fiber from any damage. We also use eco-friendly cleaning products that are 100% safe for your children and pets. At the end of our cleaning cycle we apply a protective coating that will shield the rug from any future stains.
|
muhtasham/small-mlm-imdb-target-tweet
|
muhtasham
| 2022-12-11T07:07:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T07:03:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: small-mlm-imdb-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7406417112299465
- name: F1
type: f1
value: 0.7432065579579084
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/small-mlm-imdb](https://huggingface.co/muhtasham/small-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2131
- Accuracy: 0.7406
- F1: 0.7432
## Model description
More information needed
## Intended uses & limitations
More information needed
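A minimal usage sketch (not part of the original card; the example tweet is an assumption):
```python
from transformers import pipeline

# Emotion classification head fine-tuned on tweet_eval
classifier = pipeline("text-classification", model="muhtasham/small-mlm-imdb-target-tweet")
print(classifier("This is the best news I've heard all week!"))
```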
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5821 | 4.9 | 500 | 0.8006 | 0.7540 | 0.7514 |
| 0.1013 | 9.8 | 1000 | 1.1662 | 0.7567 | 0.7562 |
| 0.0236 | 14.71 | 1500 | 1.5152 | 0.7540 | 0.7518 |
| 0.0125 | 19.61 | 2000 | 1.6963 | 0.7620 | 0.7581 |
| 0.0068 | 24.51 | 2500 | 1.9273 | 0.7380 | 0.7383 |
| 0.0042 | 29.41 | 3000 | 2.0042 | 0.7487 | 0.7500 |
| 0.0041 | 34.31 | 3500 | 2.2131 | 0.7406 | 0.7432 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CleaningCarpetDallas/WaterDamageRestorationDallasTX
|
CleaningCarpetDallas
| 2022-12-11T07:05:33Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:05:13Z |
---
license: other
---
http://cleaningcarpetdallas.com/water-damage-restoration.html
(972) 643-8799
Another service you can expect from Cleaning Carpet Dallas TX is water damage restoration. Do you live in a Texas building that has been flooded by a natural disaster? Please inform our staff if you have residential or commercial architecture that has been damaged by a hurricane or flood.
|
muhtasham/mini-mlm-imdb-target-tweet
|
muhtasham
| 2022-12-11T07:03:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T07:00:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: mini-mlm-imdb-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.767379679144385
- name: F1
type: f1
value: 0.7668830990510893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/mini-mlm-imdb](https://huggingface.co/muhtasham/mini-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3042
- Accuracy: 0.7674
- F1: 0.7669
## Model description
More information needed
## Intended uses & limitations
More information needed
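A minimal usage sketch (not part of the original card; the example tweet is an assumption):
```python
from transformers import pipeline

# Emotion classification head fine-tuned on tweet_eval
classifier = pipeline("text-classification", model="muhtasham/mini-mlm-imdb-target-tweet")
print(classifier("Why does everything keep going wrong today?"))
```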
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8543 | 4.9 | 500 | 0.6920 | 0.7674 | 0.7571 |
| 0.3797 | 9.8 | 1000 | 0.7231 | 0.7727 | 0.7709 |
| 0.1668 | 14.71 | 1500 | 0.9171 | 0.7594 | 0.7583 |
| 0.068 | 19.61 | 2000 | 1.1558 | 0.7647 | 0.7642 |
| 0.0409 | 24.51 | 2500 | 1.3042 | 0.7674 | 0.7669 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
p4b/whisper-small-ko-fl-v2
|
p4b
| 2022-12-11T07:01:08Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ko",
"dataset:fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-10T16:37:47Z |
---
language:
- ko
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: Whisper Small Ko(FLUERS) - by p4b
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLUERS Korean
type: fleurs
config: ko_kr
split: validation
args: ko_kr
metrics:
- name: Wer
type: wer
value: 148.1005085252767
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ko(FLUERS) - by p4b
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the FLUERS Korean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4512
- Wer: 148.1005
## Model description
More information needed
## Intended uses & limitations
More information needed
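A minimal inference sketch (not part of the original card; the audio path and chunking settings are assumptions):
```python
from transformers import pipeline

# chunk_length_s lets the pipeline transcribe audio longer than 30 seconds
asr = pipeline("automatic-speech-recognition", model="p4b/whisper-small-ko-fl-v2", chunk_length_s=30)
print(asr("ko_sample.wav")["text"])  # placeholder audio path
```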
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6003 | 32.0 | 800 | 0.5913 | 167.2749 |
| 0.459 | 64.0 | 1600 | 0.4978 | 170.9841 |
| 0.4035 | 96.0 | 2400 | 0.4653 | 168.5911 |
| 0.3812 | 128.0 | 3200 | 0.4531 | 149.4765 |
| 0.3766 | 160.0 | 4000 | 0.4512 | 148.1005 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.14.0.dev20221208+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
CleaningCarpetDallas/TileGroutCleaningDallasTX
|
CleaningCarpetDallas
| 2022-12-11T07:01:05Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T07:00:37Z |
---
license: other
---
http://cleaningcarpetdallas.com/tile-grout-cleaning.html
(972) 643-8799
Have you recently been bothered by filthy grout and tile? It's possible that you are finally ready to make some changes to your tile and grout because you are extremely dissatisfied with its current appearance. Call Cleaning Carpet Dallas TX right now to learn more about how we can make this much better for you. We have given our phone reps a lot of information, some of which you are about to read.
|
muhtasham/tiny-mlm-imdb-target-tweet
|
muhtasham
| 2022-12-11T07:00:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T06:56:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: tiny-mlm-imdb-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.6925133689839572
- name: F1
type: f1
value: 0.7003562110650444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/tiny-mlm-imdb](https://huggingface.co/muhtasham/tiny-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5550
- Accuracy: 0.6925
- F1: 0.7004
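A minimal inference sketch with the transformers pipeline (depending on the saved config, labels may appear as generic LABEL_i rather than emotion names):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for tweet emotion classification
classifier = pipeline("text-classification", model="muhtasham/tiny-mlm-imdb-target-tweet")

print(classifier("I can't wait for the weekend!"))  # hypothetical input tweet
```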
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.159 | 4.9 | 500 | 0.9977 | 0.6364 | 0.6013 |
| 0.7514 | 9.8 | 1000 | 0.8549 | 0.7112 | 0.7026 |
| 0.5011 | 14.71 | 1500 | 0.8516 | 0.7032 | 0.6962 |
| 0.34 | 19.61 | 2000 | 0.9019 | 0.7059 | 0.7030 |
| 0.2258 | 24.51 | 2500 | 0.9722 | 0.7166 | 0.7164 |
| 0.1607 | 29.41 | 3000 | 1.0724 | 0.6979 | 0.6999 |
| 0.1127 | 34.31 | 3500 | 1.1435 | 0.7193 | 0.7169 |
| 0.0791 | 39.22 | 4000 | 1.2807 | 0.7059 | 0.7069 |
| 0.0568 | 44.12 | 4500 | 1.3849 | 0.7139 | 0.7159 |
| 0.0478 | 49.02 | 5000 | 1.5550 | 0.6925 | 0.7004 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Shiry/Whisper_hebrew_medium
|
Shiry
| 2022-12-11T07:00:26Z | 35 | 1 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"he",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-03T15:11:25Z |
---
language:
- he
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Medium Hebrew
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs he_il
type: google/fleurs
config: he_il
split: test
args: he_il
metrics:
- name: Wer
type: wer
value: 34
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Hebrew
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the google/fleurs he_il dataset.
It achieves the following results on the evaluation set:
- Wer: 34
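The WER above measures the edit distance between predicted and reference transcripts. A sketch of computing it with the `evaluate` library (the strings here are hypothetical; a real evaluation would use the fleurs he_il test split):
```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical prediction/reference pair; WER is reported as a percentage here
predictions = ["שלום עולם"]
references = ["שלום לעולם"]
print(100 * wer_metric.compute(predictions=predictions, references=references))
```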
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
CleaningCarpetDallas/UpholsteryCleaningDallasTX
|
CleaningCarpetDallas
| 2022-12-11T06:58:59Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-12-11T06:58:36Z |
---
license: other
---
http://cleaningcarpetdallas.com/upholstery-cleaning.html
(972) 643-8799
Spots and stains on your microfiber sofa, couch, or loveseat can seriously ruin the appearance of your living room. You won't stand out with your gourmet and designer rugs, grandfather clocks, and artwork, and you'll also make your friends laugh.
|
muhtasham/base-vanilla-target-tweet
|
muhtasham
| 2022-12-11T06:56:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T06:46:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: base-vanilla-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7780748663101604
- name: F1
type: f1
value: 0.7772664883136655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8380
- Accuracy: 0.7781
- F1: 0.7773
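Checkpoints saved by the Trainer often expose generic LABEL_i names; the emotion label names can be recovered from the dataset itself, as in this sketch:
```python
from datasets import load_dataset

# The class-index-to-name mapping lives in the dataset's features
features = load_dataset("tweet_eval", "emotion", split="train").features
print(features["label"].names)
```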
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3831 | 4.9 | 500 | 0.9800 | 0.7807 | 0.7785 |
| 0.0414 | 9.8 | 1000 | 1.4175 | 0.7754 | 0.7765 |
| 0.015 | 14.71 | 1500 | 1.6411 | 0.7754 | 0.7708 |
| 0.0166 | 19.61 | 2000 | 1.5930 | 0.7941 | 0.7938 |
| 0.0175 | 24.51 | 2500 | 1.3934 | 0.7888 | 0.7852 |
| 0.0191 | 29.41 | 3000 | 1.9407 | 0.7647 | 0.7658 |
| 0.0137 | 34.31 | 3500 | 1.8380 | 0.7781 | 0.7773 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/medium-vanilla-target-tweet
|
muhtasham
| 2022-12-11T06:46:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T06:40:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: medium-vanilla-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7754010695187166
- name: F1
type: f1
value: 0.7745943137047872
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-8_H-512_A-8](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9845
- Accuracy: 0.7754
- F1: 0.7746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
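A sketch of TrainingArguments matching the settings above (the output directory is an assumption; anything not listed is left at its Transformers default):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="medium-vanilla-target-tweet",  # assumed output path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
)
```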
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4989 | 4.9 | 500 | 0.8358 | 0.7620 | 0.7589 |
| 0.0702 | 9.8 | 1000 | 1.3142 | 0.7674 | 0.7683 |
| 0.0233 | 14.71 | 1500 | 1.4760 | 0.7647 | 0.7650 |
| 0.015 | 19.61 | 2000 | 1.5151 | 0.7834 | 0.7841 |
| 0.0062 | 24.51 | 2500 | 1.6094 | 0.7968 | 0.7947 |
| 0.0113 | 29.41 | 3000 | 1.9273 | 0.7540 | 0.7537 |
| 0.0157 | 34.31 | 3500 | 2.0073 | 0.7433 | 0.7460 |
| 0.0124 | 39.22 | 4000 | 1.9845 | 0.7754 | 0.7746 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
aungmyatv8/ppo-LunarLander-v2
|
aungmyatv8
| 2022-12-11T05:23:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-11T05:04:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.93 +/- 21.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub("aungmyatv8/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
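The mean reward above can be reproduced (approximately) by rolling out the loaded policy; a sketch assuming a local Gym installation with Box2D:
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the policy over a handful of episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```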
|
sagawa/PubChem-10m-t5-v2
|
sagawa
| 2022-12-11T05:16:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"dataset:sagawa/pubchem-10m-canonicalized",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-06T01:13:43Z |
---
license: mit
datasets:
- sagawa/pubchem-10m-canonicalized
metrics:
- accuracy
model-index:
- name: PubChem-10m-t5-v2
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: sagawa/pubchem-10m-canonicalized
type: sagawa/pubchem-10m-canonicalized
metrics:
- name: Accuracy
type: accuracy
value: 0.9189779162406921
---
# PubChem-10m-t5-v2
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/pubchem-10m-canonicalized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9190
## Model description
We trained T5 on SMILES strings from PubChem using the masked-language modeling (MLM) task. PubChem-10m-t5-v2 differs from PubChem-10m-t5 in that it uses a character-level tokenizer; both were trained on the same PubChem data.
## Intended uses & limitations
This model can be used for the prediction of molecules' properties, reactions, or interactions with proteins by changing the way of finetuning.
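A sketch of masked SMILES infilling, assuming the character-level tokenizer exposes the standard T5 sentinel tokens (the SMILES string is a hypothetical example):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("sagawa/PubChem-10m-t5-v2")
model = T5ForConditionalGeneration.from_pretrained("sagawa/PubChem-10m-t5-v2")

# Ask the model to fill in the masked span of a SMILES string
inputs = tokenizer("CC(=O)<extra_id_0>C1=CC=CC=C1", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```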
## Training and evaluation data
We downloaded [PubChem data](https://drive.google.com/file/d/1ygYs8dy1-vxD1Vx6Ux7ftrXwZctFjpV3/view), canonicalized it using RDKit, and dropped duplicates, as sketched below. The resulting dataset contains 9,999,960 SMILES strings, randomly split into train and validation at a 10:1 ratio.
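Canonicalization with RDKit amounts to a round trip through its Mol object; a minimal sketch:
```python
from rdkit import Chem

def canonicalize(smiles: str) -> str:
    # Parse and re-emit to obtain RDKit's canonical SMILES form
    return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))

print(canonicalize("C1=CC=CC=C1O"))  # phenol, re-emitted in canonical form
```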
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-03
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:---------------:|:--------:|
| 0.2592 | 100000 | 0.2784 | 0.8997 |
| 0.2790 | 200000 | 0.2468 | 0.9095 |
| 0.2278 | 300000 | 0.2256 | 0.9162 |
|
muhtasham/small-mlm-tweet-target-imdb
|
muhtasham
| 2022-12-11T05:07:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-11T04:57:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: small-mlm-tweet-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88784
- name: F1
type: f1
value: 0.9405881854394441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-tweet-target-imdb
This model is a fine-tuned version of [muhtasham/small-mlm-tweet](https://huggingface.co/muhtasham/small-mlm-tweet) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4422
- Accuracy: 0.8878
- F1: 0.9406
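A quick sanity check on a real IMDB review, combining the datasets and transformers libraries (truncation guards against reviews longer than the model's context window):
```python
from datasets import load_dataset
from transformers import pipeline

# Classify the first review of the IMDB test split
review = load_dataset("imdb", split="test")[0]["text"]
classifier = pipeline("text-classification", model="muhtasham/small-mlm-tweet-target-imdb")
print(classifier(review, truncation=True))
```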
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3515 | 0.64 | 500 | 0.1494 | 0.9388 | 0.9684 |
| 0.2452 | 1.28 | 1000 | 0.1439 | 0.9450 | 0.9717 |
| 0.1956 | 1.92 | 1500 | 0.2199 | 0.9156 | 0.9559 |
| 0.1398 | 2.56 | 2000 | 0.4328 | 0.8760 | 0.9339 |
| 0.1102 | 3.2 | 2500 | 0.4422 | 0.8878 | 0.9406 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|