modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
jonatasgrosman/exp_w2v2t_fa_vp-fr_s282
|
jonatasgrosman
| 2022-07-09T23:00:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T23:00:08Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_vp-fr_s282
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
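None of these experiment cards ship an inference snippet, so here is a minimal sketch using the HuggingSound tool mentioned above (assuming the library's documented `SpeechRecognitionModel.transcribe` API; the audio path is a placeholder):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_vp-fr_s282")
# transcribe() takes a list of audio file paths; "speech.wav" is a placeholder
transcriptions = model.transcribe(["speech.wav"])
print(transcriptions[0]["transcription"])
```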
|
jonatasgrosman/exp_w2v2t_fa_unispeech-ml_s408
|
jonatasgrosman
| 2022-07-09T22:54:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T22:53:45Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_unispeech-ml_s408
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_unispeech-ml_s195
|
jonatasgrosman
| 2022-07-09T22:50:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T22:50:27Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_unispeech-ml_s195
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_wavlm_s545
|
jonatasgrosman
| 2022-07-09T22:47:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T22:47:17Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_wavlm_s545
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_wavlm_s527
|
jonatasgrosman
| 2022-07-09T22:44:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T22:43:55Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_wavlm_s527
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
meln1k/MLAgents-PushBlock
|
meln1k
| 2022-07-09T21:57:09Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-PushBlock",
"region:us"
] |
reinforcement-learning
| 2022-07-09T21:57:03Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-PushBlock
library_name: ml-agents
---
# **ppo** Agent playing **PushBlock**
This is a trained model of a **ppo** agent playing **PushBlock** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub; see the documentation link above.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
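Here `<your_configuration_file_path.yaml>` is the trainer configuration used for the original run, and `<run_id>` identifies the checkpoint directory to resume from.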
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-PushBlock
2. Write your model_id: meln1k/MLAgents-PushBlock
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
meln1k/MLAgents-Worm
|
meln1k
| 2022-07-09T21:21:33Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
] |
reinforcement-learning
| 2022-07-09T21:21:27Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub; see the documentation link above.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Write your model_id: meln1k/MLAgents-Worm
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
jonaskoenig/xtremedistil-l6-h256-uncased-future-time-references
|
jonaskoenig
| 2022-07-09T21:03:37Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-09T20:17:23Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: jonaskoenig/xtremedistil-l6-h256-uncased-future-time-references
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jonaskoenig/xtremedistil-l6-h256-uncased-future-time-references
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0126
- Train Sparse Categorical Accuracy: 0.9961
- Validation Loss: 0.0148
- Validation Sparse Categorical Accuracy: 0.9955
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.0541 | 0.9841 | 0.0250 | 0.9929 | 0 |
| 0.0223 | 0.9936 | 0.0186 | 0.9947 | 1 |
| 0.0158 | 0.9953 | 0.0161 | 0.9953 | 2 |
| 0.0126 | 0.9961 | 0.0148 | 0.9955 | 3 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
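The card does not include an inference example; below is a minimal sketch using the TF/Keras classes (an assumption based on the `tf` tag; the label names are not documented, so only the raw class index is printed, and the input sentence is illustrative):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

name = "jonaskoenig/xtremedistil-l6-h256-uncased-future-time-references"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I will visit Paris next summer.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class index; label mapping is undocumented
```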
|
jonatasgrosman/exp_w2v2t_fa_no-pretraining_s28
|
jonatasgrosman
| 2022-07-09T21:01:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T21:00:43Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_no-pretraining_s28
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_no-pretraining_s117
|
jonatasgrosman
| 2022-07-09T20:53:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T20:53:14Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_no-pretraining_s117
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_vp-sv_s689
|
jonatasgrosman
| 2022-07-09T20:49:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T20:48:42Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_vp-sv_s689
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_vp-sv_s738
|
jonatasgrosman
| 2022-07-09T20:45:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T20:44:50Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_vp-sv_s738
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_vp-sv_s749
|
jonatasgrosman
| 2022-07-09T20:40:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T20:39:48Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_vp-sv_s749
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_hubert_s801
|
jonatasgrosman
| 2022-07-09T20:29:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T20:29:02Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_hubert_s801
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_unispeech_s108
|
jonatasgrosman
| 2022-07-09T20:22:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T20:21:53Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_unispeech_s108
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_unispeech_s211
|
jonatasgrosman
| 2022-07-09T20:19:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T20:18:16Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_unispeech_s211
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
dgrinwald/swin-tiny-patch4-window7-224-finetuned-eurosat
|
dgrinwald
| 2022-07-09T20:17:28Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-09T09:23:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8464730290456431
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3266
- Accuracy: 0.8465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2941 | 1.0 | 17 | 1.1717 | 0.4689 |
| 1.0655 | 2.0 | 34 | 0.9397 | 0.5560 |
| 0.8008 | 3.0 | 51 | 0.6153 | 0.7303 |
| 0.7204 | 4.0 | 68 | 0.5665 | 0.7427 |
| 0.6931 | 5.0 | 85 | 0.4670 | 0.7801 |
| 0.6277 | 6.0 | 102 | 0.4328 | 0.8465 |
| 0.5689 | 7.0 | 119 | 0.4078 | 0.8174 |
| 0.6103 | 8.0 | 136 | 0.4060 | 0.8091 |
| 0.5501 | 9.0 | 153 | 0.4842 | 0.7884 |
| 0.6018 | 10.0 | 170 | 0.3780 | 0.8423 |
| 0.5668 | 11.0 | 187 | 0.3551 | 0.8631 |
| 0.5192 | 12.0 | 204 | 0.4514 | 0.8216 |
| 0.5133 | 13.0 | 221 | 0.3598 | 0.8174 |
| 0.5753 | 14.0 | 238 | 0.4172 | 0.8091 |
| 0.4833 | 15.0 | 255 | 0.4685 | 0.8050 |
| 0.5546 | 16.0 | 272 | 0.4474 | 0.7842 |
| 0.5179 | 17.0 | 289 | 0.4570 | 0.7884 |
| 0.5017 | 18.0 | 306 | 0.4218 | 0.8050 |
| 0.4808 | 19.0 | 323 | 0.4094 | 0.8050 |
| 0.4708 | 20.0 | 340 | 0.4693 | 0.7759 |
| 0.5033 | 21.0 | 357 | 0.3141 | 0.8672 |
| 0.4859 | 22.0 | 374 | 0.3687 | 0.8257 |
| 0.516 | 23.0 | 391 | 0.3819 | 0.8216 |
| 0.4822 | 24.0 | 408 | 0.3391 | 0.8506 |
| 0.4748 | 25.0 | 425 | 0.3281 | 0.8506 |
| 0.4914 | 26.0 | 442 | 0.3308 | 0.8631 |
| 0.4354 | 27.0 | 459 | 0.3859 | 0.8133 |
| 0.4297 | 28.0 | 476 | 0.3761 | 0.8133 |
| 0.4747 | 29.0 | 493 | 0.2914 | 0.8672 |
| 0.4395 | 30.0 | 510 | 0.3025 | 0.8548 |
| 0.4279 | 31.0 | 527 | 0.3314 | 0.8506 |
| 0.4327 | 32.0 | 544 | 0.4626 | 0.7842 |
| 0.446 | 33.0 | 561 | 0.3499 | 0.8382 |
| 0.4011 | 34.0 | 578 | 0.3408 | 0.8465 |
| 0.4418 | 35.0 | 595 | 0.3159 | 0.8589 |
| 0.484 | 36.0 | 612 | 0.3130 | 0.8548 |
| 0.4119 | 37.0 | 629 | 0.2899 | 0.8589 |
| 0.4453 | 38.0 | 646 | 0.3200 | 0.8465 |
| 0.4074 | 39.0 | 663 | 0.3493 | 0.8465 |
| 0.3937 | 40.0 | 680 | 0.3003 | 0.8672 |
| 0.4222 | 41.0 | 697 | 0.3547 | 0.8299 |
| 0.3922 | 42.0 | 714 | 0.3206 | 0.8589 |
| 0.3973 | 43.0 | 731 | 0.4074 | 0.8133 |
| 0.4118 | 44.0 | 748 | 0.3147 | 0.8589 |
| 0.4088 | 45.0 | 765 | 0.3393 | 0.8506 |
| 0.3635 | 46.0 | 782 | 0.3584 | 0.8257 |
| 0.403 | 47.0 | 799 | 0.3240 | 0.8506 |
| 0.3943 | 48.0 | 816 | 0.3536 | 0.8216 |
| 0.4085 | 49.0 | 833 | 0.3270 | 0.8465 |
| 0.3865 | 50.0 | 850 | 0.3266 | 0.8465 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
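As a usage note (not part of the original card), the checkpoint can be loaded with the standard image-classification pipeline; the image path is a placeholder and the labels come from the unspecified imagefolder dataset:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="dgrinwald/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
preds = classifier("example_tile.png")  # placeholder path
print(preds[:3])  # top predictions with scores
```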
|
jonatasgrosman/exp_w2v2t_fa_xlsr-53_s204
|
jonatasgrosman
| 2022-07-09T20:15:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T20:14:39Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_xlsr-53_s204
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
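As an alternative to HuggingSound, here is a sketch using the plain transformers pipeline, with librosa resampling the input to the 16 kHz the model expects (the file name is a placeholder):
```python
import librosa
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_fa_xlsr-53_s204")
speech, _ = librosa.load("speech.wav", sr=16_000)  # load and resample to 16 kHz
print(asr(speech)["text"])
```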
|
jonatasgrosman/exp_w2v2t_fa_wav2vec2_s873
|
jonatasgrosman
| 2022-07-09T19:49:50Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T19:49:26Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_wav2vec2_s873
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_wav2vec2_s168
|
jonatasgrosman
| 2022-07-09T19:45:42Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T19:45:17Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_wav2vec2_s168
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_wav2vec2_s321
|
jonatasgrosman
| 2022-07-09T19:41:14Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T19:40:51Z |
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_wav2vec2_s321
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_vp-it_s817
|
jonatasgrosman
| 2022-07-09T19:37:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T19:36:42Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-it_s817
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_vp-it_s975
|
jonatasgrosman
| 2022-07-09T19:29:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T19:28:43Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-it_s975
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
NAACL2022/spider-nq-ctx-encoder
|
NAACL2022
| 2022-07-09T19:20:32Z | 4 | 4 |
transformers
|
[
"transformers",
"pytorch",
"dpr",
"arxiv:2112.07708",
"endpoints_compatible",
"region:us"
] | null | 2022-07-09T18:59:17Z |
# Spider-NQ: Context Encoder
This is the context encoder of the model fine-tuned on Natural Questions (and initialized from Spider) discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and passage encoder, so the same model should be applied for both.
**Note!** We format the passages similarly to DPR, i.e. the title and the text are separated by a `[SEP]` token, but the
token type ids are all 0s.
An example usage:
```python
from transformers import AutoTokenizer, DPRContextEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-nq-ctx-encoder")
model = DPRContextEncoder.from_pretrained("NAACL2022/spider-nq-ctx-encoder")
title = "Sauron"
context = "Sauron is the title character and main antagonist of J. R. R. Tolkien's \"The Lord of the Rings\"."
input_dict = tokenizer(title, context, return_tensors="pt")
del input_dict["token_type_ids"]
outputs = model(**input_dict)
```
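The passage embedding can then be read from `outputs.pooler_output`, one vector per input passage.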
|
NAACL2022/spider-trivia-ctx-encoder
|
NAACL2022
| 2022-07-09T19:19:59Z | 4 | 4 |
transformers
|
[
"transformers",
"pytorch",
"dpr",
"arxiv:2112.07708",
"endpoints_compatible",
"region:us"
] | null | 2022-07-09T19:04:51Z |
# Spider-TriviaQA: Context Encoder
This is the context encoder of the model fine-tuned on TriviaQA (and initialized from Spider) discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and passage encoder, so the same model should be applied for both.
**Note!** We format the passages similarly to DPR, i.e. the title and the text are separated by a `[SEP]` token, but the
token type ids are all 0s.
An example usage:
```python
from transformers import AutoTokenizer, DPRContextEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-trivia-ctx-encoder")
model = DPRContextEncoder.from_pretrained("NAACL2022/spider-trivia-ctx-encoder")
title = "Sauron"
context = "Sauron is the title character and main antagonist of J. R. R. Tolkien's \"The Lord of the Rings\"."
input_dict = tokenizer(title, context, return_tensors="pt")
del input_dict["token_type_ids"]
outputs = model(**input_dict)
```
|
jonatasgrosman/exp_w2v2t_sv-se_r-wav2vec2_s160
|
jonatasgrosman
| 2022-07-09T19:17:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T19:17:07Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_r-wav2vec2_s160
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
NAACL2022/spider-trivia-question-encoder
|
NAACL2022
| 2022-07-09T19:14:40Z | 5 | 4 |
transformers
|
[
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"arxiv:2112.07708",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-07-09T19:06:50Z |
# Spider-TriviaQA: Question Encoder
This is the question encoder of the model fine-tuned on TriviaQA (and initialized from Spider) discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and passage encoder, so the same model should be applied for both.
**Note!** We format the passages similarly to DPR, i.e. the title and the text are separated by a `[SEP]` token, but the
token type ids are all 0s.
An example usage:
```python
from transformers import AutoTokenizer, DPRQuestionEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-trivia-question-encoder")
model = DPRQuestionEncoder.from_pretrained("NAACL2022/spider-trivia-question-encoder")
question = "Who is the villain in lord of the rings"
input_dict = tokenizer(question, return_tensors="pt")
del input_dict["token_type_ids"]
outputs = model(**input_dict)
```
|
jonatasgrosman/exp_w2v2t_sv-se_xls-r_s926
|
jonatasgrosman
| 2022-07-09T19:05:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T19:05:33Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_xls-r_s926
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
abecode/t5-small-finetuned-xsum
|
abecode
| 2022-07-09T18:56:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-08T22:49:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.3177
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4783
- Rouge1: 28.3177
- Rouge2: 7.7064
- Rougel: 22.2212
- Rougelsum: 22.2193
- Gen Len: 18.8307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7172 | 1.0 | 12753 | 2.4783 | 28.3177 | 7.7064 | 22.2212 | 22.2193 | 18.8307 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
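A usage sketch (not part of the original card): since XSum targets single-sentence summaries, a small `max_length` is reasonable; the article text is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="abecode/t5-small-finetuned-xsum")
article = "Full text of a news article goes here..."  # placeholder input
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```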
|
jonatasgrosman/exp_w2v2t_sv-se_unispeech-sat_s515
|
jonatasgrosman
| 2022-07-09T18:45:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T18:45:05Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_unispeech-sat_s515
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_vp-nl_s842
|
jonatasgrosman
| 2022-07-09T18:28:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T18:28:21Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-nl_s842
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
PrimeQA/tapas-based-tableqa-wikisql-lookup
|
PrimeQA
| 2022-07-09T18:28:41Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tapas",
"table-question-answering",
"arxiv:2004.02349",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-07-05T16:45:00Z |
---
license: apache-2.0
---
# Model description
This is a [tapas-base](https://huggingface.co/google/tapas-base) model trained on the lookup queries of the [wikisql](https://huggingface.co/datasets/wikisql) dataset. It was trained to take tables and questions as input and extract answers from the table.
# Overview
*Language model*: tapas-base \
*Language*: English \
*Task*: Table Question Answering \
*Data*: WikiSQL
# Intended uses and limitations
One can use this model to predict answers to natural language queries given a table. Biases associated with the pre-training of tapas-base and the wikisql dataset may be present.
## Usage
One can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework as in this example [notebook](https://github.com/primeqa/primeqa/blob/tableqa_tapas/notebooks/tableqa/tableqa_inference.ipynb).
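For orientation, here is a hedged sketch of querying the checkpoint with plain transformers instead of PrimeQA (that this checkpoint loads cleanly via `TapasForQuestionAnswering` is an assumption; the table and query are illustrative):
```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

name = "PrimeQA/tapas-based-tableqa-wikisql-lookup"
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

# TapasTokenizer requires every table cell to be a string
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2100000", "3700000"]})
inputs = tokenizer(table=table, queries=["What is the population of Berlin?"], return_tensors="pt")
outputs = model(**inputs)

# lookup model: only cell-selection logits, no aggregation head
(coords,) = tokenizer.convert_logits_to_predictions(inputs, outputs.logits.detach())
print([table.iat[c] for c in coords[0]])  # predicted answer cells
```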
## Citation
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
|
jonatasgrosman/exp_w2v2t_sv-se_vp-nl_s764
|
jonatasgrosman
| 2022-07-09T18:25:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T18:24:42Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-nl_s764
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_vp-fr_s237
|
jonatasgrosman
| 2022-07-09T18:08:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T18:08:01Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-fr_s237
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_vp-fr_s387
|
jonatasgrosman
| 2022-07-09T18:04:59Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T18:04:34Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-fr_s387
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_unispeech-ml_s664
|
jonatasgrosman
| 2022-07-09T17:57:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T17:56:58Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_unispeech-ml_s664
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_unispeech-ml_s35
|
jonatasgrosman
| 2022-07-09T17:50:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T17:50:09Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_unispeech-ml_s35
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_wavlm_s607
|
jonatasgrosman
| 2022-07-09T17:47:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T17:46:54Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_wavlm_s607
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_no-pretraining_s630
|
jonatasgrosman
| 2022-07-09T17:37:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T17:36:47Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_no-pretraining_s630
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_no-pretraining_s705
|
jonatasgrosman
| 2022-07-09T17:33:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T17:33:28Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_no-pretraining_s705
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_no-pretraining_s910
|
jonatasgrosman
| 2022-07-09T17:30:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T17:30:18Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_no-pretraining_s910
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_vp-sv_s116
|
jonatasgrosman
| 2022-07-09T17:27:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T17:27:07Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-sv_s116
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_hubert_s730
|
jonatasgrosman
| 2022-07-09T16:53:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T16:53:11Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_hubert_s730
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_sv-se_hubert_s805
|
jonatasgrosman
| 2022-07-09T16:45:49Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T16:45:18Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_hubert_s805
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
yelpfeast/byt5-base-english-ocr-correction
|
yelpfeast
| 2022-07-09T16:37:42Z | 173 | 7 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:wikitext",
"arxiv:2105.13626",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-16T22:16:27Z |
---
language: en
datasets:
- wikitext
---
# ByT5 base English fine-tuned for OCR Correction
This model is a fine-tuned version of [byt5-base](https://huggingface.co/google/byt5-base) for OCR Correction. ByT5 was
introduced in [this paper](https://arxiv.org/abs/2105.13626), and the idea and code for fine-tuning the model for OCR Correction were taken from [here](https://blog.ml6.eu/ocr-correction-with-byt5-5994d1217c07).
## Model description
byt5-base-english-ocr-correction takes the byt5-base model and fine-tunes it on an OCR correction dataset. The model has been fine-tuned to take an input sentence that has been incorrectly transcribed by an OCR model and output a sentence that corrects the errors.
The model was trained by taking the [wikitext dataset](https://huggingface.co/datasets/wikitext) and adding synthetic OCR errors using [nlpaug](https://github.com/makcedward/nlpaug).
## Intended uses & limitations
You can use the model for Text-to-Text Generation to remove errors caused by an OCR model.
### How to use
```python
from transformers import T5ForConditionalGeneration
import torch
import nlpaug.augmenter.char as nac
aug = nac.OcrAug(aug_char_p =0.4, aug_word_p = 0.6)
corrected_text = "Life is like a box of chocolates"
augmented_text = aug.augment(corrected_text)
model = T5ForConditionalGeneration.from_pretrained('yelpfeast/byt5-base-english-ocr-correction')
input_ids = torch.tensor([list(augmented_text.encode("utf-8"))]) + 3 # add 3 for special tokens
labels = torch.tensor([list(corrected_text.encode("utf-8"))]) + 3 # add 3 for special tokens
# the loss measures how well the model maps the OCR-corrupted text back to the original
loss = model(input_ids, labels=labels).loss # forward pass
```
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
import nlpaug.augmenter.char as nac
aug = nac.OcrAug(aug_char_p =0.4, aug_word_p = 0.6)
corrected_text = "Life is like a box of chocolates"
augmented_text = aug.augment(corrected_text)
print(augmented_text)
model = T5ForConditionalGeneration.from_pretrained('yelpfeast/byt5-base-english-ocr-correction')
tokenizer = AutoTokenizer.from_pretrained("yelpfeast/byt5-base-english-ocr-correction")
inputs = tokenizer(augmented_text, return_tensors="pt", padding=True)
output_sequences = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
do_sample=False, # greedy decoding for deterministic output
)
print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
```
### Limitations
The model has been trained on text that was artificially corrupted to mimic OCR errors. These errors may not match those produced by every OCR system, so the model may not fully correct text from other sources.
|
jonatasgrosman/exp_w2v2t_sv-se_unispeech_s449
|
jonatasgrosman
| 2022-07-09T16:30:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T16:29:45Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_unispeech_s449
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
huangjia/xlm-roberta-base-finetuned-panx-all
|
huangjia
| 2022-07-09T16:25:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-09T16:13:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- F1: 0.8561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 418 | 0.2042 | 0.8064 |
| 0.2421 | 2.0 | 836 | 0.1773 | 0.8376 |
| 0.2421 | 3.0 | 1254 | 0.1709 | 0.8561 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.10.3
|
huangjia/xlm-roberta-base-finetuned-panx-it
|
huangjia
| 2022-07-09T16:09:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-09T16:05:43Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.7938060309698453
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2687
- F1: 0.7938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 35 | 0.4625 | 0.6674 |
| 0.7337 | 2.0 | 70 | 0.3035 | 0.7613 |
| 0.7337 | 3.0 | 105 | 0.2687 | 0.7938 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.10.3
|
huangjia/xlm-roberta-base-finetuned-panx-fr
|
huangjia
| 2022-07-09T16:05:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-09T16:00:27Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8204272363150867
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2739
- F1: 0.8204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 96 | 0.3708 | 0.7672 |
| 0.506 | 2.0 | 192 | 0.2967 | 0.8130 |
| 0.506 | 3.0 | 288 | 0.2739 | 0.8204 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.10.3
|
huangjia/xlm-roberta-base-finetuned-panx-de-fr
|
huangjia
| 2022-07-09T15:58:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-09T15:47:40Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1584
- F1: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 358 | 0.1776 | 0.8263 |
| 0.2394 | 2.0 | 716 | 0.1599 | 0.8447 |
| 0.2394 | 3.0 | 1074 | 0.1584 | 0.8537 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.10.3
|
huggingtweets/bro_b619
|
huggingtweets
| 2022-07-09T15:47:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-09T15:37:02Z |
---
language: en
thumbnail: http://www.huggingtweets.com/bro_b619/1657381637888/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475310547805425664/2vnSS9WL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Brutha B 🧀🌐</div>
<div style="text-align: center; font-size: 14px;">@bro_b619</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Brutha B 🧀🌐.
| Data | Brutha B 🧀🌐 |
| --- | --- |
| Tweets downloaded | 1922 |
| Retweets | 302 |
| Short tweets | 345 |
| Tweets kept | 1275 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lb73vwt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bro_b619's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/xm49vj8a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/xm49vj8a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bro_b619')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huangjia/xlm-roberta-base-finetuned-panx-de
|
huangjia
| 2022-07-09T15:39:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-09T15:23:57Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8550872422388397
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1333
- F1: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1573 | 0.8137 |
| 0.2142 | 2.0 | 526 | 0.1386 | 0.8466 |
| 0.2142 | 3.0 | 789 | 0.1333 | 0.8551 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.10.3
|
jonatasgrosman/exp_w2v2t_sv-se_vp-100k_s904
|
jonatasgrosman
| 2022-07-09T15:16:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T15:16:17Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-100k_s904
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
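No usage snippet is included in these cards; the following is a minimal sketch based on HuggingSound's documented usage pattern (the audio path is a placeholder):
```python
from huggingsound import SpeechRecognitionModel

# Minimal sketch: transcribe audio with the HuggingSound tool named above.
# Input files should be sampled at 16kHz, as the card notes.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_sv-se_vp-100k_s904")
transcriptions = model.transcribe(["/path/to/audio_16khz.wav"])  # placeholder path
print(transcriptions[0]["transcription"])
```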
|
jonatasgrosman/exp_w2v2t_sv-se_vp-100k_s108
|
jonatasgrosman
| 2022-07-09T15:01:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T15:01:02Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_vp-100k_s108
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Valentinho/Bobbyboi
|
Valentinho
| 2022-07-09T14:56:52Z | 0 | 0 | null |
[
"license:bsd-3-clause-clear",
"region:us"
] | null | 2022-07-09T14:56:52Z |
---
license: bsd-3-clause-clear
---
|
dingusagar/vit-base-movie-scenes-v1
|
dingusagar
| 2022-07-09T14:34:10Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-09T14:22:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-movie-scenes-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-movie-scenes-v1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
Fine-tuned on movie scene images from Batman and Harry Potter.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
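As a usage illustration (assumed, not from the card), the checkpoint can be queried through the `transformers` image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

# Minimal sketch (assumed usage): classify a movie-scene still.
classifier = pipeline("image-classification", model="dingusagar/vit-base-movie-scenes-v1")
print(classifier("scene_still.jpg"))  # placeholder image path
```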
|
jonatasgrosman/exp_w2v2t_sv-se_wav2vec2_s732
|
jonatasgrosman
| 2022-07-09T14:33:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T14:33:03Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_wav2vec2_s732
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
dastmard/stt_en_conformer_ctc_small
|
dastmard
| 2022-07-09T14:28:52Z | 1 | 0 |
nemo
|
[
"nemo",
"region:us"
] | null | 2022-07-09T14:25:05Z |
## Model Overview
<DESCRIBE IN ONE LINE THE MODEL AND ITS USE>
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [1], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("dastmard/stt_en_conformer_ctc_small")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="dastmard/stt_en_conformer_ctc_small" \
audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16 kHz mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
<ADD SOME INFORMATION ABOUT THE ARCHITECTURE>
## Training
<ADD INFORMATION ABOUT HOW THE MODEL WAS TRAINED - HOW MANY EPOCHS, AMOUNT OF COMPUTE ETC>
### Datasets
<LIST THE NAME AND SPLITS OF DATASETS USED TO TRAIN THIS MODEL (ALONG WITH LANGUAGE AND ANY ADDITIONAL INFORMATION)>
## Performance
<LIST THE SCORES OF THE MODEL - OR USE THE Hugging Face Evaluate LIBRARY TO UPLOAD METRICS>
## Limitations
<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>
E.g.:
Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.
## References
<ADD ANY REFERENCES HERE AS NEEDED>
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
"""
|
SushantGautam/LogClassification
|
SushantGautam
| 2022-07-09T14:21:33Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"canine",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-05T17:41:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LogClassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LogClassification
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
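As a usage illustration (assumed; the card does not document the label set), the checkpoint can be called through the `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Minimal sketch (assumed usage): score a raw log line with the fine-tuned
# CANINE checkpoint. The label names come from the checkpoint's config;
# the card does not list them.
classifier = pipeline("text-classification", model="SushantGautam/LogClassification")
print(classifier("ERROR Failed to connect to database after 3 retries"))
```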
|
jonatasgrosman/exp_w2v2t_sv-se_wav2vec2_s818
|
jonatasgrosman
| 2022-07-09T14:06:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T14:05:43Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- sv-SE
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_sv-se_wav2vec2_s818
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
quanxi/q-Taxi-v3
|
quanxi
| 2022-07-09T12:23:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-09T12:23:39Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helpers from the Hugging Face Deep RL
# course notebook; they are not part of gym itself.
model = load_from_hub(repo_id="quanxi/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
quanxi/q-FrozenLake-v1-4x4-noSlippery
|
quanxi
| 2022-07-09T12:10:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-09T12:09:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helpers from the Hugging Face Deep RL
# course notebook; they are not part of gym itself.
model = load_from_hub(repo_id="quanxi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
pip64/PyBebra
|
pip64
| 2022-07-09T11:15:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-09T11:15:19Z |
# PyBebra
A bebra-flavored programming language.
Created as a joke, don't take it seriously!
# Usage
**Create a file - `test.bbr`**
```py
bebra("Hello Bebromir!")
```
**Run it**
```py
python shell.py
> lopata("test.bbr")
```
# Bebra documentation
**Basics**
`python shell.py` opens the console. The run command is `lopata("test.bbr")`.
**Variables**
Variables are created with the `beb` keyword:
```py
beb a = 100
beb b = 50
beb v = a + b
beb g = v * b
bebra(v)
bebra(g)
```
Output:
```
150
7500
```
**Conditionals**
If - `bif`, else if - `belif`, else - `belse`
```py
beb a = 100
bif a == 100 thenb bebra("a = 100") belse bebra("a != 100")
```
Output:
```
a = 100
```
**Loops**
```py
lopt i = 0 to 5 thenb
bebra("hello")
bend
```
Output:
```
hello
hello
hello
hello
hello
```
**Functions**
```py
bfunc pybebra(a) -> a + "Bebra"
bebra(pybebra("This is Py"))
```
Output:
```
This is PyBebra
```
# The End
That's the bebra end of the readme file! Happy bebra-coding!
|
geninhu/article-summarization
|
geninhu
| 2022-07-09T09:51:51Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-07-09T08:32:16Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
hwalbertseo/bert-finetuned-squad
|
hwalbertseo
| 2022-07-09T08:17:50Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-09T04:22:07Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Arandine/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Arandine/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5695
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16638, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2752 | 0 |
| 0.7798 | 1 |
| 0.5695 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
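As a usage illustration (assumed, not from the card): the repository ships TensorFlow weights, so a question-answering pipeline can be requested with the TF framework explicitly; the question and context below are made up:
```python
from transformers import pipeline

# Minimal sketch (assumed usage): extractive QA with the TF checkpoint.
qa = pipeline(
    "question-answering",
    model="hwalbertseo/bert-finetuned-squad",
    framework="tf",  # the repo carries TensorFlow weights
)
print(qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))
```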
|
ankitsharma/bert-finetuned-ner
|
ankitsharma
| 2022-07-09T04:45:03Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-09T04:34:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ankitsharma/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ankitsharma/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0283
- Validation Loss: 0.0554
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1672 | 0.0635 | 0 |
| 0.0459 | 0.0552 | 1 |
| 0.0283 | 0.0554 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
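For a lower-level usage illustration (assumed, not from the card), the TensorFlow weights can be run directly without a pipeline; the sentence and the label lookup below are assumptions:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

# Minimal sketch (assumed usage): tag tokens with the TF checkpoint.
tokenizer = AutoTokenizer.from_pretrained("ankitsharma/bert-finetuned-ner")
model = TFAutoModelForTokenClassification.from_pretrained("ankitsharma/bert-finetuned-ner")

inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.math.argmax(logits, axis=-1)[0]
# id2label comes from the checkpoint config; the card does not list the tags.
print([model.config.id2label[int(i)] for i in pred_ids])
```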
|
okite97/xlm-roberta-base-finetuned-panx-all
|
okite97
| 2022-07-09T04:33:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-09T04:03:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1883
- F1: 0.8538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2967 | 1.0 | 1109 | 0.2050 | 0.8180 |
| 0.1571 | 2.0 | 2218 | 0.1880 | 0.8415 |
| 0.0983 | 3.0 | 3327 | 0.1883 | 0.8538 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sl82/swin-tiny-patch4-window7-224-finetuned-eurosat
|
sl82
| 2022-07-09T03:36:40Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-08T16:57:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9837037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0581
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2666 | 1.0 | 190 | 0.1364 | 0.9541 |
| 0.1735 | 2.0 | 380 | 0.0970 | 0.9663 |
| 0.126 | 3.0 | 570 | 0.0581 | 0.9837 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
okite97/xlm-roberta-base-finetuned-panx-de-fr
|
okite97
| 2022-07-09T03:10:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-09T02:40:20Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1196
- F1: 0.8973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2837 | 1.0 | 1073 | 0.1775 | 0.8379 |
| 0.1446 | 2.0 | 2146 | 0.1301 | 0.8767 |
| 0.0917 | 3.0 | 3219 | 0.1196 | 0.8973 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_fr_vp-it_s878
|
jonatasgrosman
| 2022-07-09T02:00:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T02:00:15Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-it_s878
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_r-wav2vec2_s459
|
jonatasgrosman
| 2022-07-09T01:57:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T01:57:05Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_r-wav2vec2_s459
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_r-wav2vec2_s456
|
jonatasgrosman
| 2022-07-09T01:50:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T01:50:12Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_r-wav2vec2_s456
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_xls-r_s250
|
jonatasgrosman
| 2022-07-09T01:43:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T01:43:16Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_xls-r_s250
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_unispeech-sat_s115
|
jonatasgrosman
| 2022-07-09T01:33:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T01:33:09Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech-sat_s115
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_unispeech-sat_s655
|
jonatasgrosman
| 2022-07-09T01:30:33Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T01:29:58Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech-sat_s655
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_vp-nl_s93
|
jonatasgrosman
| 2022-07-09T01:23:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T01:23:20Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-nl_s93
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_vp-es_s281
|
jonatasgrosman
| 2022-07-09T01:13:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T01:13:16Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-es_s281
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_vp-es_s169
|
jonatasgrosman
| 2022-07-09T01:10:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T01:10:02Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-es_s169
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_wavlm_s208
|
jonatasgrosman
| 2022-07-09T00:45:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T00:44:40Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_wavlm_s208
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_vp-sv_s877
|
jonatasgrosman
| 2022-07-09T00:08:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T00:08:16Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-sv_s877
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_vp-sv_s596
|
jonatasgrosman
| 2022-07-09T00:05:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T00:04:45Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-sv_s596
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_vp-sv_s875
|
jonatasgrosman
| 2022-07-09T00:01:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-09T00:01:22Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-sv_s875
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_unispeech_s833
|
jonatasgrosman
| 2022-07-08T23:39:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T23:38:37Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech_s833
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_xlsr-53_s800
|
jonatasgrosman
| 2022-07-08T23:28:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T23:28:02Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_xlsr-53_s800
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_vp-100k_s688
|
jonatasgrosman
| 2022-07-08T23:12:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T23:11:37Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-100k_s688
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s870
|
jonatasgrosman
| 2022-07-08T23:07:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T23:06:57Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_wav2vec2_s870
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s809
|
jonatasgrosman
| 2022-07-08T23:04:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T23:03:23Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_wav2vec2_s809
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
steven123/Check_Missing_Teeth
|
steven123
| 2022-07-08T22:59:30Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-08T22:59:18Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Check_Missing_Teeth
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# Check_Missing_Teeth
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Missing Teeth

#### Non-Missing Teeth

|
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s227
|
jonatasgrosman
| 2022-07-08T22:58:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T22:58:05Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_wav2vec2_s227
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_it_vp-it_s411
|
jonatasgrosman
| 2022-07-08T22:51:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T22:51:14Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_vp-it_s411
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_it_r-wav2vec2_s646
|
jonatasgrosman
| 2022-07-08T22:41:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T22:40:55Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_r-wav2vec2_s646
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_it_r-wav2vec2_s317
|
jonatasgrosman
| 2022-07-08T22:37:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T22:37:32Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_r-wav2vec2_s317
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_it_xls-r_s417
|
jonatasgrosman
| 2022-07-08T22:22:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T22:21:41Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_xls-r_s417
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_it_unispeech-sat_s692
|
jonatasgrosman
| 2022-07-08T21:56:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T21:56:10Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_unispeech-sat_s692
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_it_unispeech-sat_s306
|
jonatasgrosman
| 2022-07-08T21:48:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T21:47:39Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_unispeech-sat_s306
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_it_unispeech-sat_s500
|
jonatasgrosman
| 2022-07-08T21:10:22Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T21:09:56Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_unispeech-sat_s500
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
huggingtweets/redo
|
huggingtweets
| 2022-07-08T21:02:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-08T20:47:57Z |
---
language: en
thumbnail: http://www.huggingtweets.com/redo/1657314137996/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/809537557943881728/GU7lSXyY_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gregory Renard</div>
<div style="text-align: center; font-size: 14px;">@redo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gregory Renard.
| Data | Gregory Renard |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 579 |
| Short tweets | 62 |
| Tweets kept | 2602 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hp88bd4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @redo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ncfpyxs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ncfpyxs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/redo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jonatasgrosman/exp_w2v2t_it_vp-nl_s335
|
jonatasgrosman
| 2022-07-08T20:58:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T20:57:52Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_vp-nl_s335
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
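A lower-level sketch using the Wav2Vec2 classes directly, with explicit resampling to the required 16kHz via librosa (the file path is a placeholder, and this assumes the checkpoint ships the standard processor files):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "jonatasgrosman/exp_w2v2t_it_vp-nl_s335"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load and resample the audio to 16kHz (placeholder path)
speech, _ = librosa.load("/path/to/file.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```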
|
yuningm/bart-large-citesum
|
yuningm
| 2022-07-08T20:54:02Z | 19 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:yuningm/citesum",
"arxiv:2205.06207",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-07-01T17:53:34Z |
---
license: cc-by-nc-4.0
language: en
tags:
- summarization
datasets:
- yuningm/citesum
widget:
- text: "Abstract-This paper presents a control strategy that allows a group of mobile robots to position themselves to optimize the measurement of sensory information in the environment. The robots use sensed information to estimate a function indicating the relative importance of different areas in the environment. Their estimate is then used to drive the network to a desirable placement configuration using a computationally simple decentralized control law. We formulate the problem, provide a practical control solution, and present the results of numerical simulations. We then discuss experiments carried out on a swarm of mobile robots."
example_title: "Networked Robots"
- text: "Abstract. In this paper, a Bayesian method for face recognition is proposed based on Markov Random Fields (MRF) modeling. Constraints on image features as well as contextual relationships between them are explored and encoded into a cost function derived based on a statistical model of MRF. Gabor wavelet coefficients are used as the base features, and relationships between Gabor features at different pixel locations are used to provide higher order contextual constraints. The posterior probability of matching configuration is derived based on MRF modeling. Local search and discriminate analysis are used to evaluate local matches, and a contextual constraint is applied to evaluate mutual matches between local matches. The proposed MRF method provides a new perspective for modeling the face recognition problem. Experiments demonstrate promising results."
example_title: "Bayesian Face Recognition"
- text: "Abstract One of the most relevant applications of digital image forensics is to accurately identify the device used for taking a given set of images, a problem called source identification. This paper studies recent developments in the field and proposes the mixture of two techniques (Sensor Imperfections and Wavelet Transforms) to get better source identification of images generated with mobile devices. Our results show that Sensor Imperfections and Wavelet Transforms can jointly serve as good forensic features to help trace the source camera of images produced by mobile phones. Furthermore, the model proposed here can also determine with high precision both the brand and model of the device."
example_title: "Source identification for mobile devices"
---
# Bart-Large CiteSum (Sentences)
This is [facebook/bart-large](https://huggingface.co/facebook/bart-large) fine-tuned on the [CiteSum](https://huggingface.co/datasets/yuningm/citesum) dataset, where the "src" column is the input text and the "tgt" column is the target summary.
## Authors
### Yuning Mao, Ming Zhong, Jiawei Han
#### University of Illinois Urbana-Champaign
{yuningm2, mingz5, hanj}@illinois.edu
## Results
```
{
"epoch": 5.28,
"eval_gen_len": 37.0464,
"eval_loss": 2.058537483215332,
"eval_rouge1": 41.3415,
"eval_rouge2": 19.2246,
"eval_rougeL": 33.3258,
"eval_rougeLsum": 33.5075,
"eval_runtime": 697.7289,
"eval_samples": 4721,
"eval_samples_per_second": 6.766,
"eval_steps_per_second": 0.847,
"predict_gen_len": 37.0159,
"predict_loss": 2.0521159172058105,
"predict_rouge1": 41.9288,
"predict_rouge2": 19.5963,
"predict_rougeL": 33.7098,
"predict_rougeLsum": 33.9124,
"predict_runtime": 718.1231,
"predict_samples": 4921,
"predict_samples_per_second": 6.853,
"predict_steps_per_second": 0.858,
"train_loss": 1.7884394331498579,
"train_runtime": 23049.0303,
"train_samples": 83304,
"train_samples_per_second": 69.417,
"train_steps_per_second": 8.677
}
```
## Dataset Description
CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation.
CiteSum contains TLDR summaries for scientific papers from their citation texts without human annotation, making it around 30 times larger than the previous human-curated dataset SciTLDR.
## Homepage
https://github.com/morningmoni/CiteSum
## Paper
https://arxiv.org/abs/2205.06207
## Dataset on Hub
https://huggingface.co/datasets/nbroad/citesum
## How to use the model
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="yuningm/bart-large-citesum")
article = ''' We describe a convolutional neural network that learns\
feature representations for short textual posts using hashtags as a\
supervised signal. The proposed approach is trained on up to 5.5 \
billion words predicting 100,000 possible hashtags. As well as strong\
performance on the hashtag prediction task itself, we show that its \
learned representation of text (ignoring the hashtag labels) is useful\
for other tasks as well. To that end, we present results on a document\
recommendation task, where it also outperforms a number of baselines.
'''
summarizer(article)
# [{'summary_text': 'REF proposed a convolutional neural network
# that learns feature representations for short textual posts
# using hashtags as a supervised signal.'}]
```
|
jonatasgrosman/exp_w2v2t_it_vp-nl_s27
|
jonatasgrosman
| 2022-07-08T20:51:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T20:50:16Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_vp-nl_s27
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|