Dataset columns:

| Column | Type | Range / cardinality |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-01 18:27:28 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 532 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-01 18:27:19 |
| card | string | length 11 to 1.01M |
Shaier/medqa_fine_tuned_linkbert
Shaier
2022-07-12T04:48:24Z
3
0
transformers
[ "transformers", "pytorch", "bert", "multiple-choice", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-07-12T03:27:12Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: medqa_fine_tuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medqa_fine_tuned

This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.4462
- Accuracy: 0.4002

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 318  | 1.3208          | 0.3553   |
| 1.2802        | 2.0   | 636  | 1.3428          | 0.3703   |
| 1.2802        | 3.0   | 954  | 1.3780          | 0.3892   |
| 1.1466        | 4.0   | 1272 | 1.4234          | 0.3978   |
| 1.052         | 5.0   | 1590 | 1.4462          | 0.4002   |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
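The hyperparameters listed in this card map one-to-one onto `transformers.TrainingArguments`. A minimal sketch of the equivalent configuration, assuming a placeholder `output_dir` (not part of the original card):

```python
from transformers import TrainingArguments

# Sketch of the configuration described above; output_dir is a placeholder.
# Note: total_train_batch_size = 4 (per device) x 8 (accumulation steps) = 32.
args = TrainingArguments(
    output_dir="medqa_fine_tuned",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=5,
)
```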
Saraswati/q-FrozenLake-v1-4x4-noSlippery
Saraswati
2022-07-12T04:25:49Z
0
1
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-12T04:25:40Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Deep RL course notebook this checkpoint was produced with.
model = load_from_hub(repo_id="Saraswati/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
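The `load_from_hub` helper comes from the course notebook rather than a published package. A minimal sketch of what it does, assuming the checkpoint is the pickled dict used above:

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-table checkpoint from the Hub and deserialize it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```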
Evelyn18/legalectra-small-spanish-becasv3-2
Evelyn18
2022-07-12T04:24:24Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-12T04:00:10Z
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# legalectra-small-spanish-becasv3-2

This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset. It achieves the following results on the evaluation set:
- Loss: 4.7145

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 5    | 5.7994          |
| No log        | 2.0   | 10   | 5.6445          |
| No log        | 3.0   | 15   | 5.5595          |
| No log        | 4.0   | 20   | 5.4933          |
| No log        | 5.0   | 25   | 5.4248          |
| No log        | 6.0   | 30   | 5.3547          |
| No log        | 7.0   | 35   | 5.2872          |
| No log        | 8.0   | 40   | 5.2187          |
| No log        | 9.0   | 45   | 5.1585          |
| No log        | 10.0  | 50   | 5.1038          |
| No log        | 11.0  | 55   | 5.0451          |
| No log        | 12.0  | 60   | 5.0015          |
| No log        | 13.0  | 65   | 4.9638          |
| No log        | 14.0  | 70   | 4.9350          |
| No log        | 15.0  | 75   | 4.9034          |
| No log        | 16.0  | 80   | 4.8741          |
| No log        | 17.0  | 85   | 4.8496          |
| No log        | 18.0  | 90   | 4.8275          |
| No log        | 19.0  | 95   | 4.8139          |
| No log        | 20.0  | 100  | 4.7878          |
| No log        | 21.0  | 105  | 4.7672          |
| No log        | 22.0  | 110  | 4.7671          |
| No log        | 23.0  | 115  | 4.7611          |
| No log        | 24.0  | 120  | 4.7412          |
| No log        | 25.0  | 125  | 4.7307          |
| No log        | 26.0  | 130  | 4.7232          |
| No log        | 27.0  | 135  | 4.7208          |
| No log        | 28.0  | 140  | 4.7186          |
| No log        | 29.0  | 145  | 4.7158          |
| No log        | 30.0  | 150  | 4.7145          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
Evelyn18/legalectra-small-spanish-becasv3-1
Evelyn18
2022-07-12T03:54:49Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-12T03:49:49Z
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# legalectra-small-spanish-becasv3-1

This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset. It achieves the following results on the evaluation set:
- Loss: 5.5694

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 8    | 5.8980          |
| No log        | 2.0   | 16   | 5.8136          |
| No log        | 3.0   | 24   | 5.7452          |
| No log        | 4.0   | 32   | 5.6940          |
| No log        | 5.0   | 40   | 5.6554          |
| No log        | 6.0   | 48   | 5.6241          |
| No log        | 7.0   | 56   | 5.5997          |
| No log        | 8.0   | 64   | 5.5830          |
| No log        | 9.0   | 72   | 5.5730          |
| No log        | 10.0  | 80   | 5.5694          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
paola-md/recipe-distilbert-upper-Is
paola-md
2022-07-12T03:03:14Z
13
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-12T00:16:41Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-distilbert-upper-Is
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# recipe-distilbert-upper-Is

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.8565

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6309        | 1.0   | 1305  | 1.2607          |
| 1.2639        | 2.0   | 2610  | 1.1291          |
| 1.1592        | 3.0   | 3915  | 1.0605          |
| 1.0987        | 4.0   | 5220  | 1.0128          |
| 1.0569        | 5.0   | 6525  | 0.9796          |
| 1.0262        | 6.0   | 7830  | 0.9592          |
| 1.0032        | 7.0   | 9135  | 0.9352          |
| 0.9815        | 8.0   | 10440 | 0.9186          |
| 0.967         | 9.0   | 11745 | 0.9086          |
| 0.9532        | 10.0  | 13050 | 0.8973          |
| 0.9436        | 11.0  | 14355 | 0.8888          |
| 0.9318        | 12.0  | 15660 | 0.8835          |
| 0.9243        | 13.0  | 16965 | 0.8748          |
| 0.9169        | 14.0  | 18270 | 0.8673          |
| 0.9117        | 15.0  | 19575 | 0.8610          |
| 0.9066        | 16.0  | 20880 | 0.8562          |
| 0.9028        | 17.0  | 22185 | 0.8566          |
| 0.901         | 18.0  | 23490 | 0.8583          |
| 0.8988        | 19.0  | 24795 | 0.8557          |
| 0.8958        | 20.0  | 26100 | 0.8565          |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
nateraw/yolov6t
nateraw
2022-07-12T02:01:04Z
0
0
pytorch
[ "pytorch", "object-detection", "yolo", "autogenerated-modelcard", "en", "arxiv:1910.09700", "license:gpl-3.0", "region:us" ]
object-detection
2022-07-08T04:19:38Z
---
language: en
license: gpl-3.0
library_name: pytorch
tags:
- object-detection
- yolo
- autogenerated-modelcard
model_name: yolov6t
---

# Model Card for yolov6t

<!-- Provide a quick summary of what the model is/does. -->

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Examination](#model-examination)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications-optional)
9. [Citation](#citation)
10. [Glossary](#glossary-optional)
11. [More Information](#more-information-optional)
12. [Model Card Authors](#model-card-authors-optional)
13. [Model Card Contact](#model-card-contact)
14. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

<!-- Provide a longer summary of what this model is. -->

YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.

- **Developed by:** [More Information Needed]
- **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw)
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Related Models:** [yolov6s](https://hf.co/nateraw/yolov6s), [yolov6n](https://hf.co/nateraw/yolov6n)
- **Parent Model:** N/A
- **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6)

# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model is meant to be used as a general object detector.

## Downstream Use [Optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

You can fine-tune this model for your specific task.

## Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Don't be evil.

# Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

This model often classifies objects incorrectly, especially when applied to videos. It does not handle crowds very well.

## Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

## Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

### Preprocessing

[More Information Needed]

### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

# Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

## Testing Data, Factors & Metrics

### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

## Results

[More Information Needed]

# Model Examination

[More Information Needed]

# Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

# Technical Specifications [optional]

## Model Architecture and Objective

[More Information Needed]

## Compute Infrastructure

[More Information Needed]

### Hardware

[More Information Needed]

### Software

[More Information Needed]

# Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

# Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

# More Information [optional]

Please refer to the [official GitHub Repository](https://github.com/meituan/YOLOv6)

# Model Card Authors [optional]

[@nateraw](https://hf.co/nateraw)

# Model Card Contact

[@nateraw](https://hf.co/nateraw) - please leave a note in the discussions tab here

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

[More Information Needed]

</details>
ArthurBaia/xlm-roberta-base-squad-pt
ArthurBaia
2022-07-11T22:42:37Z
7
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:squad_v1_pt", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-07-11T16:59:16Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v1_pt
model-index:
- name: xlm-roberta-base-squad-pt
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-squad-pt

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad_v1_pt dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

- epoch: 3.0
- eval_exact_match: 44.45600756859035
- eval_f1: 57.37953911779836
- eval_samples: 11095

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
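For quick inference, the standard question-answering pipeline works with this checkpoint. A minimal sketch; the Portuguese question and context are illustrative, not from the card:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="ArthurBaia/xlm-roberta-base-squad-pt")
result = qa(
    question="Onde fica a Torre de Belém?",
    context="A Torre de Belém fica em Lisboa, Portugal.",
)
print(result["answer"], result["score"])
```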
ManqingLiu/pegasus-samsum
ManqingLiu
2022-07-11T22:33:51Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-11T21:16:06Z
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pegasus-samsum

This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set:
- Loss: 1.4858

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7236        | 0.54  | 500  | 1.4858          |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.10.3
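Since SAMSum is a dialogue-summarization dataset, a summarization pipeline is the natural way to try the model. A minimal sketch with an illustrative dialogue:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ManqingLiu/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue)[0]["summary_text"])
```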
mariastull/testpyramidsrnd
mariastull
2022-07-11T22:28:45Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-07-11T22:28:40Z
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: mariastull/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
tj-solergibert/distilbert-base-uncased-finetuned-emotion
tj-solergibert
2022-07-11T21:58:32Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-11T17:19:16Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9285
    - name: F1
      type: f1
      value: 0.9285646975197546
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9285
- F1: 0.9286

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235        | 1.0   | 250  | 0.3085          | 0.915    | 0.9127 |
| 0.2493        | 2.0   | 500  | 0.2158          | 0.9285   | 0.9286 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
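A minimal inference sketch using the text-classification pipeline; the input sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tj-solergibert/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how lucky I am today!"))
```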
camilag/t5-end2end-questions-generation
camilag
2022-07-11T20:52:28Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad_modified_for_t5_qg", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-11T20:12:30Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-end2end-questions-generation

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset. It achieves the following results on the evaluation set:
- Loss: 1.7927

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5425        | 0.34  | 100  | 1.9416          |
| 2.0221        | 0.68  | 200  | 1.7927          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
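A minimal inference sketch via the text2text-generation pipeline. The `generate questions:` prefix and `<sep>`-separated outputs follow the common convention for this dataset; the card itself does not spell out the expected format, so treat both as assumptions:

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="camilag/t5-end2end-questions-generation")

# "generate questions:" prefix is the usual convention for this dataset (assumption).
context = (
    "generate questions: The Amazon rainforest covers much of the "
    "Amazon basin of South America."
)
output = qg(context, max_length=64)[0]["generated_text"]
print(output.split("<sep>"))  # individual questions, if the convention holds
```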
jonatasgrosman/exp_w2v2t_pt_vp-it_s738
jonatasgrosman
2022-07-11T20:09:11Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T20:08:31Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-it_s738 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
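This card (and the near-identical `jonatasgrosman/exp_w2v2t_*` cards below) points to the HuggingSound tool, which handles model loading and the 16 kHz resampling the card asks for. A minimal sketch with placeholder audio paths:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_vp-it_s738")

# Paths are placeholders; HuggingSound resamples inputs to the 16 kHz the model expects.
transcriptions = model.transcribe(["/path/to/audio_1.mp3", "/path/to/audio_2.wav"])
print(transcriptions[0]["transcription"])
```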
jonatasgrosman/exp_w2v2t_pt_vp-it_s996
jonatasgrosman
2022-07-11T19:59:08Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:58:21Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-it_s996 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
paola-md/recipe-roberta-tis
paola-md
2022-07-11T19:45:57Z
8
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-11T16:22:05Z
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: recipe-roberta-tis
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# recipe-roberta-tis

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.8491

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3552        | 1.0   | 1012  | 1.1292          |
| 1.1811        | 2.0   | 2024  | 1.0543          |
| 1.1095        | 3.0   | 3036  | 1.0122          |
| 1.0667        | 4.0   | 4048  | 0.9756          |
| 1.0345        | 5.0   | 5060  | 0.9478          |
| 1.0112        | 6.0   | 6072  | 0.9292          |
| 0.9922        | 7.0   | 7084  | 0.9137          |
| 0.9762        | 8.0   | 8096  | 0.9056          |
| 0.9627        | 9.0   | 9108  | 0.8977          |
| 0.9507        | 10.0  | 10120 | 0.8868          |
| 0.9411        | 11.0  | 11132 | 0.8823          |
| 0.9344        | 12.0  | 12144 | 0.8745          |
| 0.9261        | 13.0  | 13156 | 0.8688          |
| 0.9189        | 14.0  | 14168 | 0.8614          |
| 0.9133        | 15.0  | 15180 | 0.8609          |
| 0.9078        | 16.0  | 16192 | 0.8581          |
| 0.906         | 17.0  | 17204 | 0.8544          |
| 0.9015        | 18.0  | 18216 | 0.8537          |
| 0.8988        | 19.0  | 19228 | 0.8494          |
| 0.8975        | 20.0  | 20240 | 0.8491          |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
jonatasgrosman/exp_w2v2t_pt_xls-r_s657
jonatasgrosman
2022-07-11T19:45:15Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:44:32Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_xls-r_s657 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
KD02/distilbert-base-uncased-finetuned-squad
KD02
2022-07-11T19:37:22Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-07-11T14:14:25Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [KD02/distilbert-base-uncased-finetuned-squad](https://huggingface.co/KD02/distilbert-base-uncased-finetuned-squad) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
Sahara/finetuning-sentiment-model-3000-samples
Sahara
2022-07-11T19:23:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-11T14:06:19Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8533333333333334
    - name: F1
      type: f1
      value: 0.8562091503267975
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3322
- Accuracy: 0.8533
- F1: 0.8562

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
jonatasgrosman/exp_w2v2t_pt_vp-nl_s6
jonatasgrosman
2022-07-11T19:17:20Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:16:53Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-nl_s6 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-nl_s833
jonatasgrosman
2022-07-11T19:13:31Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:12:53Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-nl_s833 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-es_s291
jonatasgrosman
2022-07-11T19:09:42Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:08:58Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-es_s291 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-fr_s752
jonatasgrosman
2022-07-11T18:58:10Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:57:25Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-fr_s752 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-fr_s485
jonatasgrosman
2022-07-11T18:54:15Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:53:30Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-fr_s485 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-fr_s675
jonatasgrosman
2022-07-11T18:49:06Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:48:25Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-fr_s675 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
quanxi/dqn-SpaceInvadersNoFrameskip-v4
quanxi
2022-07-11T18:32:52Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-11T18:32:11Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - metrics:
    - type: mean_reward
      value: 596.50 +/- 113.18
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga quanxi -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga quanxi
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', True),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
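Outside the RL Zoo scripts, the checkpoint can also be loaded directly with stable-baselines3. A minimal sketch, assuming the default Zoo filename for this repo (the actual filename is not confirmed by the card):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename follows the RL Zoo naming convention; adjust to the actual file in the repo.
checkpoint = load_from_hub(
    repo_id="quanxi/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
# Evaluation needs the same AtariWrapper + 4-frame stack the agent was trained with
# (see the env_wrapper and frame_stack entries in the hyperparameters above).
```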
jonatasgrosman/exp_w2v2t_pt_unispeech-ml_s808
jonatasgrosman
2022-07-11T18:31:15Z
4
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:30:46Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_unispeech-ml_s808 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_unispeech-ml_s324
jonatasgrosman
2022-07-11T18:27:29Z
3
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:26:59Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_unispeech-ml_s324 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_wavlm_s691
jonatasgrosman
2022-07-11T18:13:28Z
3
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:13:02Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_wavlm_s691 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_wavlm_s51
jonatasgrosman
2022-07-11T18:10:28Z
3
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:09:52Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_wavlm_s51 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_no-pretraining_s34
jonatasgrosman
2022-07-11T18:06:01Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:05:36Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_no-pretraining_s34 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-sv_s563
jonatasgrosman
2022-07-11T17:51:15Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:50:36Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-sv_s563 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
AdiKompella/Reinforce-Pixelcopter-PLE-v0
AdiKompella
2022-07-11T17:48:01Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-11T17:47:44Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - metrics:
    - type: mean_reward
      value: 12.70 +/- 11.50
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.

To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
jonatasgrosman/exp_w2v2t_pt_hubert_s486
jonatasgrosman
2022-07-11T17:43:15Z
3
0
transformers
[ "transformers", "pytorch", "hubert", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:42:50Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_hubert_s486 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_hubert_s301
jonatasgrosman
2022-07-11T17:40:03Z
3
0
transformers
[ "transformers", "pytorch", "hubert", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:39:41Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_hubert_s301 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_hubert_s807
jonatasgrosman
2022-07-11T17:36:35Z
3
0
transformers
[ "transformers", "pytorch", "hubert", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:36:06Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_hubert_s807 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ianspektor/reinforce-CartPole-v1
ianspektor
2022-07-11T17:36:19Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-11T16:33:35Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1
  results:
  - metrics:
    - type: mean_reward
      value: 359.42 +/- 89.49
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.

To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner
kinanmartin
2022-07-11T17:29:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:toydata", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-11T03:49:46Z
---
tags:
- generated_from_trainer
datasets:
- toydata
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large-ner-hrl-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: toydata
      type: toydata
      args: SDN
    metrics:
    - name: Precision
      type: precision
      value: 0.9132452695465905
    - name: Recall
      type: recall
      value: 0.9205854126679462
    - name: F1
      type: f1
      value: 0.9169006511739053
    - name: Accuracy
      type: accuracy
      value: 0.9784804945824268
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-large-ner-hrl-finetuned-ner

This model is a fine-tuned version of [Davlan/xlm-roberta-large-ner-hrl](https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl) on the toydata dataset. It achieves the following results on the evaluation set:
- Loss: 0.0944
- Precision: 0.9132
- Recall: 0.9206
- F1: 0.9169
- Accuracy: 0.9785

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 408  | 0.0900          | 0.8508    | 0.9303 | 0.8888 | 0.9719   |
| 0.1087        | 2.0   | 816  | 0.0827          | 0.9043    | 0.9230 | 0.9136 | 0.9783   |
| 0.0503        | 3.0   | 1224 | 0.0944          | 0.9132    | 0.9206 | 0.9169 | 0.9785   |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
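A minimal inference sketch using the token-classification pipeline; `aggregation_strategy="simple"` merges subword pieces into whole entities, and the example sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Barack Obama visited Paris last week."))
```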
jonatasgrosman/exp_w2v2t_pt_xlsr-53_s829
jonatasgrosman
2022-07-11T17:23:34Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:23:00Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_xlsr-53_s829 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_xlsr-53_s677
jonatasgrosman
2022-07-11T17:17:00Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:16:33Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_xlsr-53_s677 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_wav2vec2_s859
jonatasgrosman
2022-07-11T16:58:14Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T16:57:41Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_wav2vec2_s859 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_wav2vec2_s250
jonatasgrosman
2022-07-11T16:51:46Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T16:51:14Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_wav2vec2_s250 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_vp-it_s179
jonatasgrosman
2022-07-11T16:44:55Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T16:44:09Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-it_s179 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jorge-henao/gpt2-small-spanish-historias-conflicto-colpoetry-historias-conflicto-col
jorge-henao
2022-07-11T16:43:58Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-11T16:29:51Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: gpt2-small-spanish-historias-conflicto-colpoetry-historias-conflicto-col results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-small-spanish-historias-conflicto-colpoetry-historias-conflicto-col This model is a fine-tuned version of [jorge-henao/gpt2-small-spanish-historias-conflicto-col](https://huggingface.co/jorge-henao/gpt2-small-spanish-historias-conflicto-col) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.5017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
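Since the card leaves usage unspecified, here is a minimal, hypothetical generation sketch using the standard `transformers` pipeline (the Spanish prompt is a placeholder):

```python
# Sketch: text generation with the fine-tuned Spanish GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jorge-henao/gpt2-small-spanish-historias-conflicto-colpoetry-historias-conflicto-col",
)
result = generator("Érase una vez", max_length=50, num_return_sequences=1)  # placeholder prompt
print(result[0]["generated_text"])
```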
alefarasin/testpyramidsrnd
alefarasin
2022-07-11T16:37:44Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-07-11T16:37:35Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: alefarasin/testpyramidsrnd 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
jonatasgrosman/exp_w2v2t_es_r-wav2vec2_s809
jonatasgrosman
2022-07-11T16:26:53Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T16:26:08Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_r-wav2vec2_s809 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
paola-md/recipe-roberta-i
paola-md
2022-07-11T16:17:54Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-11T15:48:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: recipe-roberta-i results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-roberta-i This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.871 | 1.0 | 149 | 1.4670 | | 1.528 | 2.0 | 298 | 1.3426 | | 1.41 | 3.0 | 447 | 1.2636 | | 1.3332 | 4.0 | 596 | 1.2029 | | 1.2804 | 5.0 | 745 | 1.1646 | | 1.2441 | 6.0 | 894 | 1.1351 | | 1.21 | 7.0 | 1043 | 1.0898 | | 1.182 | 8.0 | 1192 | 1.0725 | | 1.1604 | 9.0 | 1341 | 1.0718 | | 1.1402 | 10.0 | 1490 | 1.0529 | | 1.1308 | 11.0 | 1639 | 1.0512 | | 1.1191 | 12.0 | 1788 | 1.0245 | | 1.0986 | 13.0 | 1937 | 1.0203 | | 1.0919 | 14.0 | 2086 | 1.0158 | | 1.084 | 15.0 | 2235 | 0.9930 | | 1.0797 | 16.0 | 2384 | 0.9855 | | 1.0697 | 17.0 | 2533 | 1.0061 | | 1.0652 | 18.0 | 2682 | 0.9725 | | 1.0658 | 19.0 | 2831 | 0.9861 | | 1.0642 | 20.0 | 2980 | 0.9919 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
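A minimal fill-mask sketch (RoBERTa-style models use the `<mask>` token; the example sentence is a placeholder, not from the training data):

```python
# Sketch: mask filling with the recipe-domain RoBERTa checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="paola-md/recipe-roberta-i")
for prediction in unmasker("Preheat the <mask> to 350 degrees."):  # placeholder sentence
    print(prediction["token_str"], round(prediction["score"], 3))
```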
jonatasgrosman/exp_w2v2t_es_xls-r_s118
jonatasgrosman
2022-07-11T16:13:12Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T16:12:22Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_xls-r_s118 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_unispeech-sat_s514
jonatasgrosman
2022-07-11T15:57:16Z
3
0
transformers
[ "transformers", "pytorch", "unispeech-sat", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T15:56:32Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_unispeech-sat_s514 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
nateraw/keras-dummy-functional-demo
nateraw
2022-07-11T15:41:53Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2022-03-02T23:29:05Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | learning_rate | 0.001 | | decay | 0.0 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
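A minimal loading sketch (assuming `huggingface_hub` and TensorFlow are installed; this only inspects the architecture, since the card gives no task details):

```python
# Sketch: load the Keras model from the Hub and inspect it.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("nateraw/keras-dummy-functional-demo")
model.summary()  # the card documents optimizer settings only, so we just inspect
```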
ericntay/clinical_bert_ft
ericntay
2022-07-11T15:30:06Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-11T10:38:42Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: clinical_bert_ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clinical_bert_ft This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2439 - F1: 0.8252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5938 | 1.0 | 95 | 0.2480 | 0.7084 | | 0.1567 | 2.0 | 190 | 0.2035 | 0.7855 | | 0.083 | 3.0 | 285 | 0.2002 | 0.8026 | | 0.0482 | 4.0 | 380 | 0.2046 | 0.8118 | | 0.0269 | 5.0 | 475 | 0.2230 | 0.8143 | | 0.0185 | 6.0 | 570 | 0.2178 | 0.8175 | | 0.0123 | 7.0 | 665 | 0.2269 | 0.8253 | | 0.0093 | 8.0 | 760 | 0.2421 | 0.8227 | | 0.0072 | 9.0 | 855 | 0.2446 | 0.8267 | | 0.006 | 10.0 | 950 | 0.2439 | 0.8252 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
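A minimal token-classification sketch (the label set depends on the unspecified fine-tuning dataset, so the entities returned here are illustrative only; the clinical sentence is a placeholder):

```python
# Sketch: token classification with the clinical BERT fine-tune.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ericntay/clinical_bert_ft",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("The patient was started on 40 mg of atorvastatin daily."))  # placeholder text
```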
ariesutiono/finetuned-test-1
ariesutiono
2022-07-11T14:57:10Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-11T13:24:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 model-index: - name: finetuned-test-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-test-1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 1.8192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8219 | 1.0 | 30 | 2.3343 | | 2.4148 | 2.0 | 60 | 2.2010 | | 2.3236 | 3.0 | 90 | 2.1442 | | 2.2231 | 4.0 | 120 | 2.1651 | | 2.2171 | 5.0 | 150 | 2.0614 | | 2.127 | 6.0 | 180 | 2.0405 | | 2.0748 | 7.0 | 210 | 2.0092 | | 2.0511 | 8.0 | 240 | 1.9798 | | 2.0097 | 9.0 | 270 | 1.8662 | | 1.9969 | 10.0 | 300 | 1.9257 | | 2.0006 | 11.0 | 330 | 1.9386 | | 1.9273 | 12.0 | 360 | 1.9357 | | 1.9177 | 13.0 | 390 | 1.8983 | | 1.9128 | 14.0 | 420 | 1.8990 | | 1.8979 | 15.0 | 450 | 1.9037 | | 1.8721 | 16.0 | 480 | 1.8440 | | 1.8998 | 17.0 | 510 | 1.8404 | | 1.8862 | 18.0 | 540 | 1.9193 | | 1.9133 | 19.0 | 570 | 1.8494 | | 1.8799 | 20.0 | 600 | 1.8192 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
jonatasgrosman/exp_w2v2t_es_vp-nl_s924
jonatasgrosman
2022-07-11T14:57:04Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T14:56:23Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-nl_s924 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_vp-es_s250
jonatasgrosman
2022-07-11T14:23:27Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T14:22:53Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-es_s250 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
huggingartists/taylor-swift
huggingartists
2022-07-11T13:52:52Z
23
3
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/taylor-swift", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/taylor-swift tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/721a6c465a666419bf286b473287c33f.446x446x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Taylor Swift</div> <a href="https://genius.com/artists/taylor-swift"> <div style="text-align: center; font-size: 14px;">@taylor-swift</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Taylor Swift. The dataset is available [here](https://huggingface.co/datasets/huggingartists/taylor-swift) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/taylor-swift") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2l84tzp2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Taylor Swift's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1hy7aa65) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1hy7aa65/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/taylor-swift') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/taylor-swift") model = AutoModelWithLMHead.from_pretrained("huggingartists/taylor-swift") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
jonatasgrosman/exp_w2v2t_es_vp-es_s859
jonatasgrosman
2022-07-11T13:12:20Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T13:11:34Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-es_s859 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
Dudul/dudul
Dudul
2022-07-11T13:09:08Z
0
0
null
[ "region:us" ]
null
2022-07-11T01:50:50Z
--- title: Cryptopunks Generator emoji: 🧠➡️🙍‍♀️ colorFrom: red colorTo: indigo sdk: gradio app_file: app.py pinned: false --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
egg22314/LaserTube
egg22314
2022-07-11T13:03:19Z
0
1
null
[ "region:us" ]
null
2022-07-11T13:01:55Z
Is watching YouTube videos too boring for you? Do you wish you could be punished for not clicking on stuff fast enough while you watch a cat play the piano? Well, LaserTube is here to solve that problem by letting you turn any YouTube video into a genuine simulation of an old-school laserdisc arcade game! Work in progress.
paola-md/recipe-roberta-upper-Is
paola-md
2022-07-11T12:57:29Z
61
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-11T08:50:33Z
--- license: mit tags: - generated_from_trainer model-index: - name: recipe-roberta-upper-Is results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-roberta-upper-Is This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7757 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2455 | 1.0 | 1228 | 1.0420 | | 1.0812 | 2.0 | 2456 | 0.9641 | | 1.018 | 3.0 | 3684 | 0.9220 | | 0.977 | 4.0 | 4912 | 0.8943 | | 0.9451 | 5.0 | 6140 | 0.8726 | | 0.9254 | 6.0 | 7368 | 0.8574 | | 0.9074 | 7.0 | 8596 | 0.8404 | | 0.8944 | 8.0 | 9824 | 0.8290 | | 0.8797 | 9.0 | 11052 | 0.8258 | | 0.869 | 10.0 | 12280 | 0.8115 | | 0.8609 | 11.0 | 13508 | 0.8085 | | 0.8522 | 12.0 | 14736 | 0.7995 | | 0.8462 | 13.0 | 15964 | 0.7958 | | 0.8414 | 14.0 | 17192 | 0.7891 | | 0.8374 | 15.0 | 18420 | 0.7856 | | 0.8327 | 16.0 | 19648 | 0.7850 | | 0.8268 | 17.0 | 20876 | 0.7784 | | 0.8256 | 18.0 | 22104 | 0.7802 | | 0.822 | 19.0 | 23332 | 0.7789 | | 0.8219 | 20.0 | 24560 | 0.7757 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
jonatasgrosman/exp_w2v2t_es_vp-fr_s980
jonatasgrosman
2022-07-11T12:51:08Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T12:50:21Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-fr_s980 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ernestumorga/ppo-seals-Humanoid-v0
ernestumorga
2022-07-11T12:36:37Z
5
0
stable-baselines3
[ "stable-baselines3", "seals/Humanoid-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-30T12:00:35Z
--- library_name: stable-baselines3 tags: - seals/Humanoid-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: -43.69 +/- 155.83 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: seals/Humanoid-v0 type: seals/Humanoid-v0 --- # **PPO** Agent playing **seals/Humanoid-v0** This is a trained model of a **PPO** agent playing **seals/Humanoid-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ppo --env seals/Humanoid-v0 -orga ernestumorga -f logs/ python enjoy.py --algo ppo --env seals/Humanoid-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo --env seals/Humanoid-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ppo --env seals/Humanoid-v0 -f logs/ -orga ernestumorga ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('clip_range', 0.2), ('ent_coef', 2.0745206045994986e-05), ('gae_lambda', 0.92), ('gamma', 0.999), ('learning_rate', 2.0309225666232827e-05), ('max_grad_norm', 0.5), ('n_envs', 1), ('n_epochs', 20), ('n_steps', 2048), ('n_timesteps', 10000000.0), ('normalize', True), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(activation_fn=nn.ReLU, net_arch=[dict(pi=[256, 256], ' 'vf=[256, 256])])'), ('vf_coef', 0.819262464558427), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
jonatasgrosman/exp_w2v2t_es_vp-fr_s281
jonatasgrosman
2022-07-11T12:32:07Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T12:31:26Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-fr_s281 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ernestumorga/sac-seals-Swimmer-v0
ernestumorga
2022-07-11T12:31:16Z
1
0
stable-baselines3
[ "stable-baselines3", "seals/Swimmer-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-11T12:30:14Z
--- library_name: stable-baselines3 tags: - seals/Swimmer-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: SAC results: - metrics: - type: mean_reward value: 27.34 +/- 1.27 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: seals/Swimmer-v0 type: seals/Swimmer-v0 --- # **SAC** Agent playing **seals/Swimmer-v0** This is a trained model of a **SAC** agent playing **seals/Swimmer-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo sac --env seals/Swimmer-v0 -orga ernestumorga -f logs/ python enjoy.py --algo sac --env seals/Swimmer-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo sac --env seals/Swimmer-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo sac --env seals/Swimmer-v0 -f logs/ -orga ernestumorga ``` ## Hyperparameters ```python OrderedDict([('batch_size', 128), ('buffer_size', 100000), ('gamma', 0.995), ('learning_rate', 0.00039981805535514633), ('learning_starts', 1000), ('n_timesteps', 1000000.0), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=[400, 300], log_std_init=-2.689958330139309)'), ('tau', 0.01), ('train_freq', 256), ('normalize', False)]) ```
ernestumorga/sac-seals-Ant-v0
ernestumorga
2022-07-11T12:29:54Z
1
0
stable-baselines3
[ "stable-baselines3", "seals/Ant-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-11T12:28:37Z
--- library_name: stable-baselines3 tags: - seals/Ant-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: SAC results: - metrics: - type: mean_reward value: 966.10 +/- 34.50 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: seals/Ant-v0 type: seals/Ant-v0 --- # **SAC** Agent playing **seals/Ant-v0** This is a trained model of a **SAC** agent playing **seals/Ant-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo sac --env seals/Ant-v0 -orga ernestumorga -f logs/ python enjoy.py --algo sac --env seals/Ant-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo sac --env seals/Ant-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo sac --env seals/Ant-v0 -f logs/ -orga ernestumorga ``` ## Hyperparameters ```python OrderedDict([('batch_size', 512), ('buffer_size', 1000000), ('gamma', 0.98), ('learning_rate', 0.0018514039303149058), ('learning_starts', 1000), ('n_timesteps', 1000000.0), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=[256, 256], log_std_init=-2.2692589009754176)'), ('tau', 0.05), ('train_freq', 64), ('normalize', False)]) ```
ernestumorga/sac-seals-HalfCheetah-v0
ernestumorga
2022-07-11T12:28:24Z
2
0
stable-baselines3
[ "stable-baselines3", "seals/HalfCheetah-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-11T12:27:28Z
--- library_name: stable-baselines3 tags: - seals/HalfCheetah-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: SAC results: - metrics: - type: mean_reward value: 1474.73 +/- 33.37 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: seals/HalfCheetah-v0 type: seals/HalfCheetah-v0 --- # **SAC** Agent playing **seals/HalfCheetah-v0** This is a trained model of a **SAC** agent playing **seals/HalfCheetah-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo sac --env seals/HalfCheetah-v0 -orga ernestumorga -f logs/ python enjoy.py --algo sac --env seals/HalfCheetah-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo sac --env seals/HalfCheetah-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo sac --env seals/HalfCheetah-v0 -f logs/ -orga ernestumorga ``` ## Hyperparameters ```python OrderedDict([('batch_size', 2048), ('buffer_size', 100000), ('gamma', 0.95), ('learning_rate', 0.000884624878315995), ('learning_starts', 10000), ('n_timesteps', 1000000.0), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=[64, 64], log_std_init=-0.6932709443503001)'), ('tau', 0.01), ('train_freq', 64), ('normalize', False)]) ```
ernestumorga/ppo-seals-Hopper-v0
ernestumorga
2022-07-11T12:27:11Z
0
0
stable-baselines3
[ "stable-baselines3", "seals/Hopper-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-30T11:59:04Z
--- library_name: stable-baselines3 tags: - seals/Hopper-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 2228.87 +/- 43.40 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: seals/Hopper-v0 type: seals/Hopper-v0 --- # **PPO** Agent playing **seals/Hopper-v0** This is a trained model of a **PPO** agent playing **seals/Hopper-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ppo --env seals/Hopper-v0 -orga ernestumorga -f logs/ python enjoy.py --algo ppo --env seals/Hopper-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo --env seals/Hopper-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ppo --env seals/Hopper-v0 -f logs/ -orga ernestumorga ``` ## Hyperparameters ```python OrderedDict([('batch_size', 512), ('clip_range', 0.1), ('ent_coef', 0.0010159833764878474), ('gae_lambda', 0.98), ('gamma', 0.995), ('learning_rate', 0.0003904770450788824), ('max_grad_norm', 0.9), ('n_envs', 1), ('n_epochs', 20), ('n_steps', 2048), ('n_timesteps', 1000000.0), ('normalize', True), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(activation_fn=nn.ReLU, net_arch=[dict(pi=[64, 64], vf=[64, ' '64])])'), ('vf_coef', 0.20315938606555833), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
ernestumorga/ppo-seals-Walker2d-v0
ernestumorga
2022-07-11T12:25:31Z
0
0
stable-baselines3
[ "stable-baselines3", "seals/Walker2d-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-30T10:53:25Z
--- library_name: stable-baselines3 tags: - seals/Walker2d-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 1429.13 +/- 411.75 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: seals/Walker2d-v0 type: seals/Walker2d-v0 --- # **PPO** Agent playing **seals/Walker2d-v0** This is a trained model of a **PPO** agent playing **seals/Walker2d-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ppo --env seals/Walker2d-v0 -orga ernestumorga -f logs/ python enjoy.py --algo ppo --env seals/Walker2d-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo --env seals/Walker2d-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ppo --env seals/Walker2d-v0 -f logs/ -orga ernestumorga ``` ## Hyperparameters ```python OrderedDict([('batch_size', 8), ('clip_range', 0.4), ('ent_coef', 0.00013057334805552262), ('gae_lambda', 0.92), ('gamma', 0.98), ('learning_rate', 3.791707778339674e-05), ('max_grad_norm', 0.6), ('n_envs', 1), ('n_epochs', 5), ('n_steps', 2048), ('n_timesteps', 1000000.0), ('normalize', True), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(activation_fn=nn.ReLU, net_arch=[dict(pi=[256, 256], ' 'vf=[256, 256])])'), ('vf_coef', 0.6167177795726859), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
jonatasgrosman/exp_w2v2t_es_vp-fr_s169
jonatasgrosman
2022-07-11T12:18:33Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T12:17:50Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-fr_s169 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_unispeech-ml_s952
jonatasgrosman
2022-07-11T12:05:40Z
3
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T12:04:48Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_unispeech-ml_s952 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_unispeech-ml_s474
jonatasgrosman
2022-07-11T11:58:24Z
3
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T11:57:35Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_unispeech-ml_s474 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_unispeech-ml_s186
jonatasgrosman
2022-07-11T11:50:12Z
3
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T11:49:25Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_unispeech-ml_s186 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
rajkumarrrk/t5-base-fine-tuned-on-cnn-dm
rajkumarrrk
2022-07-11T11:41:58Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-11T10:48:43Z
--- license: apache-2.0 --- T5-base fine-tuned on the CNN/DM summarization dataset. Training args: ``` { "learning_rate": 0.0001, "logging_steps": 5000, "lr_scheduler_type": "cosine", "num_train_epochs": 2, "per_device_train_batch_size": 16, # total batch size of 48 "save_total_limit": 1, "weight_decay": 0.1 } ``` Generation kwargs: ``` { "do_sample": true, "max_new_tokens": 100, "min_length": 50, "temperature": 0.7, "top_k": 0 } ``` Pre-processing: Prepend the prefix "Summarize: " to each input. Post-processing: None Test split metrics: ``` {"lexical/meteor": 0.30857827917561603, "lexical/rouge_rouge1": 0.41099971702474514, "lexical/rouge_rouge2": 0.17676173608661166, "lexical/rouge_rougeL": 0.2759112075051335, "lexical/rouge_rougeLsum": 0.34316108028094616, "lexical/bleu": 0.10747816852428271, "semantic/bert_score": 0.8760301497472277} ```
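A minimal sketch combining the stated pre-processing prefix with the generation kwargs above (assuming a recent `transformers` version with `max_new_tokens` support; the article text is a placeholder):

```python
# Sketch: apply the "Summarize: " prefix, then generate with the card's kwargs.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "rajkumarrrk/t5-base-fine-tuned-on-cnn-dm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Your news article text here..."  # placeholder input
inputs = tokenizer("Summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=100,
    min_length=50,
    temperature=0.7,
    top_k=0,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```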
jonatasgrosman/exp_w2v2t_es_wavlm_s26
jonatasgrosman
2022-07-11T11:37:51Z
3
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T11:37:01Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_wavlm_s26 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_wavlm_s115
jonatasgrosman
2022-07-11T11:30:30Z
3
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T11:29:51Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_wavlm_s115 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_vp-sv_s93
jonatasgrosman
2022-07-11T11:11:20Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T11:10:33Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-sv_s93 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_vp-sv_s863
jonatasgrosman
2022-07-11T11:03:20Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T11:02:29Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-sv_s863 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_xlsr-53_s103
jonatasgrosman
2022-07-11T10:40:01Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T10:39:11Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_xlsr-53_s103 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_xlsr-53_s756
jonatasgrosman
2022-07-11T10:35:54Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T10:35:16Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_xlsr-53_s756 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
wooihen/distilbert-base-uncased-finetuned-emotion
wooihen
2022-07-11T10:28:32Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-11T10:04:15Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9225 - name: F1 type: f1 value: 0.922771245052197 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2146 - Accuracy: 0.9225 - F1: 0.9228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8233 | 1.0 | 250 | 0.3068 | 0.9025 | 0.8995 | | 0.2394 | 2.0 | 500 | 0.2146 | 0.9225 | 0.9228 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
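A minimal classification sketch (the labels come from the `emotion` dataset; the input sentence is a placeholder):

```python
# Sketch: emotion classification with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wooihen/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results of this experiment!"))  # placeholder
```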
jonatasgrosman/exp_w2v2t_es_vp-100k_s957
jonatasgrosman
2022-07-11T10:23:05Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T10:22:18Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_vp-100k_s957 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_es_wav2vec2_s875
jonatasgrosman
2022-07-11T10:19:31Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T10:18:46Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_es_wav2vec2_s875 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_vp-it_s975
jonatasgrosman
2022-07-11T10:03:48Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T10:03:23Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-it_s975 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_vp-it_s817
jonatasgrosman
2022-07-11T09:59:54Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T09:59:26Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-it_s817 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_r-wav2vec2_s399
jonatasgrosman
2022-07-11T09:50:40Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T09:49:58Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_r-wav2vec2_s399 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_xls-r_s635
jonatasgrosman
2022-07-11T09:42:39Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T09:42:14Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_xls-r_s635 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_unispeech-sat_s423
jonatasgrosman
2022-07-11T09:23:21Z
3
0
transformers
[ "transformers", "pytorch", "unispeech-sat", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T09:22:56Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_unispeech-sat_s423 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_vp-es_s664
jonatasgrosman
2022-07-11T09:07:47Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T09:07:23Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-es_s664 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_vp-fr_s930
jonatasgrosman
2022-07-11T08:54:54Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T08:54:29Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-fr_s930 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_wavlm_s331
jonatasgrosman
2022-07-11T08:42:19Z
5
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T08:41:54Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_wavlm_s331 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_wavlm_s116
jonatasgrosman
2022-07-11T08:39:23Z
4
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T08:38:58Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_wavlm_s116 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_no-pretraining_s895
jonatasgrosman
2022-07-11T08:30:17Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T08:29:32Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_no-pretraining_s895 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ybelkada/japanese-dummy-tokenizer
ybelkada
2022-07-11T08:24:32Z
4
1
transformers
[ "transformers", "ja", "japanese", "tokenizer", "en", "dataset:snow_simplified_japanese_corpus", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-04-06T12:31:37Z
--- language: - en - ja license: mit datasets: - snow_simplified_japanese_corpus tags: - ja - japanese - tokenizer widget: - text: "θͺ°γŒδΈ€η•ͺγ«η€γγ‹η§γ«γ―εˆ†γ‹γ‚ŠγΎγ›γ‚“γ€‚" --- # Japanese Dummy Tokenizer A repository containing a dummy Japanese tokenizer trained on the ```snow_simplified_japanese_corpus``` dataset. The tokenizer has been trained using Hugging Face datasets in a streaming manner. ## Intended uses & limitations You can use this tokenizer to tokenize Japanese sentences. ## How to use it ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("ybelkada/japanese-dummy-tokenizer") ``` ## How to train the tokenizer Check the file ```tokenizer.py```; you can freely adapt it to other datasets. This tokenizer is based on the tokenizer from ```csebuetnlp/mT5_multilingual_XLSum```.
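The card's loading snippet stops before any tokenization call. A short continuation sketch, reusing the widget sentence from the card as input (the resulting tokens depend on the trained vocabulary, so they are not shown here):

```python
# Hedged sketch: continues the card's own loading example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ybelkada/japanese-dummy-tokenizer")

text = "θͺ°γŒδΈ€η•ͺγ«η€γγ‹η§γ«γ―εˆ†γ‹γ‚ŠγΎγ›γ‚“γ€‚"  # the widget example from the card
tokens = tokenizer.tokenize(text)   # subword strings
ids = tokenizer(text)["input_ids"]  # integer IDs for model input
print(tokens)
print(ids)
```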
jonatasgrosman/exp_w2v2t_ru_vp-sv_s658
jonatasgrosman
2022-07-11T08:21:28Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T08:20:56Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-sv_s658 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_vp-sv_s515
jonatasgrosman
2022-07-11T08:14:49Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T08:14:00Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-sv_s515 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_hubert_s732
jonatasgrosman
2022-07-11T08:10:54Z
3
0
transformers
[ "transformers", "pytorch", "hubert", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T08:10:28Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_hubert_s732 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
LeoFeng/superb_wav2vec_submit
LeoFeng
2022-07-11T08:05:02Z
0
0
null
[ "region:us" ]
null
2022-07-11T07:57:49Z
# SUPERB Submission Template Welcome to the [SUPERB Challenge](https://superbbenchmark.org/challenge-slt2022/challenge_overview)! SUPERB is a collection of benchmarking resources to evaluate the capability of a universal shared representation for speech processing. It comes with a benchmark on publicly available datasets and a challenge on a secret, unreleased hidden dataset. In the SUPERB Challenge, a challenging hidden dataset is newly recorded to evaluate the ultimate generalizability across various tasks and data. You can participate in the challenge by simply submitting your self-supervised (SSL) pretrained models (model definition & pretrained weights), and we benchmark them on the hidden dataset. This repository contains useful tools to let you easily [submit](https://superbbenchmark.org/submit) your models ***privately*** for evaluation to [the challenge hidden-set leaderboard](https://superbbenchmark.org/leaderboard?track=constrained&subset=Hidden+Dev+Set). 1. Generate a submission template 2. Validate the format/interface correctness of your model 3. Upload to the Hugging Face Hub (privately) 4. Submit the upload information to the [SUPERB website](https://superbbenchmark.org/submit) #### Note 1. We accept pre-trained models in PyTorch by default. If you wish to submit upstreams in non-PyTorch frameworks, please mail [superb.announcement@gmail.com](mailto:superb.announcement@gmail.com)! #### Note 2. If it is not feasible for you to submit the pre-trained model, please mail [superb.announcement@gmail.com](mailto:superb.announcement@gmail.com) so we can see how to help! ## Quickstart ### 1. Add model interfaces #### forward Extract features from waveforms. - **Input:** A list of waveforms sampled at 16000 Hz ```python import torch SAMPLE_RATE = 16000 BATCH_SIZE = 8 EXAMPLE_SEC = 10 wavs = [torch.randn(SAMPLE_RATE * EXAMPLE_SEC).cuda() for _ in range(BATCH_SIZE)] ``` - **Output:** A dictionary with a key "hidden_states" (for compatibility with older versions). The value is **a list** of padded sequences, all with the same shape **(batch_size, max_sequence_length_of_batch, hidden_size)**, so that weighted-sum can work. You are welcome to perform task-specific / task-independent pre-/post-processing on the upstream's raw hidden states, including upsampling and downsampling. However, all the values must come from **a single upstream model**: ```python tasks = ["hidden_states", "PR", "SID", "ER", "ASR", "ASV", "SD", "QbE", "ST", "SS", "SE", "secret"] for task in tasks: # you can do task-specific pre-/post-processing depending on the arg "upstream_feature_selection" results = upstream(wavs, upstream_feature_selection=task) hidden_states = results["hidden_states"] assert isinstance(results, dict) assert isinstance(hidden_states, list) for state in hidden_states: assert isinstance(state, torch.Tensor) assert state.dim() == 3, "(batch_size, max_sequence_length_of_batch, hidden_size)" assert state.shape == hidden_states[0].shape ``` #### get_downsample_rates Provide the downsample rate **from 16000 Hz waveforms** for each task's representation in the dict. For the standard 10 ms stride representation, the downsample rate is 160. ```python SAMPLE_RATE = 16000 MSEC_PER_SEC = 1000 downsample_rate = SAMPLE_RATE * 10 // MSEC_PER_SEC # 160 ``` The downsample rate will be used to: 1. Calculate the valid representation length of each utterance in the output padded representation. 2. Prepare the training materials according to the representation's downsample rate for frame-level tasks, e.g. SD, SE, and SS. 
- **Input:** the task key (str) - **Output:** the downsample rate (int) of the representation for that task ```python for task in tasks: assert isinstance(task, str) downsample_rate = upstream.get_downsample_rates(task) assert isinstance(downsample_rate, int) print(f"The upstream's representation for {task}" f" has a downsample rate of {downsample_rate}.") ``` ### 2. Create an account and organization on the Hugging Face Hub First, create an account on the Hugging Face Hub; you can sign up [here](https://huggingface.co/join) if you haven't already! Next, create a new organization and invite the SUPERB Hidden Set Committee to join. You will upload your model to a repository under this organization so that members inside it can access the model, which is not publicly available. * [superb-hidden-set](https://huggingface.co/superb-hidden-set) ### 3. Create a template repository on your machine The next step is to create a template repository on your local machine that contains various files and a CLI to help you validate and submit your pretrained models. The Hugging Face Hub uses [Git Large File Storage (LFS)](https://git-lfs.github.com) to manage large files, so first install it if you don't have it already. For example, on macOS you can run: ```bash brew install git-lfs git lfs install ``` Next, run the following commands to create the repository. We recommend creating a Python virtual environment for the project, e.g. with Anaconda: ```bash # Create and activate a virtual environment conda create -n superb-submit python=3.8 && conda activate superb-submit # Install the following libraries pip install cookiecutter huggingface-hub==0.0.16 # Create the template repository cookiecutter git+https://huggingface.co/superb/superb-submission ``` This will ask you to specify your Hugging Face Hub username, password, organisation, and the name of the repository: ``` hf_hub_username [<huggingface>]: hf_hub_password [<password>]: hf_hub_organisation [superb-submissions]: repo_name [<my-superb-submissions>]: ``` This will trigger the following steps: 1. Create a private dataset repository on the Hugging Face Hub under `{hf_hub_organisation}/{repo_name}` 2. Clone the repository to your local machine 3. Add various template files, commit them locally to the repository, and push them to the Hub The resulting repository should have the following structure: ``` my-superb-submission β”œβ”€β”€ LICENSE β”œβ”€β”€ README.md <- The README with submission instructions β”œβ”€β”€ cli.py <- The CLI for validating predictions etc. β”œβ”€β”€ requirements.txt <- The required packages for the submission β”œβ”€β”€ expert.py <- Your model definition └── model.pt <- Your model weights ``` ### 4. Install the dependencies The final step is to install the project's dependencies: ```bash # Navigate to the template repository cd my-superb-submission # Install dependencies python -m pip install -r requirements.txt ``` That's it! You're now all set to start pretraining your speech models - see the instructions below on how to submit them to the Hub. ## Submitting to the leaderboard To make a submission to the [leaderboard](https://superbbenchmark.org/leaderboard?subset=Hidden+Dev+Set), there are 4 main steps: 1. 
Modify `expert.py` and change `model.pt` so we can initialize an upstream model following the [challenge policy](https://superbbenchmark.org/challenge-slt2022/upstream) by: ```python upstream = UpstreamExpert(ckpt="./model.pt") ``` ***Package Dependency:*** Note that we only install the `torch` package so far by following the above steps. If your model needs more packages, you can modify `requirements.txt` to meet your needs and install them inside the current conda environment. We will install the packages you list in `requirements.txt` before initializing the upstream model. 2. Validate that the upstream model's interface meets the requirements in the [challenge policy](https://superbbenchmark.org/challenge-slt2022/upstream). If everything is correct, you should see the following message: "All submission files validated! Now you can make a submission." ``` python cli.py validate ``` 3. Push the model to the Hub! If there are no errors, you should see the following message: "Upload successful!" ``` python cli.py upload "commit message: my best model" ``` 4. [Make a submission at the SUPERB website](https://superbbenchmark.org/submit) by uniquely identifying this uploaded model with the following information, which can be shown by: ``` python cli.py info ``` - Organization Name - Repository Name - Commit Hash (full 40 characters) After you finish the above 4 steps, you will see a new entry in your [SUPERB profile page](https://superbbenchmark.org/profile) (login required) which does not have any benchmark numbers yet. Please wait for us to fine-tune it on the hidden dataset and get the benchmark results. The results will be revealed within one week. Please stay tuned!
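The steps above reference `expert.py` without showing a skeleton. A minimal sketch of the interface the validator checks, using the `UpstreamExpert` class name and `ckpt` keyword from step 1; the hidden size, the linear stand-in encoder, and the fixed 160 stride are illustrative assumptions, not the template's actual contents:

```python
# Hedged sketch of expert.py: shapes follow the interface contract quoted in the card.
import torch
import torch.nn as nn


class UpstreamExpert(nn.Module):
    def __init__(self, ckpt: str, **kwargs):
        super().__init__()
        self.hidden_size = 768  # illustrative value
        self.encoder = nn.Linear(1, self.hidden_size)  # stand-in for a real SSL encoder
        # A real submission would load pretrained weights from `ckpt` here.

    def get_downsample_rates(self, key: str) -> int:
        return 160  # 10 ms stride on 16 kHz input, per the card's example

    def forward(self, wavs, upstream_feature_selection="hidden_states"):
        # Pad the batch of 1-D waveforms to a common length: (batch, time).
        padded = nn.utils.rnn.pad_sequence(wavs, batch_first=True)
        # Downsample by the declared rate, then project to the hidden size.
        downsampled = padded[:, ::160].unsqueeze(-1)  # (batch, time', 1)
        features = self.encoder(downsampled)          # (batch, time', hidden)
        # The value must be a list of same-shaped (B, T', H) tensors.
        return {"hidden_states": [features]}
```

Running `python cli.py validate` against a file shaped like this, before swapping in a real encoder, would exercise the checks quoted in step 2.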
jonatasgrosman/exp_w2v2t_ru_unispeech_s132
jonatasgrosman
2022-07-11T07:58:18Z
5
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T07:57:53Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_unispeech_s132 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_unispeech_s42
jonatasgrosman
2022-07-11T07:55:21Z
3
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T07:54:56Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_unispeech_s42 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_xlsr-53_s201
jonatasgrosman
2022-07-11T07:49:05Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T07:48:21Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_xlsr-53_s201 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_xlsr-53_s303
jonatasgrosman
2022-07-11T07:45:29Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T07:44:48Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_xlsr-53_s303 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_vp-100k_s334
jonatasgrosman
2022-07-11T07:42:16Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T07:41:33Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-100k_s334 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_ru_vp-100k_s732
jonatasgrosman
2022-07-11T07:39:00Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T07:38:17Z
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-100k_s732 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.