| Column | Dtype | Observed range |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-11 12:33:28 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 555 distinct values |
| tags | list | 1 to 4.05k items |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-11 12:33:10 |
| card | string | length 11 to 1.01M |
modelId: christy/TEST2ppo-LunarLander-v2 | author: christy | last_modified: 2022-05-04T23:56:23Z | downloads: 1 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T23:55:41Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 164.94 +/- 74.94 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
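The stable-baselines3 cards in this dump (this one and the similar LunarLander-v2 entries below) leave the usage section as a TODO. A minimal sketch, not the author's code, for fetching and evaluating such a checkpoint with `huggingface_sb3` might look like the following; the checkpoint filename is an assumption and may differ per repository.

```python
# Sketch: load a PPO LunarLander-v2 checkpoint from the Hub and evaluate it.
# Assumption: the repo stores a zip named "TEST2ppo-LunarLander-v2.zip";
# adjust repo_id/filename to the actual files in the repository.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="christy/TEST2ppo-LunarLander-v2",
    filename="TEST2ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```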
modelId: akkasayaz/ppo-LunarLander-v2 | author: akkasayaz | last_modified: 2022-05-04T23:44:34Z | downloads: 2 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T23:43:59Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 233.78 +/- 19.45 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: reaprtripr/distillRobertaMLM-csnPython-finetuned-imdb | author: reaprtripr | last_modified: 2022-05-04T22:05:12Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: fill-mask | createdAt: 2022-05-04T18:59:21Z
tags: [ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: reaprtripr/distillRobertaMLM-csnPython-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # reaprtripr/distillRobertaMLM-csnPython-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.1703 - Validation Loss: 2.6914 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.1703 | 2.6914 | 0 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
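The card above reports only training metrics and no usage snippet. A minimal sketch for querying this TF-only checkpoint through the fill-mask pipeline is shown below; the example sentence is illustrative, and it assumes the tokenizer files were pushed alongside the Keras weights.

```python
# Sketch: run the fine-tuned masked-language model (TF weights) via the fill-mask pipeline.
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

repo_id = "reaprtripr/distillRobertaMLM-csnPython-finetuned-imdb"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForMaskedLM.from_pretrained(repo_id)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# The base model is distilbert-base-uncased, so the mask token is [MASK].
print(fill_mask(f"This movie was really {tokenizer.mask_token}."))
```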
modelId: BigSalmon/MediumInformalToFormalLincoln4 | author: BigSalmon | last_modified: 2022-05-04T21:12:03Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2022-05-04T21:02:07Z
tags: [ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln4") model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln4") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ```
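The card loads the tokenizer and model but stops short of a generation call. Continuing from that first code block, a hedged sketch of prompting the informal-to-formal format could look like this; the sampling settings are arbitrary choices, not the author's.

```python
# Sketch: generate a completion for one of the prompt formats shown above,
# reusing the `tokenizer` and `model` loaded in the card's first code block.
prompt = (
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,   # arbitrary length budget
    do_sample=True,      # sampling settings are illustrative only
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```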
modelId: elotech/ppo-LunarLander-v0 | author: elotech | last_modified: 2022-05-04T20:57:09Z | downloads: 1 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T20:29:19Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO_v2 results: - metrics: - type: mean_reward value: 275.76 +/- 26.02 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO_v2** Agent playing **LunarLander-v2** This is a trained model of a **PPO_v2** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: mmangino/ppo-LunarLander-v2 | author: mmangino | last_modified: 2022-05-04T20:24:48Z | downloads: 2 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T20:24:21Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 282.72 +/- 23.16 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: utkusaglm/ppo-LunarLander-v1 | author: utkusaglm | last_modified: 2022-05-04T20:23:28Z | downloads: 1 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T20:17:22Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 295.94 +/- 13.13 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: NessrineT/ppo-LunarLander-v2 | author: NessrineT | last_modified: 2022-05-04T19:55:27Z | downloads: 1 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T18:11:52Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 265.98 +/- 9.31 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: SaiShashank1303/ch-1-ppo-LunarLander-v2 | author: SaiShashank1303 | last_modified: 2022-05-04T19:04:07Z | downloads: 1 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T19:03:25Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo-LunarLander-v2 results: - metrics: - type: mean_reward value: 203.94 +/- 26.92 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **ppo-LunarLander-v2** Agent playing **LunarLander-v2** This is a trained model of a **ppo-LunarLander-v2** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: huggingtweets/kanyewest-usmnt | author: huggingtweets | last_modified: 2022-05-04T18:51:59Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2022-05-04T17:44:08Z
tags: [ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
--- language: en thumbnail: http://www.huggingtweets.com/kanyewest-usmnt/1651690314434/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & ye</div> <div style="text-align: center; font-size: 14px;">@kanyewest-usmnt</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from USMNT & ye. | Data | USMNT | ye | | --- | --- | --- | | Tweets downloaded | 3247 | 1858 | | Retweets | 600 | 188 | | Short tweets | 215 | 573 | | Tweets kept | 2432 | 1097 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/12os8ehp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kanyewest-usmnt's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pwtssam) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pwtssam/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/kanyewest-usmnt') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
modelId: CWhy/given-ppo-LunarLander-v2 | author: CWhy | last_modified: 2022-05-04T18:44:21Z | downloads: 1 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T18:43:43Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 198.92 +/- 36.84 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: Harri/2nd_try_ppo-LunarLander-v2 | author: Harri | last_modified: 2022-05-04T18:27:55Z | downloads: 0 | likes: 1 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T18:27:29Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 279.01 +/- 16.25 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: Harri/ppo-LunarLander-v2 | author: Harri | last_modified: 2022-05-04T17:55:28Z | downloads: 1 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T17:53:11Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 215.15 +/- 49.08 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: KoenBronstring/finetuning-sentiment-model-3000-samples | author: KoenBronstring | last_modified: 2022-05-04T17:53:58Z | downloads: 5 | likes: 0 | library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-05-02T12:08:16Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.8758169934640523 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3149 - Accuracy: 0.8733 - F1: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
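Like the other Trainer-generated cards, this one omits a usage example. A minimal sketch, assuming the default label mapping was not customised in the model config:

```python
# Sketch: score a review with the fine-tuned IMDB sentiment classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="KoenBronstring/finetuning-sentiment-model-3000-samples",
)
print(classifier("I absolutely loved this movie!"))
# Output labels may appear as LABEL_0 / LABEL_1 unless id2label was set during training.
```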
modelId: seriy21/ppo-LunarLander-v2_tst2 | author: seriy21 | last_modified: 2022-05-04T17:32:54Z | downloads: 2 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T17:32:28Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 291.14 +/- 17.01 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: huggingtweets/usmnt-zacksteffen_ | author: huggingtweets | last_modified: 2022-05-04T17:19:08Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2022-05-04T17:18:29Z
tags: [ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
--- language: en thumbnail: http://www.huggingtweets.com/usmnt-zacksteffen_/1651684743123/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509644465388105731/dErjQdWT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & Zack Steffen</div> <div style="text-align: center; font-size: 14px;">@usmnt-zacksteffen_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from USMNT & Zack Steffen. | Data | USMNT | Zack Steffen | | --- | --- | --- | | Tweets downloaded | 3250 | 3120 | | Retweets | 600 | 869 | | Short tweets | 215 | 523 | | Tweets kept | 2435 | 1728 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34uud8si/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usmnt-zacksteffen_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wiyd3kq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wiyd3kq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/usmnt-zacksteffen_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
modelId: robertou2/TEST2ppo-LunarLander-v2 | author: robertou2 | last_modified: 2022-05-04T17:05:32Z | downloads: 1 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T16:47:58Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 286.33 +/- 13.08 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2 | author: MartinoMensio | last_modified: 2022-05-04T16:28:04Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-04-15T17:06:08Z
tags: [ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `w-m-vote-nonstrict-epoch-2` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
model_name = 'w-m-vote-nonstrict-epoch-2' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.9680026173591614}, {'label': 'non-racist', 'score': 0.9936750531196594}] ``` For more details, see https://github.com/preyero/neatclass22
modelId: Guillaume63/ppo-LunarLander-v2 | author: Guillaume63 | last_modified: 2022-05-04T16:27:19Z | downloads: 3 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2022-05-04T16:26:48Z
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PP0 results: - metrics: - type: mean_reward value: 223.27 +/- 26.13 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PP0** Agent playing **LunarLander-v2** This is a trained model of a **PP0** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
modelId: MartinoMensio/racism-models-w-m-vote-strict-epoch-3 | author: MartinoMensio | last_modified: 2022-05-04T16:26:07Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-04-15T16:57:00Z
tags: [ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `w-m-vote-strict-epoch-3` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
model_name = 'w-m-vote-strict-epoch-3' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.9619585871696472}, {'label': 'non-racist', 'score': 0.9396700859069824}] ``` For more details, see https://github.com/preyero/neatclass22
modelId: MartinoMensio/racism-models-regression-w-m-vote-epoch-4 | author: MartinoMensio | last_modified: 2022-05-04T16:22:45Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-04-15T16:22:35Z
tags: [ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `regression-w-m-vote-epoch-4` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
from transformers.pipelines import TextClassificationPipeline class TextRegressionPipeline(TextClassificationPipeline): """ Class based on the TextClassificationPipeline from transformers. The difference is that instead of being based on a classifier, it is based on a regressor. You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline. """ def __init__(self, **kwargs): """ Builds a new Pipeline based on regression. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold = kwargs.pop("regression_threshold", None) super().__init__(**kwargs) def __call__(self, *args, **kwargs): """ You can also specify the regression threshold when you call the pipeline. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold_call = kwargs.pop("regression_threshold", None) result = super().__call__(*args, **kwargs) return result def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False): outputs = model_outputs["logits"][0] outputs = outputs.numpy() scores = outputs score = scores[0] regression_threshold = self.regression_threshold # override the specific threshold if it is specified in the call if self.regression_threshold_call: regression_threshold = self.regression_threshold_call if regression_threshold: return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score} else: return {"score": score} model_name = 'regression-w-m-vote-epoch-4' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] # just get the score of regression print(pipe(texts)) # [{'score': 0.8345461}, {'score': 0.48615143}] # or also specify a threshold to cut racist/non-racist print(pipe(texts, regression_threshold=0.9)) # [{'label': 'non-racist', 'score': 0.8345461}, {'label': 'non-racist', 'score': 0.48615143}] ``` For more details, see https://github.com/preyero/neatclass22
modelId: MartinoMensio/racism-models-regression-w-m-vote-epoch-2 | author: MartinoMensio | last_modified: 2022-05-04T16:20:44Z | downloads: 5 | likes: 0 | library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-04-15T16:18:45Z
tags: [ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `regression-w-m-vote-epoch-2` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
from transformers.pipelines import TextClassificationPipeline class TextRegressionPipeline(TextClassificationPipeline): """ Class based on the TextClassificationPipeline from transformers. The difference is that instead of being based on a classifier, it is based on a regressor. You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline. """ def __init__(self, **kwargs): """ Builds a new Pipeline based on regression. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold = kwargs.pop("regression_threshold", None) super().__init__(**kwargs) def __call__(self, *args, **kwargs): """ You can also specify the regression threshold when you call the pipeline. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold_call = kwargs.pop("regression_threshold", None) result = super().__call__(*args, **kwargs) return result def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False): outputs = model_outputs["logits"][0] outputs = outputs.numpy() scores = outputs score = scores[0] regression_threshold = self.regression_threshold # override the specific threshold if it is specified in the call if self.regression_threshold_call: regression_threshold = self.regression_threshold_call if regression_threshold: return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score} else: return {"score": score} model_name = 'regression-w-m-vote-epoch-2' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] # just get the score of regression print(pipe(texts)) # [{'score': 0.8367272}, {'score': 0.4402479}] # or also specify a threshold to cut racist/non-racist print(pipe(texts, regression_threshold=0.9)) # [{'label': 'non-racist', 'score': 0.8367272}, {'label': 'non-racist', 'score': 0.4402479}] ``` For more details, see https://github.com/preyero/neatclass22
modelId: huggingtweets/zacksteffen_ | author: huggingtweets | last_modified: 2022-05-04T16:16:32Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2022-05-04T16:15:50Z
tags: [ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
--- language: en thumbnail: http://www.huggingtweets.com/zacksteffen_/1651680987265/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509644465388105731/dErjQdWT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Zack Steffen</div> <div style="text-align: center; font-size: 14px;">@zacksteffen_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Zack Steffen. | Data | Zack Steffen | | --- | --- | | Tweets downloaded | 3120 | | Retweets | 869 | | Short tweets | 523 | | Tweets kept | 1728 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nz1w2dd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zacksteffen_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lqwnrcja) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lqwnrcja/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/zacksteffen_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
MartinoMensio/racism-models-m-vote-nonstrict-epoch-4
MartinoMensio
2022-05-04T16:14:06Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:50:19Z
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022). We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `m-vote-nonstrict-epoch-4` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'm-vote-nonstrict-epoch-4' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.9791656136512756}, {'label': 'non-racist', 'score': 0.996966540813446}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-m-vote-nonstrict-epoch-2
MartinoMensio
2022-05-04T16:12:34Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:46:17Z
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022). We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `m-vote-nonstrict-epoch-2` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'm-vote-nonstrict-epoch-2' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.8650100827217102}, {'label': 'non-racist', 'score': 0.9674995541572571}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-m-vote-strict-epoch-4
MartinoMensio
2022-05-04T16:10:41Z
98
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:41:38Z
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022). We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `m-vote-strict-epoch-4` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'm-vote-strict-epoch-4' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.9965864419937134}, {'label': 'racist', 'score': 0.6058831214904785}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-m-vote-strict-epoch-3
MartinoMensio
2022-05-04T16:09:42Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:35:22Z
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022). We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `m-vote-strict-epoch-3` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'm-vote-strict-epoch-3' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.9929012656211853}, {'label': 'non-racist', 'score': 0.5616322159767151}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-m-vote-strict-epoch-1
MartinoMensio
2022-05-04T16:07:46Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:29:06Z
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022). We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `m-vote-strict-epoch-1` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'm-vote-strict-epoch-1' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.6074065566062927}, {'label': 'non-racist', 'score': 0.8047575950622559}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-raw-label-epoch-2
MartinoMensio
2022-05-04T16:04:18Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:04:35Z
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022). We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `raw-label-epoch-2` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline model_name =
'raw-label-epoch-2' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.8982619643211365}, {'label': 'non-racist', 'score': 0.6703745126724243}] ``` For more details, see https://github.com/preyero/neatclass22
Phaneo/ppo-LunarLander-v2
Phaneo
2022-05-04T16:03:23Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T14:58:54Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 275.14 +/- 17.29 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
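Until the author adds their own code, here is a minimal loading-and-evaluation sketch. It is not the author's script: it assumes the checkpoint in this repository follows the usual huggingface_sb3 naming (`ppo-LunarLander-v2.zip`), so check the repo files before running.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption, verify it in the repo)
checkpoint = load_from_hub(repo_id="Phaneo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```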
huggingtweets/cpulisic_10-usmnt-zacksteffen_
huggingtweets
2022-05-04T16:00:44Z
3
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-04T16:00:36Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509644465388105731/dErjQdWT_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1511457717281607680/SuAprf1T_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & Zack Steffen & Christian Pulisic</div> <div style="text-align: center; font-size: 14px;">@cpulisic_10-usmnt-zacksteffen_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from USMNT & Zack Steffen & Christian Pulisic. | Data | USMNT | Zack Steffen | Christian Pulisic | | --- | --- | --- | --- | | Tweets downloaded | 3243 | 3120 | 1159 | | Retweets | 599 | 869 | 629 | | Short tweets | 215 | 523 | 93 | | Tweets kept | 2429 | 1728 | 437 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/395einau/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cpulisic_10-usmnt-zacksteffen_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1x9olwhx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1x9olwhx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cpulisic_10-usmnt-zacksteffen_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
LidarRL/TEST2ppo-LunarLander-v2
LidarRL
2022-05-04T15:10:24Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T14:20:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 204.65 +/- 31.76 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
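A hedged usage sketch, not the author's code: the checkpoint filename is assumed to follow the common `ppo-LunarLander-v2.zip` convention and the classic gym (<0.26) step API, so adjust to the actual repository contents.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the files listed in the repository
path = load_from_hub(repo_id="LidarRL/TEST2ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(path)

# Roll out one episode with the classic gym API
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode reward: {total_reward:.1f}")
```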
dbmdz/flair-hipe-2022-ajmc-all
dbmdz
2022-05-04T13:43:34Z
10
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "multilingual", "license:mit", "region:us" ]
token-classification
2022-04-29T07:26:42Z
--- tags: - flair - token-classification - sequence-tagger-model language: multilingual widget: - text: "In editing the Fragments , I have availed myself of Mr . R . Ellis ’ acute remarks on them in the Cambridge Journal of Philology , Vol . IV , and that I am largely indebted , as every editor must now be , to the edition of the Tragic Fragments by A . Nauck , Leipzig , 1856 ." - text: "459 . Skyros klang dem Athener etwa wie Pholegandros und Sikinos bei Solon Eleg . 1 , 4 , dem Römer Ulubrae , Butunti ." - text: "Celles d ’ Ajax et des siens occupaient l ' extrême aile gauche , vers le promontoire Rhétée , et confinaient tout à la fois au retranchement et à la mer ( // . XIT1 , 681 ; Heynce , excursns cité ) ," license: mit ---
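The card above ships only metadata, so here is a minimal, hedged usage sketch based on the standard flair sequence-tagger API; the exact label type and tag set (HIPE-2022 AJMC entities) are not documented here and are assumptions.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger directly from the Hugging Face Hub
tagger = SequenceTagger.load("dbmdz/flair-hipe-2022-ajmc-all")

# One of the widget examples from the card
sentence = Sentence("459 . Skyros klang dem Athener etwa wie Pholegandros und Sikinos bei Solon Eleg . 1 , 4 , dem Römer Ulubrae , Butunti .")
tagger.predict(sentence)

# Print the tagged sentence; the concrete label type depends on how the model was trained
print(sentence.to_tagged_string())
```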
thuannc/vi-distilled-msmarco-MiniLM-L12-cos-v5
thuannc
2022-05-04T12:52:08Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:2004.09813", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-05-04T10:10:10Z
--- license: mit pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # thuannc/vi-distilled-msmarco-MiniLM-L12-cos-v5 This is a Vietnamese [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like question answering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('thuannc/vi-distilled-msmarco-MiniLM-L12-cos-v5') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('thuannc/vi-distilled-msmarco-MiniLM-L12-cos-v5') model = AutoModel.from_pretrained('thuannc/vi-distilled-msmarco-MiniLM-L12-cos-v5') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> The thesis will be available on [https://github.com/ncthuan/uet-qa](https://github.com/ncthuan/uet-qa) with evaluation results in chapter 4.
paraphrase-multilingual-minilm: 75 recall@10, 49 MRR@10 this model: 85 recall@10, 58 MRR@10 ## Training It was distilled using English-Vietnamese parallel data with this [training script](https://github.com/ncthuan/uet-qa/blob/main/scripts/train/make_multilingual.py) that follows the work of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://www.sbert.net/examples/training/multilingual/README.html) teacher: msmarco-MiniLM-L12-cos-v5 student: paraphrase-multilingual-minilm-L12-v2 Data: PhoMT, MKQA, MLQA, XQuAD The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40148 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 2000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 1e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2000, "weight_decay": 0.005 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> ``` @inproceedings{reimers-2020-multilingual-sentence-bert, title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2020", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2004.09813", } @article{thuan2022-uetqa, title={{Extractive question answering system on regulations for University of Engineering and Technology}}, author={Nguyen, Thuan}, journal={Undergraduate Thesis, University of Engineering and Technology, Vietnam National University Hanoi}, year={2022} } ```
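A rough sketch of the teacher-student distillation described in the Training section above; it is not the exact linked `make_multilingual.py` script, and the parallel-data file path is a placeholder. The teacher/student checkpoints and hyperparameters follow the values listed in this card.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher produces the target (English) embeddings; the student learns to mimic them in both languages
teacher = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L12-cos-v5")
student = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# English-Vietnamese parallel sentences, tab-separated: "english_sentence\tvietnamese_sentence"
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data("parallel-en-vi.tsv.gz")  # placeholder path

train_loader = DataLoader(train_data, shuffle=True, batch_size=16)
train_loss = losses.MSELoss(model=student)

# Hyperparameters taken from the fit() parameters listed above
student.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=2,
    warmup_steps=2000,
    weight_decay=0.005,
    optimizer_params={"lr": 1e-5, "eps": 1e-6},
)
```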
guhuawuli/distilbert-imdb
guhuawuli
2022-05-04T10:57:23Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-04T10:35:28Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 391 | 0.1846 | 0.9288 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0a0+3fd9dcf - Datasets 2.1.0 - Tokenizers 0.12.1
jonfrank/xlm-roberta-base-finetuned-panx-de
jonfrank
2022-05-04T10:13:21Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-04T09:39:55Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8654425558524246 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1334 - F1: 0.8654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2541 | 1.0 | 525 | 0.1596 | 0.8242 | | 0.1284 | 2.0 | 1050 | 0.1360 | 0.8499 | | 0.0827 | 3.0 | 1575 | 0.1334 | 0.8654 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
Nijana/gpt-neo-1.3B-climate_change_tweets
Nijana
2022-05-04T10:12:52Z
0
0
null
[ "region:us" ]
null
2022-05-02T11:35:45Z
--- license: cc-by-3.0 --- # A fine-tuned GPT-Neo Model for Tweet Generation This model is a fine-tuned version of the 1.3B-parameter GPT-Neo model developed by EleutherAI. Since the default GPT-Neo model did not see any social media data during pre-training, we fine-tuned it on tweets related to climate-change hashtags, collected from Twitter between October and November 2021. The model received data in the format `<username> - <tweet>`. We used an 80/20 train/test split, and to differentiate distinct tweets, we added a start-of-tweet and an end-of-tweet token to the training dataset. To guide you in using this model, please consult the `gpt_neo_1.3B_twitter.ipynb` Jupyter Notebook in this repository.
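As a quick alternative to the notebook, a minimal generation sketch with the standard transformers pipeline is shown below. This is only a hedged sketch: the username in the prompt is made up, and the start-of-tweet/end-of-tweet tokens mentioned above may need the exact handling shown in the notebook.

```python
from transformers import pipeline

# Load the fine-tuned GPT-Neo model from the Hub (1.3B parameters, so this needs a few GB of memory)
generator = pipeline("text-generation", model="Nijana/gpt-neo-1.3B-climate_change_tweets")

# Prompt in the training format "<username> - <tweet>"; "climate_watcher" is a hypothetical username
prompt = "climate_watcher -"
outputs = generator(prompt, max_length=60, do_sample=True, top_p=0.95, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```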
nbhimte/tiny-bert-mnli-distilled
nbhimte
2022-05-04T07:14:17Z
26
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-17T03:40:10Z
--- tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tiny-bert-mnli-distilled results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.5818644931227712 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-bert-mnli-distilled It achieves the following results on the evaluation set: - Loss: 1.5018 - Accuracy: 0.5819 - F1 score: 0.5782 - Precision score: 0.6036 - Metric recall: 0.5819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 32 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 score | Precision score | Metric recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:-------------:| | 1.4475 | 1.0 | 614 | 1.4296 | 0.4521 | 0.4070 | 0.5621 | 0.4521 | | 1.3354 | 2.0 | 1228 | 1.4320 | 0.4805 | 0.4579 | 0.5276 | 0.4805 | | 1.2244 | 3.0 | 1842 | 1.4786 | 0.5699 | 0.5602 | 0.5865 | 0.5699 | | 1.1416 | 4.0 | 2456 | 1.5018 | 0.5819 | 0.5782 | 0.6036 | 0.5819 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.11.6
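Since MNLI inputs are premise-hypothesis pairs, a minimal inference sketch may help; it assumes the label mapping stored in the model config (check `model.config.id2label`) and uses example sentences that are not from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("nbhimte/tiny-bert-mnli-distilled")
model = AutoModelForSequenceClassification.from_pretrained("nbhimte/tiny-bert-mnli-distilled")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the pair and pick the highest-scoring class
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```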
ybkim95/lp-bert-model
ybkim95
2022-05-04T06:26:12Z
1
1
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-05-04T06:26:02Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ybkim95/lp-bert-model This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ybkim95/lp-bert-model') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ybkim95/lp-bert-model') model = AutoModel.from_pretrained('ybkim95/lp-bert-model') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ybkim95/lp-bert-model) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 46 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
snunlp/KR-ELECTRA-discriminator
snunlp
2022-05-04T06:22:51Z
150
11
transformers
[ "transformers", "pytorch", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - "ko" --- ## KoRean based ELECTRA (KR-ELECTRA) This is a release of a Korean-specific ELECTRA model with comparable or better performances developed by the Computational Linguistics Lab at Seoul National University. Our model shows remarkable performances on tasks related to informal texts such as review documents, while still showing comparable results on other kinds of tasks. ### Released Model We pre-trained our KR-ELECTRA model following a base-scale model of [ELECTRA](https://github.com/google-research/electra). We trained the model based on Tensorflow-v1 using a v3-8 TPU of Google Cloud Platform. #### Model Details We followed the training parameters of the base-scale model of [ELECTRA](https://github.com/google-research/electra). ##### Hyperparameters | model | # of layers | embedding size | hidden size | # of heads | | ------: | ----------: | -------------: | ----------: | ---------: | | Discriminator | 12 | 768 | 768 | 12 | | Generator | 12 | 768 | 256 | 4 | ##### Pretraining | batch size | train steps | learning rates | max sequence length | generator size | | ---------: | ----------: | -------------: | ------------------: | -------------: | | 256 | 700000 | 2e-4 | 128 | 0.33333 | #### Training Dataset 34GB Korean texts including Wikipedia documents, news articles, legal texts, news comments, product reviews, and so on. These texts are balanced, consisting of the same ratios of written and spoken data. #### Vocabulary vocab size 30,000 We used morpheme-based unit tokens for our vocabulary based on the [Mecab-Ko](https://bitbucket.org/eunjeon/mecab-ko-dic/src/master/) morpheme analyzer. #### Download Link * Tensorflow-v1 model ([download](https://drive.google.com/file/d/1L_yKEDaXM_yDLwHm5QrXAncQZiMN3BBU/view?usp=sharing)) * PyTorch models on HuggingFace ```python from transformers import ElectraModel, ElectraTokenizer model = ElectraModel.from_pretrained("snunlp/KR-ELECTRA-discriminator") tokenizer = ElectraTokenizer.from_pretrained("snunlp/KR-ELECTRA-discriminator") ``` ### Finetuning We used and slightly edited the finetuning codes from [KoELECTRA](https://github.com/monologg/KoELECTRA), with additionally adjusted hyperparameters. You can download the codes and config files that we used for our model from our [github](https://github.com/snunlp/KR-ELECTRA). 
#### Experimental Results | | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :-------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | KoBERT | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 | | XLM-Roberta-Base | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 | | HanBERT | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 | | KoELECTRA-Base | 90.33 | 87.18 | 81.70 | 80.64 | 82.00 | 93.54 | 60.86 / 89.28 | 66.09 | | KoELECTRA-Base-v2 | 89.56 | 87.16 | 80.70 | 80.72 | 82.30 | 94.85 | 84.01 / 92.40 | 67.45 | | KoELECTRA-Base-v3 | 90.63 | **88.11** | **84.45** | 82.24 | **85.53** | 95.25 | 84.83 / **93.45** | 67.61 | | **KR-ELECTRA (ours)** | **91.168** | 87.90 | 82.05 | **82.51** | 85.41 | **95.51** | **84.93** / 93.04 | **74.50** | The baseline results are brought from [KoELECTRA](https://github.com/monologg/KoELECTRA)'s. ### Citation ```bibtex @misc{kr-electra, author = {Lee, Sangah and Hyopil Shin}, title = {KR-ELECTRA: a KoRean-based ELECTRA model}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/snunlp/KR-ELECTRA}} } ```
LiYouYou/bert_finetuning_cn
LiYouYou
2022-05-04T05:36:19Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-04T05:21:07Z
--- language: - en tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert_finetuning_cn results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.8314220183486238 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_finetuning_cn This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.5440 - Accuracy: 0.8314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
abhi1nandy2/EManuals_RoBERTa
abhi1nandy2
2022-05-04T04:57:53Z
20
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "EManuals", "customer support", "QA", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - English tags: - EManuals - customer support - QA - roberta --- Refer to https://aclanthology.org/2021.findings-emnlp.392/ for the paper and https://sites.google.com/view/emanualqa/home for the project website ## Citation Please cite the work if you would like to use it. ``` @inproceedings{nandy-etal-2021-question-answering, title = "Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based {QA} Framework", author = "Nandy, Abhilash and Sharma, Soumya and Maddhashiya, Shubham and Sachdeva, Kapil and Goyal, Pawan and Ganguly, NIloy", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.392", doi = "10.18653/v1/2021.findings-emnlp.392", pages = "4600--4609", abstract = "Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper, we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from Community Question Answering Forum pertaining to E-manuals etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40{\%} in ROUGE-L F1 scores over most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.", } ```
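For quick experimentation, the pretrained encoder can be loaded with the standard transformers API. This is only a minimal feature-extraction sketch with a made-up example sentence; the full EMQAP question-answering pipeline from the paper lives in the linked repository.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("abhi1nandy2/EManuals_RoBERTa")
model = AutoModel.from_pretrained("abhi1nandy2/EManuals_RoBERTa")

# Hypothetical E-manual style sentence
text = "To reset the device, press and hold the power button for ten seconds."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last hidden states: one contextual vector per token
print(outputs.last_hidden_state.shape)
```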
clevo570/Nissan_Project
clevo570
2022-05-03T21:54:07Z
0
0
null
[ "region:us" ]
null
2022-04-26T04:47:11Z
# Nissan Project --- license: mit --- ## Overview This model is based on the [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) model and the [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) model. Bart-large-mnli is a zero-shot classification model, so no additional training is needed: we simply pass in the comments and the features we want to classify. Roberta-base-squad2 is a question-answering model, which helps us filter which comments mention the feature. ## Text-image matching ### Model Input ```python from transformers import pipeline QA_input = { 'question': 'Does it mention dependable?', 'context': input("Enter your sentence:") } ``` ### Model Process ```python from transformers import pipeline model_name = "deepset/roberta-base-squad2" # Run extractive QA to check whether the comment mentions the feature nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) res = nlp(QA_input) if res['score'] > 0.1: sentence = QA_input['context'] classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli", device=0) sequence_to_classify = sentence candidate_labels = ['dependable', 'not dependable'] res_2 = classifier(sequence_to_classify, candidate_labels, multi_label=False) score = res_2.get('scores')[0]*2-1 else: score = 0 print(score) ``` ## Result If the score is zero, the comment does not mention the feature. Otherwise, the score (the zero-shot probability rescaled from [0, 1] to [-1, 1]) indicates how strongly the comment expresses the selected feature. ### Demo code (Python Notebook) https://github.com/vanderbilt-data-science/nissan/blob/main/30-ModelFilter/question-answering.ipynb https://github.com/vanderbilt-data-science/nissan/blob/main/31-ModelWalkthrough/label_after_filtering.ipynb
espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3
espnet
2022-05-03T21:48:45Z
1
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:chime6", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-03T20:52:40Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - chime6 license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3` This model was trained by simpleoier using chime6 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout b757b89d45d5574cebf44e225cbe32e3e9e4f522 pip install -e . cd egs2/chime6/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3 ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Tue May 3 16:47:10 EDT 2022` - python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]` - espnet version: `espnet 202204` - pytorch version: `pytorch 1.10.1` - Git hash: `b757b89d45d5574cebf44e225cbe32e3e9e4f522` - Commit date: `Mon May 2 09:21:08 2022 -0400` ## asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|58881|66.5|21.3|12.2|8.8|42.3|77.4| |decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|58881|68.6|20.7|10.6|8.4|39.8|77.5| |decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|58881|67.5|20.3|12.2|8.0|40.5|76.5| |decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|58881|67.7|21.4|10.9|8.6|40.9|77.9| |decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|58881|66.6|20.9|12.5|8.2|41.6|77.8| |decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0| |decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|69.4|20.2|10.4|8.6|39.1|75.8| |decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|65.7|20.2|14.1|7.5|41.8|77.8| |decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|58881|65.7|19.0|15.3|6.2|40.6|78.8| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|280767|78.1|7.7|14.1|9.1|31.0|77.9| |decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|280767|80.0|7.6|12.5|8.7|28.8|78.1| |decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|280767|78.6|7.3|14.1|8.1|29.5|77.5| |decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|280767|79.5|7.7|12.8|9.1|29.6|78.8| |decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|280767|77.9|7.6|14.5|8.3|30.3|78.6| |decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0| |decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|80.6|7.4|12.0|8.9|28.3|76.6| |decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|76.5|7.4|16.1|7.7|31.2|78.5| |decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|280767|77.0|7.6|15.4|7.2|30.2|79.8| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|92680|65.8|18.8|15.4|8.7|42.9|78.0| |decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|92680|67.9|18.1|13.9|8.2|40.3|78.2| 
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|92680|66.9|17.8|15.2|8.0|41.1|77.7| |decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|92680|67.2|18.5|14.3|8.2|40.9|78.9| |decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|92680|66.1|18.2|15.7|7.8|41.7|78.6| |decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0| |decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|68.9|17.7|13.4|8.2|39.3|76.6| |decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|66.1|19.1|14.8|10.2|44.1|78.6| |decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|92680|66.0|19.9|14.1|9.5|43.6|79.8| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp ngpu: 0 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: null dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 8 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 48 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe1000_sp/train/speech_shape - exp/asr_stats_raw_en_bpe1000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe1000_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe1000_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_worn_simu_u400k_cleaned_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_worn_simu_u400k_cleaned_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_gss_multiarray/wav.scp - speech - kaldi_ark - - dump/raw/dev_gss_multiarray/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 scheduler: warmuplr scheduler_conf: warmup_steps: 20000 token_list: - <blank> - <unk> - '[inaudible]' - '[laughs]' - '[noise]' - ▁ - s - '''' - ▁i - ▁it - t - ▁you - ▁the - ▁yeah - ▁a - ▁like - ▁that - ▁and - ▁to - m - ▁oh - ▁so - '-' - e - re - a - ▁just - ▁no - d - ▁we - n - ▁in - ing - i - ▁of - ▁do - ▁is - ▁have - ▁what - ▁was - ▁this - ▁can 
- o - ▁one - r - ▁but - er - y - ▁they - ed - ▁uh - ▁for - ▁okay - ▁there - ▁be - ▁he - ▁don - g - ll - ▁right - p - ▁not - u - ▁on - c - ▁then - ▁know - ▁my - ▁or - ▁get - ▁are - ▁all - ▁um - ▁me - ▁if - ▁go - ▁good - ▁with - ▁really - b - ▁gonna - ▁think - ▁cuz - in - ▁your - k - ve - le - w - an - ▁she - l - ▁well - en - f - ▁up - al - ▁two - h - ar - ▁how - ▁mhm - v - ▁here - ly - ▁put - ▁out - ▁would - ▁at - ▁need - ▁did - ▁f - ▁want - ▁mm - ▁more - ch - ri - ▁now - or - ▁when - ▁k - ▁p - ▁see - ▁got - ▁too - ▁thing - ▁time - 'on' - ▁actually - ▁where - ne - ▁guys - ▁some - ▁had - ▁why - ic - ▁them - ▁st - ro - ▁make - ur - ▁three - ▁b - ▁mean - ▁wanna - ▁should - at - ▁from - th - ▁didn - ▁about - ▁yes - ▁because - ▁yep - ▁people - ▁co - ▁could - ▁were - ▁take - ▁has - ▁something - ce - ▁w - ▁c - ▁sure - ▁who - ▁other - ▁sh - ▁say - ▁an - ▁her - ▁g - ▁work - il - es - ▁little - el - ▁much - ▁eat - ▁still - ▁wait - ▁ma - ▁four - ▁de - ▁only - ▁down - ▁though - ▁way - ▁lot - ▁use - ▁over - ▁let - ▁pretty - ▁these - ▁bo - ▁any - ▁off - ▁ba - ▁di - ▁d - ▁back - ▁sorry - ▁those - ▁very - ▁bit - ▁even - li - ▁stuff - ke - ate - z - ▁probably - ▁nice - ▁turn - ▁doesn - ▁first - ▁does - ▁hmm - ▁look - ▁going - ▁play - ▁ho - pe - ▁maybe - ▁come - ▁fine - ▁cut - ▁man - ▁bu - ▁ca - ▁mo - ▁th - lo - ▁never - ry - ▁po - ▁h - ▁will - us - x - ge - ▁five - ▁start - ▁him - ▁long - ▁give - ▁se - ting - ▁sp - ▁ra - ▁done - ▁con - ▁big - ▁his - ▁y - ▁which - ▁been - ▁dunno - est - ion - ▁fa - ▁than - me - ▁our - ▁also - ▁six - ▁kinda - co - ▁cool - ty - ▁game - ▁thought - ▁fi - ▁after - ▁day - ▁doing - ment - ▁said - ▁whatever - ap - ▁place - ▁anything - ▁j - ▁guess - em - ▁always - ▁things - ▁card - ▁li - ▁thank - ▁last - ▁before - ▁many - ▁watch - ▁pa - ▁year - ▁ah - ▁hot - ▁into - ▁ten - ▁keep - ▁bad - tion - ▁us - ▁cr - ▁part - ▁cook - ▁o - ▁cards - ▁everything - ▁la - ▁ha - ▁by - ▁wow - ▁their - ies - ▁hey - ▁same - ▁went - ▁pick - ▁might - ▁sc - ▁ex - ie - ▁wood - ight - ▁another - ▁better - ▁try - ard - ▁seven - ▁guy - ▁point - up - op - ▁twenty - ▁hand - ▁wh - ▁food - ▁tra - ation - ▁buy - ▁kind - ist - ▁whole - ive - is - ▁half - able - ▁pro - ▁win - ▁different - ▁cl - age - ▁already - ▁gotta - ack - ▁ti - ▁lo - ▁every - ▁super - ▁again - ▁new - ▁remember - ers - ▁dude - um - ▁feel - ▁roll - ▁cheese - ▁na - ▁sit - ▁sa - way - ▁hard - ▁enough - 'no' - ▁eight - ity - ▁friend - ▁un - ul - ▁love - ▁salt - ▁mi - ▁steak - ▁nine - ▁else - ▁looks - ▁pu - ▁fl - ▁build - ▁pre - ▁end - ▁ta - ▁salad - ▁high - ▁find - ▁water - ▁usually - ▁small - ▁around - ▁butter - ▁car - ▁made - ▁wash - ▁move - ▁plate - ▁true - ▁pan - ain - cu - ▁nope - ▁ooh - ▁sauce - ▁help - ▁wa - ▁left - ▁person - uck - ▁top - ▁side - ▁cha - ▁god - ▁leave - ▁goes - ▁weird - ▁each - ▁r - ▁basically - ▁chicken - ted - ▁oil - ▁trying - ▁fun - ▁close - ▁taste - ▁old - ▁show - ble - ▁next - ▁name - ▁used - ▁mine - ous - ▁great - ▁pot - ally - ▁burn - ▁huh - ▁minutes - ▁once - ▁phone - ▁bowl - tic - ▁tell - ound - ▁ask - ▁mu - ▁thirty - ▁someone - ▁piece - ▁saying - ▁vi - ish - ▁ja - ▁comp - ▁called - ▁through - ▁gr - ize - ▁everyone - ▁funny - ▁getting - ▁won - ▁bl - ▁away - ▁pi - ▁chi - ▁totally - ▁red - ▁word - ▁hundred - ▁open - ▁dollar - ▁stone - ▁yet - ade - ▁du - ▁mmm - ▁sound - ▁both - ▁mar - ant - ▁potatoes - ▁garlic - fi - ▁hear - ▁pass - ▁saw - ▁kill - ▁second - ▁girl - ▁shit - ▁throw - ▁bought - ▁please - ▁che - ▁da - ▁hit - ▁tea - ▁hold - ▁shoot - ▁most - ▁clean - ▁wanted - ▁pepper - ▁happen - ▁aw - ▁home - ▁drink - ance - ▁yo 
- ▁sheep - ▁while - ▁ro - ▁house - ▁call - ▁meat - ▁face - ▁fuck - ▁talking - ▁green - ries - side - ▁set - ▁exactly - huh - ▁hour - ▁ready - ▁played - ▁finish - ▁add - ▁susie - q - ▁stop - ▁almost - ▁bring - ▁rice - ▁ear - ▁sweet - ▁hi - ▁pizza - ake - ▁wi - ▁gra - ▁free - ▁night - ▁pay - ▁rick - ▁full - ▁wheat - ▁count - ▁white - ful - ▁light - ▁plan - ▁supposed - ▁either - ▁bacon - ▁sim - ▁sense - ▁blue - ▁team - ▁interesting - ▁care - ▁room - nut - ward - ▁real - ▁week - ▁heard - ▁told - ▁mind - ▁table - ▁head - ash - ▁looking - ▁ever - ▁check - ▁together - ▁ju - ▁app - ▁grab - ▁brown - ▁eh - book - ▁stick - ▁later - ▁pea - ▁talk - ▁awesome - ▁cream - ling - ▁fifty - ▁color - ▁qu - ▁round - ▁nothing - ▁power - ▁deal - ▁matter - ▁player - ▁draw - ▁having - ▁kid - ▁fish - ▁damn - ▁own - ▁crazy - ▁dad - ▁took - ▁perfect - ▁idea - ▁couple - ▁live - ▁job - ▁smell - ▁number - ▁reason - ▁best - ▁forty - ▁making - ▁dinner - ▁change - ▁playing - ▁sometimes - ▁fridge - ▁miss - j - ▁woah - ▁chancey - ▁bucks - ▁brick - ▁rec - ▁run - ▁far - ball - ▁bread - ▁fast - ▁knife - ▁black - ▁break - ▁mix - ▁today - ▁cheap - ▁mike - ▁expensive - out - ▁normal - ▁under - ▁using - ▁double - ▁gold - ▁life - ▁oven - ▁less - ▁space - ▁wine - ence - land - ▁sea - ▁corn - ▁cooking - ▁stay - ▁line - ▁may - ▁bar - ▁block - ▁late - ▁yourself - ▁quite - ▁apple - ▁extra - ▁wedding - ▁happened - ▁kitchen - ▁coming - ▁zero - ▁definitely - ▁connect - ▁read - ▁crab - ▁easier - ▁mkay - ▁egg - ▁came - ▁money - ▁anyone - ▁save - ▁problem - ▁club - ▁tried - ▁wrong - ▁spot - ▁low - ▁amazing - ▁milk - ▁jeff - ▁flip - ▁text - ▁bottle - jo - ▁without - ▁parents - ▁anymore - ▁course - ship - ▁month - ▁chinese - ▁must - ▁movie - ▁wonder - ▁bunch - ▁family - ▁season - ▁quick - ▁past - ▁paul - ▁rid - ▁tennis - town - ▁cold - ▁serious - ▁drive - ▁boil - ▁screw - ▁least - ▁everybody - ▁sort - ▁thomas - ▁rest - ▁suck - ▁road - ▁fair - ▁forgot - ▁order - ▁middle - ▁babe - ▁bang - ▁dress - ▁sleep - ▁question - ▁until - ▁sheriff - ▁chop - ▁restaurant - ▁outside - ▁learn - ▁stand - ▁walk - ▁attack - ▁trade - ▁phil - ▁few - ▁strong - ▁school - ▁world - ▁company - ▁easy - ▁hockey - ▁somebody - ▁short - ▁figure - ▁spice - ▁apparently - ▁since - ▁serve - ▁huge - ▁saboteur - ▁fifteen - ▁myself - ▁such - ▁port - ▁literally - ▁lose - ▁crap - ught - ▁gosh - ▁unless - ▁joke - ▁store - ▁bigger - ▁spell - ▁ago - ▁hang - ▁depend - ▁ginger - ▁slow - ▁medium - ▁record - acti - ▁kenny - ▁picture - old - ▁thousand - ▁cover - ▁tree - ▁obvious - ▁glass - ▁taking - ▁letter - ▁eleven - ▁skin - ▁market - ▁anybody - ▁ahead - ▁morning - ▁brand - ▁paper - ▁lemon - ▁onions - ▁juice - ▁jimmy - ▁living - ▁front - ▁bottom - ▁dark - ▁oops - ▁arjan - ▁shot - ▁rule - ▁hun - ▁flavor - ▁speak - ▁gun - ▁potato - ▁worry - ▁twelve - ▁sandwich - ▁plus - ▁believe - ▁knew - ▁realize - ▁sugar - ▁happy - ▁sister - ▁entire - ▁master - ▁eye - ▁touch - ▁wenny - ▁drop - ▁price - ▁slice - ▁sword - ▁spicy - ▁listen - ▁outlaw - que - ▁percent - ▁yesterday - ▁mushroom - ▁worth - ▁proper - ▁story - ▁megan - ▁character - ▁hair - ▁straight - ▁discard - ▁spoon - ▁understand - ▁computer - ▁type - ▁nikki - ▁tomorrow - ▁trump - ▁third - ▁bennet - ▁nobody - ▁somewhere - ▁amount - ▁split - ▁accent - ▁group - ▁trip - ▁lunch - ▁racket - ▁level - ▁difference - ▁orange - ▁gave - ▁dessert - ▁single - ▁chocolate - ▁junette - ▁camera - ▁regular - ▁video - ▁gross - ▁notice - ▁actual - ▁between - ▁surprise - ▁smart - ▁east - ▁craft - ▁rock - ▁certain - ▁rather - ▁lobster - ▁photo - ▁favorite - ▁behind - 
▁across - ▁steal - ▁spend - ▁weekend - ▁special - ▁sign - ▁wrap - ▁except - ▁john - ▁conversation - ▁asian - ▁grand - ▁online - ▁explain - ▁dishes - ▁magic - ▁decide - ▁fancy - ▁random - ▁tunnel - ▁switch - ▁transcribe - ▁english - ▁giant - ▁kick - ▁claire - ▁laugh - ▁yellow - ▁delicious - ▁freeze - ▁drunk - ▁general - ▁gimme - ▁damage - ▁breakfast - ▁roast - ▁josh - ▁choose - ▁email - ▁direct - ▁tomatoes - ▁fruit - ▁apart - ▁chopstick - ▁vancouver - ▁kept - tract - ▁chunk - ▁girlfriend - ▁shuffle - ▁terrible - ▁diamond - ▁sausage - ▁sweat - ▁iphone - ▁pineapple - ▁summer - ▁french - ▁fresh - ▁heavy - ▁million - ▁instead - ▁ridiculous - ▁tough - ▁friday - ▁whenever - ▁coffee - ▁hilarious - ▁worried - ▁especially - ▁shrimp - ▁avocado - '&' - ä - '#' - ǎ - î - ü - ǐ - ñ - â - ç - ']' - é - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: wavlm_large download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 100 num_freq_mask: 4 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false preencoder: linear preencoder_conf: input_size: 1024 output_size: 128 encoder: transformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d2 normalize_before: true postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.0 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 required: - output_dir - token_list version: '202204' distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
theojolliffe/bart-large-cnn-finetuned-roundup-32
theojolliffe
2022-05-03T21:24:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T19:23:27Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-32 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2324 - Rouge1: 46.462 - Rouge2: 25.9506 - Rougel: 29.4584 - Rougelsum: 44.1863 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 132 | 1.3139 | 48.8247 | 29.2173 | 31.7628 | 45.8992 | 142.0 | | No log | 2.0 | 264 | 1.2287 | 47.9398 | 29.4061 | 30.9133 | 44.9142 | 140.9 | | No log | 3.0 | 396 | 1.2676 | 49.2743 | 30.4469 | 32.8893 | 46.6208 | 142.0 | | 0.9578 | 4.0 | 528 | 1.3218 | 47.315 | 26.7303 | 30.5007 | 44.7654 | 142.0 | | 0.9578 | 5.0 | 660 | 1.3173 | 47.1476 | 25.9408 | 29.4257 | 44.4956 | 142.0 | | 0.9578 | 6.0 | 792 | 1.4283 | 47.5836 | 27.1572 | 29.8553 | 44.8858 | 142.0 | | 0.9578 | 7.0 | 924 | 1.5005 | 46.6839 | 26.2214 | 30.1895 | 43.8753 | 140.75 | | 0.3306 | 8.0 | 1056 | 1.5316 | 47.7611 | 27.1105 | 30.8142 | 44.7598 | 142.0 | | 0.3306 | 9.0 | 1188 | 1.6295 | 48.4416 | 27.6912 | 30.3409 | 45.317 | 142.0 | | 0.3306 | 10.0 | 1320 | 1.6564 | 46.5751 | 27.2306 | 29.7265 | 43.7327 | 142.0 | | 0.3306 | 11.0 | 1452 | 1.7471 | 47.9684 | 27.5739 | 30.7018 | 44.6852 | 141.75 | | 0.145 | 12.0 | 1584 | 1.7700 | 47.9274 | 28.5129 | 31.129 | 45.1009 | 142.0 | | 0.145 | 13.0 | 1716 | 1.8391 | 49.8091 | 30.1597 | 33.6004 | 47.2007 | 141.95 | | 0.145 | 14.0 | 1848 | 1.9212 | 45.2195 | 25.033 | 27.4181 | 42.6161 | 142.0 | | 0.145 | 15.0 | 1980 | 1.9267 | 48.4959 | 28.1 | 31.2796 | 46.2758 | 142.0 | | 0.0723 | 16.0 | 2112 | 1.9130 | 47.0765 | 27.4929 | 30.6862 | 44.1458 | 142.0 | | 0.0723 | 17.0 | 2244 | 1.9514 | 48.5354 | 28.4909 | 31.8966 | 45.7116 | 142.0 | | 0.0723 | 18.0 | 2376 | 2.0064 | 47.9339 | 28.6862 | 32.4472 | 45.3704 | 142.0 | | 0.042 | 19.0 | 2508 | 2.0210 | 48.3169 | 28.1579 | 30.2681 | 45.3831 | 141.3 | | 0.042 | 20.0 | 2640 | 2.0377 | 46.8156 | 26.0122 | 28.817 | 43.9383 | 142.0 | | 0.042 | 21.0 | 2772 | 2.0587 | 46.3813 | 27.3555 | 29.875 | 43.6605 | 142.0 | | 0.042 | 22.0 | 2904 | 2.0695 | 45.6728 | 26.0639 | 29.5653 | 42.3772 | 142.0 | | 0.025 | 23.0 | 3036 | 2.1617 | 46.7283 | 26.2082 | 28.52 | 43.3304 | 142.0 | | 0.025 | 24.0 | 3168 | 2.1375 | 48.1347 | 28.3444 | 31.7509 | 45.4907 | 142.0 | | 0.025 | 25.0 | 3300 | 2.1911 | 47.3358 | 27.1479 | 29.4923 | 44.0087 | 142.0 | | 0.025 | 26.0 | 3432 | 2.1806 | 47.2218 | 26.8421 | 30.03 | 44.2417 | 142.0 | | 0.0153 | 27.0 | 3564 | 2.1890 | 46.3745 | 27.0095 | 29.7274 | 43.3372 | 142.0 | | 0.0153 
| 28.0 | 3696 | 2.2235 | 50.1274 | 30.8817 | 32.8766 | 46.7486 | 141.5 | | 0.0153 | 29.0 | 3828 | 2.2236 | 50.1785 | 30.8079 | 32.8886 | 46.9888 | 142.0 | | 0.0153 | 30.0 | 3960 | 2.2312 | 46.7468 | 26.4272 | 30.1175 | 43.9132 | 142.0 | | 0.0096 | 31.0 | 4092 | 2.2287 | 47.558 | 26.3933 | 29.9122 | 44.5752 | 142.0 | | 0.0096 | 32.0 | 4224 | 2.2324 | 46.462 | 25.9506 | 29.4584 | 44.1863 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
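A minimal summarization sketch for this checkpoint (not part of the original card; the input text is a placeholder and the `max_length` of 142 simply mirrors the generation length reported above):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-large-cnn-finetuned-roundup-32")

text = "Replace this placeholder with the round-up text to be summarised."
print(summarizer(text, max_length=142, do_sample=False)[0]["summary_text"])
```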
ShreyaR/finetuned-distil-bert-depression
ShreyaR
2022-05-03T20:44:08Z
6
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-16T13:54:51Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuned-distil-bert-depression results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-distil-bert-depression This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1695 - Accuracy: 0.9445 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0243 | 1.0 | 625 | 0.2303 | 0.9205 | | 0.0341 | 2.0 | 1250 | 0.1541 | 0.933 | | 0.0244 | 3.0 | 1875 | 0.1495 | 0.9445 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
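A minimal inference sketch for this classifier (not part of the original card; the example sentence is illustrative, and the card does not document the label names, so verify the class meanings via `model.config.id2label` before interpreting them):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ShreyaR/finetuned-distil-bert-depression")
model = AutoModelForSequenceClassification.from_pretrained("ShreyaR/finetuned-distil-bert-depression")

inputs = tokenizer("I can't find the energy to get out of bed these days.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
# Check model.config.id2label before interpreting the probabilities.
print(probs)
```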
SebastianS/distilbert-base-uncased-finetuned-imdb
SebastianS
2022-05-03T20:42:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-03T19:56:43Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0122 - eval_runtime: 27.9861 - eval_samples_per_second: 35.732 - eval_steps_per_second: 0.572 - epoch: 2.13 - step: 334 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
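A minimal fill-mask sketch for this checkpoint (not part of the original card; the example sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SebastianS/distilbert-base-uncased-finetuned-imdb")

for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```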
hbruce11216/distilbert-base-uncased-finetuned-OTTO
hbruce11216
2022-05-03T18:51:50Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-26T14:54:45Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-OTTO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-OTTO This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7687 | 1.0 | 17 | 3.3507 | | 3.5069 | 2.0 | 34 | 3.2786 | | 3.4126 | 3.0 | 51 | 3.2575 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
huggingtweets/wojespn
huggingtweets
2022-05-03T18:45:11Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/wojespn/1651603295184/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509990164415893517/qIuzsMq6_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Adrian Wojnarowski</div> <div style="text-align: center; font-size: 14px;">@wojespn</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Adrian Wojnarowski. | Data | Adrian Wojnarowski | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 699 | | Short tweets | 46 | | Tweets kept | 2505 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kc1af3t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wojespn's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3d9r0f0h) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3d9r0f0h/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/wojespn') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
theojolliffe/bart-large-cnn-finetuned-roundup-8
theojolliffe
2022-05-03T18:12:19Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T17:16:58Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-8 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4519 - Rouge1: 49.5671 - Rouge2: 27.0118 - Rougel: 30.8538 - Rougelsum: 45.5503 - Gen Len: 141.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 132 | 1.3159 | 48.5275 | 28.0817 | 30.6646 | 45.5024 | 142.0 | | No log | 2.0 | 264 | 1.2377 | 47.0791 | 27.4386 | 28.9458 | 44.1536 | 142.0 | | No log | 3.0 | 396 | 1.2474 | 49.3567 | 29.5904 | 30.8029 | 46.6083 | 142.0 | | 0.9623 | 4.0 | 528 | 1.2914 | 47.8795 | 27.0611 | 29.8538 | 44.4494 | 142.0 | | 0.9623 | 5.0 | 660 | 1.2982 | 49.9921 | 28.4839 | 31.5688 | 46.9734 | 142.0 | | 0.9623 | 6.0 | 792 | 1.3521 | 46.7269 | 25.8672 | 29.7325 | 43.8279 | 142.0 | | 0.9623 | 7.0 | 924 | 1.4102 | 47.4995 | 26.0066 | 29.4342 | 44.1102 | 141.8 | | 0.3734 | 8.0 | 1056 | 1.4519 | 49.5671 | 27.0118 | 30.8538 | 45.5503 | 141.75 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
laituan245/molt5-base-smiles2caption
laituan245
2022-05-03T18:07:57Z
617
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2204.11817", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T17:12:55Z
--- license: apache-2.0 --- This model can be used to generate a natural-language caption from an input SMILES string. ## Example Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-base-smiles2caption", model_max_length=512) model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base-smiles2caption') input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O' input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids, num_beams=5, max_length=512) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Paper For more information, please take a look at our paper. Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817) Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
laituan245/molt5-small
laituan245
2022-05-03T18:07:24Z
178
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2204.11817", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T17:45:46Z
--- license: apache-2.0 --- ## Example Usage ```python from transformers import AutoTokenizer, T5ForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-small", model_max_length=512) model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small') ``` ## Paper For more information, please take a look at our paper. Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817) Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
TehranNLP-org/electra-base-mnli
TehranNLP-org
2022-05-03T17:01:07Z
5
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-30T12:50:13Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SEED0042 results: - task: name: Text Classification type: text-classification dataset: name: MNLI type: '' args: mnli metrics: - name: Accuracy type: accuracy value: 0.8879266428935303 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SEED0042 This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4265 - Accuracy: 0.8879 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: not_parallel - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3762 | 1.0 | 12272 | 0.3312 | 0.8794 | | 0.2542 | 2.0 | 24544 | 0.3467 | 0.8843 | | 0.1503 | 3.0 | 36816 | 0.4265 | 0.8879 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.11.6
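A minimal NLI inference sketch for this checkpoint (not part of the original card; the premise/hypothesis pair is illustrative, and the id-to-label order is read from the config rather than assumed):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("TehranNLP-org/electra-base-mnli")
model = AutoModelForSequenceClassification.from_pretrained("TehranNLP-org/electra-base-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Read the label order from the model config instead of assuming entailment/neutral/contradiction positions.
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```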
TehranNLP-org/electra-base-hateXplain
TehranNLP-org
2022-05-03T17:00:31Z
5
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-30T12:51:26Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SEED0042 results: - task: name: Text Classification type: text-classification dataset: name: HATEXPLAIN type: '' args: hatexplain metrics: - name: Accuracy type: accuracy value: 0.4162330905306972 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SEED0042 This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the HATEXPLAIN dataset. It achieves the following results on the evaluation set: - Loss: 0.7667 - Accuracy: 0.4162 - Accuracy 0: 0.8145 - Accuracy 1: 0.1895 - Accuracy 2: 0.3084 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: not_parallel - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Accuracy 0 | Accuracy 1 | Accuracy 2 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:----------:|:----------:| | No log | 1.0 | 481 | 0.7431 | 0.4152 | 0.7707 | 0.1805 | 0.3650 | | No log | 2.0 | 962 | 0.7346 | 0.4152 | 0.8010 | 0.2190 | 0.2774 | | No log | 3.0 | 1443 | 0.7667 | 0.4162 | 0.8145 | 0.1895 | 0.3084 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.11.6
theojolliffe/bart-large-cnn-finetuned-roundup-4
theojolliffe
2022-05-03T16:58:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T16:09:59Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-4 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2573 - Rouge1: 49.0193 - Rouge2: 28.6311 - Rougel: 31.3363 - Rougelsum: 46.1408 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 132 | 1.3178 | 48.4526 | 28.6361 | 30.2875 | 45.4822 | 142.0 | | No log | 2.0 | 264 | 1.2404 | 48.139 | 28.2459 | 29.3584 | 45.0785 | 142.0 | | No log | 3.0 | 396 | 1.2389 | 49.74 | 29.7834 | 33.143 | 46.8147 | 142.0 | | 0.9855 | 4.0 | 528 | 1.2573 | 49.0193 | 28.6311 | 31.3363 | 46.1408 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
mrm8488/data2vec-text-base-finetuned-stsb
mrm8488
2022-05-03T16:28:24Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "data2vec-text", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-03T15:51:59Z
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: data2vec-text-base-finetuned-stsb results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8716633516590501 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # data2vec-text-base-finetuned-stsb This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5530 - Pearson: 0.8732 - Spearmanr: 0.8717 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.725353773731373e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 5 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | No log | 1.0 | 180 | 1.0650 | 0.8102 | 0.8380 | | No log | 2.0 | 360 | 0.6211 | 0.8524 | 0.8497 | | 0.9312 | 3.0 | 540 | 0.5917 | 0.8640 | 0.8642 | | 0.9312 | 4.0 | 720 | 0.5672 | 0.8695 | 0.8686 | | 0.9312 | 5.0 | 900 | 0.5530 | 0.8732 | 0.8717 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
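A minimal similarity-scoring sketch for this checkpoint (not part of the original card; the sentence pair is illustrative, and treating the single logit as an STS-B-style 0-5 similarity score is an assumption based on the task):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mrm8488/data2vec-text-base-finetuned-stsb")
model = AutoModelForSequenceClassification.from_pretrained("mrm8488/data2vec-text-base-finetuned-stsb")

inputs = tokenizer("A man is playing a guitar.", "A person is playing an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # STS-B is a regression task, so this is a similarity score rather than a class
```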
pietrolesci/t5v1_1-base-mnli
pietrolesci
2022-05-03T14:53:23Z
4
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T14:50:42Z
## Overview T5-Base v1.1 model trained to generate hypotheses given a premise and a label. Below the settings used to train it ```yaml Experiment configurations ├── datasets │ └── mnli_train: │ dataset_name: multi_nli │ dataset_config_name: null │ cache_dir: null │ input_fields: │ - premise │ - hypothesis │ target_field: label │ train_subset_names: null │ val_subset_names: validation_matched │ test_subset_names: none │ train_val_split: null │ limit_train_samples: null │ limit_val_samples: null │ limit_test_samples: null │ sampling_kwargs: │ sampling_strategy: random │ seed: 42 │ replace: false │ align_labels_with_mapping: null │ avoid_consistency_check: false │ predict_label_mapping: null │ mnli: │ dataset_name: multi_nli │ dataset_config_name: null │ cache_dir: null │ input_fields: │ - premise │ - hypothesis │ target_field: label │ train_subset_names: none │ val_subset_names: none │ test_subset_names: validation_mismatched │ train_val_split: null │ limit_train_samples: null │ limit_val_samples: null │ limit_test_samples: null │ sampling_kwargs: │ sampling_strategy: random │ seed: 42 │ replace: false │ align_labels_with_mapping: null │ avoid_consistency_check: false │ predict_label_mapping: null │ ├── data │ └── _target_: src.task.nli.data.NLIGenerationData.from_config │ main_dataset_name: null │ use_additional_as_test: null │ dataloader: │ batch_size: 64 │ eval_batch_size: 100 │ num_workers: 16 │ pin_memory: true │ drop_last: false │ persistent_workers: false │ shuffle: true │ seed_dataloader: 42 │ replacement: false │ processing: │ preprocessing_num_workers: 16 │ preprocessing_batch_size: 1000 │ load_from_cache_file: true │ padding: longest │ truncation: longest_first │ max_source_length: 128 │ max_target_length: 128 │ template: 'premise: $premise $label hypothesis: ' │ tokenizer: │ _target_: transformers.AutoTokenizer.from_pretrained │ pretrained_model_name_or_path: google/t5-v1_1-base │ use_fast: true │ ├── task │ └── optimizer: │ name: Adafactor │ lr: 0.001 │ weight_decay: 0.0 │ no_decay: │ - bias │ - LayerNorm.weight │ decay_rate: -0.8 │ clip_threshold: 1.0 │ relative_step: false │ scale_parameter: false │ warmup_init: false │ scheduler: │ name: constant_schedule │ model: │ model_name_or_path: google/t5-v1_1-base │ checkpoint_path: null │ freeze: false │ seed_init_weight: 42 │ _target_: src.task.nli.NLIGenerationTask.from_config │ generation: │ max_length: 128 │ min_length: 3 │ do_sample: true │ early_stopping: false │ num_beams: 1 │ temperature: 1.0 │ top_k: 50 │ top_p: 0.95 │ repetition_penalty: null │ length_penalty: null │ no_repeat_ngram_size: null │ encoder_no_repeat_ngram_size: null │ num_return_sequences: 1 │ max_time: null │ max_new_tokens: null │ decoder_start_token_id: null │ use_cache: null │ num_beam_groups: null │ diversity_penalty: null │ ├── trainer │ └── _target_: pytorch_lightning.Trainer │ callbacks: │ lr_monitor: │ _target_: pytorch_lightning.callbacks.LearningRateMonitor │ logging_interval: step │ log_momentum: false │ model_checkpoint: │ _target_: pytorch_lightning.callbacks.ModelCheckpoint │ dirpath: ./checkpoints/ │ filename: nli_generator_mnli-epoch={epoch:02d}-val_loss={val/aggregated_loss:.2f} │ monitor: val/aggregated_loss │ mode: min │ verbose: false │ save_last: true │ save_top_k: 1 │ auto_insert_metric_name: false │ save_on_train_epoch_end: false │ rich_model_summary: │ _target_: pytorch_lightning.callbacks.RichModelSummary │ max_depth: 1 │ log_grad_norm: │ _target_: src.core.callbacks.LogGradNorm │ norm_type: 2 │ group_separator: / │ only_total: true │ 
on_step: true │ on_epoch: false │ prog_bar: true │ log_generated_text: │ _target_: src.core.callbacks.GenerateAndLogText │ dirpath: ./generated_text │ type: generated_text │ pop_keys_after_logging: true │ on_train: false │ on_validation: false │ on_test: true │ log_to_wandb: true │ wandb_log_dataset_sizes: │ _target_: src.core.callbacks.WandbLogDatasetSizes │ logger: │ wandb: │ _target_: pytorch_lightning.loggers.WandbLogger │ project: nli_debiasing │ entity: team_brushino │ name: nli_generator_mnli │ save_dir: ./ │ offline: false │ log_model: false │ group: mnli │ job_type: generator │ tags: │ - nli_generator_mnli │ - seed=42 │ - seed_dataloader=42 │ notes: nli_generator_mnli_time=02-24-53 │ enable_checkpointing: true │ enable_progress_bar: true │ enable_model_summary: true │ gradient_clip_val: 0.0 │ gradient_clip_algorithm: null │ accelerator: gpu │ devices: auto │ gpus: null │ auto_select_gpus: true │ accumulate_grad_batches: 1 │ max_epochs: 3 │ min_epochs: 1 │ max_steps: -1 │ min_steps: null │ max_time: null │ num_sanity_val_steps: 2 │ overfit_batches: 0.0 │ fast_dev_run: false │ limit_train_batches: 1.0 │ limit_val_batches: 1.0 │ limit_test_batches: 1.0 │ profiler: null │ detect_anomaly: false │ deterministic: false │ check_val_every_n_epoch: 1 │ val_check_interval: 0.1 │ log_every_n_steps: 10 │ move_metrics_to_cpu: false │ └── training └── run_val_before_fit: false run_val_after_fit: false run_test_before_fit: false run_test_after_fit: true lr: 0.001 seed: 42 show_batch: false batch_size: 64 eval_batch_size: 100 num_workers: 16 pin_memory: true drop_last: false persistent_workers: false shuffle: true seed_dataloader: 42 ignore_warnings: true experiment_name: nli_generator_mnli ```
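A minimal hypothesis-generation sketch for this checkpoint (not part of the original card). The tokenizer name, prompt template and sampling settings are taken from the configuration above; how the label should be verbalised in the prompt (e.g. "entailment" versus an integer id) is an assumption:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("pietrolesci/t5v1_1-base-mnli")

# Training template: 'premise: $premise $label hypothesis: '
prompt = "premise: A man is cooking dinner in a small kitchen. entailment hypothesis: "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_k=50, top_p=0.95, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```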
UWB-AIR/Czert-B-base-cased-long-zero-shot
UWB-AIR
2022-05-03T13:49:35Z
13
2
transformers
[ "transformers", "pytorch", "longformer", "feature-extraction", "cs", "fill-mask", "arxiv:2103.13031", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - cs - fill-mask --- # CZERT This repository contains the trained Czert-B-base-cased-long-zero-shot model for the paper [Czert – Czech BERT-like Model for Language Representation](https://arxiv.org/abs/2103.13031). For more information, see the paper. This is the long version of Czert-B-base-cased, created without any fine-tuning on long documents. Positional embeddings were created by simply repeating the positional embeddings of the original Czert-B model. For tokenization, please use BertTokenizer; the model cannot be used with AutoTokenizer. ## Available Models You can download **MLM & NSP only** pretrained models ~~[CZERT-A-v1](https://air.kiv.zcu.cz/public/CZERT-A-czert-albert-base-uncased.zip) [CZERT-B-v1](https://air.kiv.zcu.cz/public/CZERT-B-czert-bert-base-cased.zip)~~ After some additional experiments, we found out that the tokenizers config was exported wrongly. In Czert-B-v1, the tokenizer parameter "do_lower_case" was wrongly set to true. In Czert-A-v1 the parameter "strip_accents" was incorrectly set to true. Both mistakes are repaired in v2. [CZERT-A-v2](https://air.kiv.zcu.cz/public/CZERT-A-v2-czert-albert-base-uncased.zip) [CZERT-B-v2](https://air.kiv.zcu.cz/public/CZERT-B-v2-czert-bert-base-cased.zip) or choose from one of **Finetuned Models** | | Models | | - | - | | Sentiment Classification<br> (Facebook or CSFD) | [CZERT-A-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-A_fb.zip) <br> [CZERT-B-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-B_fb.zip) <br> [CZERT-A-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-A_csfd.zip) <br> [CZERT-B-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-B_csfd.zip) | Semantic Text Similarity <br> (Czech News Agency) | [CZERT-A-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-A-sts-CNA.zip) <br> [CZERT-B-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-B-sts-CNA.zip) | Named Entity Recognition | [CZERT-A-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-A-ner-CNEC-cased.zip) <br> [CZERT-B-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-B-ner-CNEC-cased.zip) <br>[PAV-ner-CNEC](https://air.kiv.zcu.cz/public/PAV-ner-CNEC-cased.zip) <br> [CZERT-A-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-A-ner-BSNLP-cased.zip)<br>[CZERT-B-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-B-ner-BSNLP-cased.zip) <br>[PAV-ner-BSNLP](https://air.kiv.zcu.cz/public/PAV-ner-BSNLP-cased.zip) | | Morphological Tagging<br> | [CZERT-A-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-A-morphtag-126k-cased.zip)<br>[CZERT-B-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-B-morphtag-126k-cased.zip) | | Semantic Role Labelling |[CZERT-A-srl](https://air.kiv.zcu.cz/public/CZERT-A-srl-cased.zip)<br> [CZERT-B-srl](https://air.kiv.zcu.cz/public/CZERT-B-srl-cased.zip) | ## How to Use CZERT? ### Sentence Level Tasks We evaluate our model on two sentence level tasks: * Sentiment Classification, * Semantic Text Similarity. <!-- tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False) model = TFAlbertForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, num_labels=1) or self.tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False) self.model_encoder = AutoModelForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, from_tf=True) --> ### Document Level Tasks We evaluate our model on one document level task: * Multi-label Document Classification. ### Token Level Tasks We evaluate our model on three token level tasks: * Named Entity Recognition, * Morphological Tagging, * Semantic Role Labelling.
## Downstream Tasks Fine-tuning Results ### Sentiment Classification | | mBERT | SlavicBERT | ALBERT-r | Czert-A | Czert-B | |:----:|:------------------------:|:------------------------:|:------------------------:|:-----------------------:|:--------------------------------:| | FB | 71.72 ± 0.91 | 73.87 ± 0.50 | 59.50 ± 0.47 | 72.47 ± 0.72 | **76.55** ± **0.14** | | CSFD | 82.80 ± 0.14 | 82.51 ± 0.14 | 75.40 ± 0.18 | 79.58 ± 0.46 | **84.79** ± **0.26** | Average F1 results for the Sentiment Classification task. For more information, see [the paper](https://arxiv.org/abs/2103.13031). ### Semantic Text Similarity | | **mBERT** | **Pavlov** | **Albert-random** | **Czert-A** | **Czert-B** | |:-------------|:--------------:|:--------------:|:-----------------:|:--------------:|:----------------------:| | STA-CNA | 83.335 ± 0.063 | 83.593 ± 0.050 | 43.184 ± 0.125 | 82.942 ± 0.106 | **84.345** ± **0.028** | | STS-SVOB-img | 79.367 ± 0.486 | 79.900 ± 0.810 | 15.739 ± 2.992 | 79.444 ± 0.338 | **83.744** ± **0.395** | | STS-SVOB-hl | 78.833 ± 0.296 | 76.996 ± 0.305 | 33.949 ± 1.807 | 75.089 ± 0.806 | **79.827 ± 0.469** | Comparison of Pearson correlation achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on semantic text similarity. For more information see [the paper](https://arxiv.org/abs/2103.13031). ### Multi-label Document Classification | | mBERT | SlavicBERT | ALBERT-r | Czert-A | Czert-B | |:-----:|:------------:|:------------:|:------------:|:------------:|:-------------------:| | AUROC | 97.62 ± 0.08 | 97.80 ± 0.06 | 94.35 ± 0.13 | 97.49 ± 0.07 | **98.00** ± **0.04** | | F1 | 83.04 ± 0.16 | 84.08 ± 0.14 | 72.44 ± 0.22 | 82.27 ± 0.17 | **85.06** ± **0.11** | Comparison of F1 and AUROC score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on multi-label document classification. For more information see [the paper](https://arxiv.org/abs/2103.13031). ### Morphological Tagging | | mBERT | Pavlov | Albert-random | Czert-A | Czert-B | |:-----------------------|:---------------|:---------------|:---------------|:---------------|:---------------| | Universal Dependencies | 99.176 ± 0.006 | 99.211 ± 0.008 | 96.590 ± 0.096 | 98.713 ± 0.008 | **99.300 ± 0.009** | Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on morphological tagging task. For more information see [the paper](https://arxiv.org/abs/2103.13031). ### Semantic Role Labelling <div id="tab:SRL"> | | mBERT | Pavlov | Albert-random | Czert-A | Czert-B | dep-based | gold-dep | |:------:|:----------:|:----------:|:-------------:|:----------:|:----------:|:---------:|:--------:| | span | 78.547 ± 0.110 | 79.333 ± 0.080 | 51.365 ± 0.423 | 72.254 ± 0.172 | **81.861 ± 0.102** | \- | \- | | syntax | 90.226 ± 0.224 | 90.492 ± 0.040 | 80.747 ± 0.131 | 80.319 ± 0.054 | **91.462 ± 0.062** | 85.19 | 89.52 | SRL results – dep columns are evaluate with labelled F1 from CoNLL 2009 evaluation script, other columns are evaluated with span F1 score same as it was used for NER evaluation. For more information see [the paper](https://arxiv.org/abs/2103.13031). 
</div> ### Named Entity Recognition | | mBERT | Pavlov | Albert-random | Czert-A | Czert-B | |:-----------|:---------------|:---------------|:---------------|:---------------|:---------------| | CNEC | **86.225 ± 0.208** | **86.565 ± 0.198** | 34.635 ± 0.343 | 72.945 ± 0.227 | 86.274 ± 0.116 | | BSNLP 2019 | 84.006 ± 1.248 | **86.699 ± 0.370** | 19.773 ± 0.938 | 48.859 ± 0.605 | **86.729 ± 0.344** | Comparison of f1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on named entity recognition task. For more information see [the paper](https://arxiv.org/abs/2103.13031). ## Licence This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/ ## How should I cite CZERT? For now, please cite [the Arxiv paper](https://arxiv.org/abs/2103.13031): ``` @article{sido2021czert, title={Czert -- Czech BERT-like Model for Language Representation}, author={Jakub Sido and Ondřej Pražák and Pavel Přibáň and Jan Pašek and Michal Seják and Miloslav Konopík}, year={2021}, eprint={2103.13031}, archivePrefix={arXiv}, primaryClass={cs.CL}, journal={arXiv preprint arXiv:2103.13031}, } ```
PoloHuggingface/French_grammar_error_corrector
PoloHuggingface
2022-05-03T13:32:40Z
102
6
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "text2text generation", "fr", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-02T15:45:49Z
--- content: language: - fr tags: - text2text generation widget: - text: "improve grammar: Elle ne peux jamais aller au cinéma avec son amis" example_title: "Grammar correction" --- # Fine-tuned T5 on the French part of Lang-8 to automatically correct sentences. Since the Lang-8 dataset contains really short sentences, the model does not generalize well to sentences longer than 10 words. I'll soon upload the cleaned dataset that I used for training.
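A minimal usage sketch, assuming the standard `transformers` text2text-generation pipeline; the `improve grammar:` prefix is taken from the widget example above:

```python
from transformers import pipeline

# Load the corrector as a text2text-generation pipeline.
corrector = pipeline(
    "text2text-generation",
    model="PoloHuggingface/French_grammar_error_corrector",
)

# The "improve grammar:" prefix mirrors the widget example in this card.
print(corrector("improve grammar: Elle ne peux jamais aller au cinéma avec son amis"))
```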
sanchit-gandhi/flax-wav2vec2-2-bart-large-960h
sanchit-gandhi
2022-05-03T12:24:52Z
3
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-03T12:07:42Z
2.5% WER on dev.clean: https://wandb.ai/sanchit-gandhi/flax-wav2vec2-2-bart-large-960h/runs/2lhazd5v
Tobias/bert-base-uncased_English_Hotel_classification
Tobias
2022-05-03T11:46:32Z
7
1
transformers
[ "transformers", "tf", "bert", "text-classification", "eng", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-03T11:33:24Z
--- language: eng tags: - bert license: apache-2.0 widget: - text: "The hotel is very nicely located" example_title: "Example 1" - text: "The reception staff were extremely helpful and very welcoming" example_title: "Example 2" - text: "There is no balcony in the rooms on the mountain side" example_title: "Example 3" - text: "A bit pricey" example_title: "Example 4" --- # English Hotel Review Classification A model trained on English hotel reviews from Switzerland. The base model is [bert-base-uncased](https://huggingface.co/bert-base-uncased). The last hidden layer of the base model was extracted and a classification layer was added. The entire model was then trained for 5 epochs on our dataset. # Model Performance | Classes | Precision | Recall | F1 Score | | :--- | :---: | :---: |:---: | | Room | 77.78% | 77.78% | 77.78% | | Location | 95.45% | 95.45% | 95.45% | | Staff | 75.00% | 93.75% | 83.33% | | Unknown | 71.43% | 50.00% | 58.82% | | HotelOrganisation | 27.27% | 30.00% | 28.57% | | Food | 87.50% | 87.50% | 87.50% | | ReasonForStay | 63.64% | 58.33% | 60.87%| | GeneralUtility | 66.67% | 50.00% | 66.67% | | Accuracy | | | 74.00% | | Macro Average | 70.59%| 67.85% | 68.68% | | Weighted Average | 74.17% | 74.00% | 73.66% | ## Confusion Matrix ![Confusion Matrix](bert-base-uncased_English_classification.jpg)
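A minimal usage sketch, assuming the standard text-classification pipeline and that TensorFlow is installed (this repository ships TF weights):

```python
from transformers import pipeline

# Classify a review sentence into the hotel-review classes listed in the table above.
classifier = pipeline(
    "text-classification",
    model="Tobias/bert-base-uncased_English_Hotel_classification",
)

print(classifier("The reception staff were extremely helpful and very welcoming"))
```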
lucaordronneau/twitter-roberta-base-sentiment-latest-finetuned-FG-SINGLE_SENTENCE-NEWS
lucaordronneau
2022-05-03T11:29:22Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-13T12:29:56Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: twitter-roberta-base-sentiment-latest-finetuned-FG-SINGLE_SENTENCE-NEWS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-sentiment-latest-finetuned-FG-SINGLE_SENTENCE-NEWS This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2822 - Accuracy: 0.6305 - F1: 0.6250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 321 | 0.9646 | 0.5624 | 0.4048 | | 0.9537 | 2.0 | 642 | 0.9474 | 0.5644 | 0.4176 | | 0.9537 | 3.0 | 963 | 0.9008 | 0.5903 | 0.5240 | | 0.858 | 4.0 | 1284 | 0.9939 | 0.5999 | 0.5846 | | 0.5908 | 5.0 | 1605 | 1.0947 | 0.6108 | 0.6026 | | 0.5908 | 6.0 | 1926 | 1.2507 | 0.5740 | 0.5823 | | 0.3676 | 7.0 | 2247 | 1.4717 | 0.6128 | 0.6017 | | 0.2246 | 8.0 | 2568 | 1.6726 | 0.5965 | 0.6003 | | 0.2246 | 9.0 | 2889 | 1.8041 | 0.6380 | 0.6298 | | 0.1468 | 10.0 | 3210 | 1.9796 | 0.6053 | 0.6026 | | 0.1161 | 11.0 | 3531 | 2.0988 | 0.6237 | 0.6202 | | 0.1161 | 12.0 | 3852 | 2.4171 | 0.5944 | 0.5989 | | 0.0916 | 13.0 | 4173 | 2.3326 | 0.6374 | 0.6288 | | 0.0916 | 14.0 | 4494 | 2.5472 | 0.6360 | 0.6245 | | 0.0661 | 15.0 | 4815 | 2.9127 | 0.6176 | 0.6187 | | 0.0454 | 16.0 | 5136 | 2.9133 | 0.6326 | 0.6276 | | 0.0454 | 17.0 | 5457 | 3.1299 | 0.6210 | 0.6162 | | 0.0337 | 18.0 | 5778 | 3.1828 | 0.6224 | 0.6188 | | 0.0223 | 19.0 | 6099 | 3.2655 | 0.6299 | 0.6223 | | 0.0223 | 20.0 | 6420 | 3.2822 | 0.6305 | 0.6250 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
anton-l/xtreme_s_xlsr_300m_voxpopuli_en
anton-l
2022-05-03T09:55:15Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "voxpopuli", "google/xtreme_s", "generated_from_trainer", "en", "dataset:google/xtreme_s", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-29T18:58:24Z
--- language: - en license: apache-2.0 tags: - voxpopuli - google/xtreme_s - generated_from_trainer datasets: - google/xtreme_s model-index: - name: xtreme_s_xlsr_300m_voxpopuli_en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtreme_s_xlsr_300m_voxpopuli_en This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - VOXPOPULI.EN dataset. It achieves the following results on the evaluation set: - Cer: 0.0966 - Loss: 0.3127 - Wer: 0.1549 - Predict Samples: 1842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 1.4221 | 0.19 | 500 | 1.3325 | 0.8224 | 0.3432 | | 0.8429 | 0.38 | 1000 | 0.7087 | 0.5028 | 0.2023 | | 0.7377 | 0.57 | 1500 | 0.4900 | 0.2778 | 0.1339 | | 0.5641 | 0.77 | 2000 | 0.4460 | 0.2540 | 0.1284 | | 0.5787 | 0.96 | 2500 | 0.4242 | 0.2148 | 0.1167 | | 0.3465 | 1.15 | 3000 | 0.4210 | 0.2087 | 0.1154 | | 0.2787 | 1.34 | 3500 | 0.3954 | 0.2090 | 0.1155 | | 0.2775 | 1.53 | 4000 | 0.3938 | 0.1992 | 0.1133 | | 0.262 | 1.72 | 4500 | 0.3748 | 0.2104 | 0.1151 | | 0.3138 | 1.92 | 5000 | 0.3825 | 0.1993 | 0.1134 | | 0.4331 | 2.11 | 5500 | 0.3648 | 0.1935 | 0.1104 | | 0.3802 | 2.3 | 6000 | 0.3966 | 0.1910 | 0.1109 | | 0.3928 | 2.49 | 6500 | 0.3995 | 0.1898 | 0.1100 | | 0.3441 | 2.68 | 7000 | 0.3764 | 0.1887 | 0.1103 | | 0.3673 | 2.87 | 7500 | 0.3800 | 0.1843 | 0.1086 | | 0.3422 | 3.07 | 8000 | 0.3932 | 0.1830 | 0.1092 | | 0.2933 | 3.26 | 8500 | 0.3672 | 0.1915 | 0.1104 | | 0.1785 | 3.45 | 9000 | 0.3820 | 0.1796 | 0.1072 | | 0.321 | 3.64 | 9500 | 0.3533 | 0.1994 | 0.1126 | | 0.1673 | 3.83 | 10000 | 0.3683 | 0.1856 | 0.1084 | | 0.1757 | 4.02 | 10500 | 0.3365 | 0.1925 | 0.1102 | | 0.1881 | 4.22 | 11000 | 0.3528 | 0.1775 | 0.1066 | | 0.3106 | 4.41 | 11500 | 0.3909 | 0.1754 | 0.1063 | | 0.25 | 4.6 | 12000 | 0.3734 | 0.1723 | 0.1052 | | 0.2005 | 4.79 | 12500 | 0.3358 | 0.1900 | 0.1092 | | 0.2982 | 4.98 | 13000 | 0.3513 | 0.1766 | 0.1060 | | 0.1552 | 5.17 | 13500 | 0.3720 | 0.1729 | 0.1059 | | 0.1645 | 5.37 | 14000 | 0.3569 | 0.1713 | 0.1044 | | 0.2065 | 5.56 | 14500 | 0.3639 | 0.1720 | 0.1048 | | 0.1898 | 5.75 | 15000 | 0.3660 | 0.1726 | 0.1050 | | 0.1397 | 5.94 | 15500 | 0.3731 | 0.1670 | 0.1033 | | 0.2056 | 6.13 | 16000 | 0.3782 | 0.1650 | 0.1030 | | 0.1859 | 6.32 | 16500 | 0.3903 | 0.1667 | 0.1033 | | 0.1374 | 6.52 | 17000 | 0.3721 | 0.1736 | 0.1048 | | 0.2482 | 6.71 | 17500 | 0.3899 | 0.1643 | 0.1023 | | 0.159 | 6.9 | 18000 | 0.3847 | 0.1687 | 0.1032 | | 0.1487 | 7.09 | 18500 | 0.3817 | 0.1671 | 0.1030 | | 0.1942 | 7.28 | 19000 | 0.4120 | 0.1616 | 0.1018 | | 0.1517 | 7.47 | 19500 | 0.3856 | 0.1635 | 0.1020 | | 0.0946 | 7.67 | 20000 | 
0.3838 | 0.1621 | 0.1016 | | 0.1455 | 7.86 | 20500 | 0.3749 | 0.1652 | 0.1020 | | 0.1303 | 8.05 | 21000 | 0.4074 | 0.1615 | 0.1011 | | 0.1207 | 8.24 | 21500 | 0.4121 | 0.1606 | 0.1008 | | 0.0727 | 8.43 | 22000 | 0.3948 | 0.1607 | 0.1009 | | 0.1123 | 8.62 | 22500 | 0.4025 | 0.1603 | 0.1009 | | 0.1606 | 8.82 | 23000 | 0.3963 | 0.1580 | 0.1004 | | 0.1458 | 9.01 | 23500 | 0.3991 | 0.1574 | 0.1002 | | 0.2286 | 9.2 | 24000 | 0.4149 | 0.1596 | 0.1009 | | 0.1284 | 9.39 | 24500 | 0.4251 | 0.1572 | 0.1002 | | 0.1141 | 9.58 | 25000 | 0.4264 | 0.1579 | 0.1002 | | 0.1823 | 9.77 | 25500 | 0.4230 | 0.1562 | 0.0999 | | 0.2514 | 9.97 | 26000 | 0.4242 | 0.1564 | 0.0999 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.1+cu111 - Datasets 1.18.4.dev0 - Tokenizers 0.11.6
jerryKakooza/language-detection-fine-tuned-on-xlm-roberta-base
jerryKakooza
2022-05-03T09:31:18Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:common_language", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T16:45:16Z
--- license: mit tags: - generated_from_trainer datasets: - common_language metrics: - accuracy model-index: - name: language-detection-fine-tuned-on-xlm-roberta-base results: - task: name: Text Classification type: text-classification dataset: name: common_language type: common_language args: full metrics: - name: Accuracy type: accuracy value: 0.9760187824920342 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # language-detection-fine-tuned-on-xlm-roberta-base This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the common_language dataset. It achieves the following results on the evaluation set: - Loss: 0.1642 - Accuracy: 0.9760 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0725 | 1.0 | 22194 | 0.1642 | 0.9760 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
agi-css/distilroberta-base-mrl
agi-css
2022-05-03T09:27:53Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-23T06:28:03Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilroberta-base-mrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrl This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0170 - Accuracy: 0.9967 - F1: 0.9967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.1821851463909416e-05 - train_batch_size: 400 - eval_batch_size: 400 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 48 | 0.0265 | 0.9946 | 0.9946 | | No log | 2.0 | 96 | 0.0180 | 0.9962 | 0.9962 | | No log | 3.0 | 144 | 0.0163 | 0.9962 | 0.9962 | | No log | 4.0 | 192 | 0.0194 | 0.9946 | 0.9946 | | No log | 5.0 | 240 | 0.0193 | 0.9942 | 0.9942 | | No log | 6.0 | 288 | 0.0172 | 0.9967 | 0.9967 | | No log | 7.0 | 336 | 0.0206 | 0.9954 | 0.9954 | | No log | 8.0 | 384 | 0.0183 | 0.9962 | 0.9962 | | No log | 9.0 | 432 | 0.0170 | 0.9967 | 0.9967 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Hate-speech-CNERG/bengali-abusive-MuRIL
Hate-speech-CNERG
2022-05-03T08:50:49Z
33
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "bn", "arxiv:2204.12543", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-24T18:59:53Z
--- language: [bn] license: afl-3.0 --- This model is used for detecting **abusive speech** in **Bengali**. It is fine-tuned from the MuRIL model on a Bengali abusive speech dataset. The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive). LABEL_0 :-> Normal LABEL_1 :-> Abusive ### For more details about our paper Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022. ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @article{das2022data, title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages}, author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2204.12543}, year={2022} } ~~~
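A minimal classification sketch following the label mapping above (an illustrative sketch, not the authors' evaluation code; the input sentence is a placeholder):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "Hate-speech-CNERG/bengali-abusive-MuRIL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# LABEL_0 -> Normal, LABEL_1 -> Abusive (as stated in this card).
text = "এটা একটা উদাহরণ বাক্য"  # placeholder Bengali sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({"Normal": probs[0].item(), "Abusive": probs[1].item()})
```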
Hate-speech-CNERG/urdu-abusive-MuRIL
Hate-speech-CNERG
2022-05-03T08:43:53Z
13
2
transformers
[ "transformers", "pytorch", "bert", "text-classification", "ur", "arxiv:2204.12543", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-25T19:18:21Z
--- language: ur license: afl-3.0 --- This model is used to detect **abusive speech** in **Urdu**. It is fine-tuned from the MuRIL model on an Urdu abusive speech dataset. The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive). LABEL_0 :-> Normal LABEL_1 :-> Abusive ### For more details about our paper Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022. ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @article{das2022data, title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages}, author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2204.12543}, year={2022} } ~~~
alla1101/distilbert-base-uncased-finetuned-emotion
alla1101
2022-05-03T08:11:40Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-03T07:54:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9240869504197766 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2236 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3293 | 0.901 | 0.8979 | | No log | 2.0 | 500 | 0.2236 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
niklaspm/linkbert-large-finetuned-squad
niklaspm
2022-05-03T07:51:30Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "arxiv:2203.15827", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-02T19:06:30Z
--- license: apache-2.0 --- **Exact Match** 86.5 **F1** 92.68 Check out [linkbert-base-finetuned-squad](https://huggingface.co/niklaspm/linkbert-base-finetuned-squad). See the [LinkBERT paper](https://arxiv.org/abs/2203.15827).
niklaspm/linkbert-base-finetuned-squad
niklaspm
2022-05-03T07:50:32Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "arxiv:2203.15827", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-02T08:53:53Z
--- license: apache-2.0 --- **Exact Match** 83.19 **F1** 90.46 Check out [linkbert-large-finetuned-squad](https://huggingface.co/niklaspm/linkbert-large-finetuned-squad), which achieves F1 92.68 and EM 86.5. See the [LinkBERT paper](https://arxiv.org/abs/2203.15827).
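A minimal extractive question-answering sketch using the generic pipeline (the question/context pair below is illustrative only):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="niklaspm/linkbert-base-finetuned-squad")

# Illustrative SQuAD-style query; any question/context pair works.
result = qa(
    question="What does LinkBERT leverage during pretraining?",
    context="LinkBERT is a BERT-like model pretrained with hyperlink information between documents.",
)
print(result["answer"], result["score"])
```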
DioLiu/distilbert-base-uncased-finetuned-sst2-nostop
DioLiu
2022-05-03T06:43:45Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-03T06:31:34Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-sst2-nostop results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2-nostop This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0701 - Accuracy: 0.9888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.125 | 1.0 | 1116 | 0.0975 | 0.9743 | | 0.0599 | 2.0 | 2232 | 0.0692 | 0.9840 | | 0.0191 | 3.0 | 3348 | 0.0570 | 0.9871 | | 0.0109 | 4.0 | 4464 | 0.0660 | 0.9882 | | 0.0092 | 5.0 | 5580 | 0.0701 | 0.9888 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Hate-speech-CNERG/hindi-codemixed-abusive-MuRIL
Hate-speech-CNERG
2022-05-03T06:03:59Z
19
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2204.12543", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-25T05:12:26Z
--- language: hi-en license: afl-3.0 --- This model is used for detecting **abusive speech** in **code-mixed Hindi**. It is fine-tuned from the MuRIL model on a code-mixed Hindi abusive speech dataset. The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive). LABEL_0 :-> Normal LABEL_1 :-> Abusive ### For more details about our paper Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022. ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @article{das2022data, title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages}, author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2204.12543}, year={2022} } ~~~
Nakul24/Spanbert-emotion-extraction
Nakul24
2022-05-03T05:10:03Z
6
1
transformers
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-05-01T17:42:46Z
Enter the name of the emotion in the question field, and the text from which the emotion has to be extracted in the context field. Example 1 - Question: Guilty; Context: I shouted to my mom. Example 2 - Question: Sad; Context: I felt betrayed when my girlfriend kissed another guy even though she was drunk. Note: the model is still at the development stage, so results might be a little strange.
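A minimal sketch of the usage described above, with the emotion name as the question and the text as the context (illustrative only, given the model is still under development):

```python
from transformers import pipeline

extractor = pipeline("question-answering", model="Nakul24/Spanbert-emotion-extraction")

# Example 2 from this card: emotion name in the question, text in the context.
result = extractor(
    question="Sad",
    context="I felt betrayed when my girlfriend kissed another guy even though she was drunk",
)
print(result["answer"])
```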
huggingtweets/lonelythey18
huggingtweets
2022-05-03T05:01:20Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-03T04:59:03Z
--- language: en thumbnail: http://www.huggingtweets.com/lonelythey18/1651554075248/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1488171735174238211/4Y7YAhJG_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Cara</div> <div style="text-align: center; font-size: 14px;">@lonelythey18</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Cara. | Data | Cara | | --- | --- | | Tweets downloaded | 2640 | | Retweets | 301 | | Short tweets | 500 | | Tweets kept | 1839 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3l0t3r5o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lonelythey18's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1znlhqjr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1znlhqjr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/lonelythey18') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BowmanFox/AlliedMasterComputer
BowmanFox
2022-05-03T01:09:04Z
0
0
null
[ "license:other", "region:us" ]
null
2022-03-26T07:14:49Z
--- license: other --- A model trained on a dataset of known dialogue from AM in Harlan Ellison's video game adaptation of "I Have No Mouth, and I Must Scream," alongside the initial quote about hate. The model historically uses DialoGPT; however, it will be updated and/or converted to C1-6B as soon as possible.
kornosk/bert-election2020-twitter-stance-biden
kornosk
2022-05-02T22:59:23Z
135
2
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "twitter", "stance-detection", "election2020", "politics", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT) Pre-trained weights for **f-BERT** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Biden!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Biden is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
kornosk/bert-election2020-twitter-stance-trump-KE-MLM
kornosk
2022-05-02T22:58:49Z
40
1
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "twitter", "stance-detection", "election2020", "politics", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (KE-MLM) Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump-KE-MLM" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Trump!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Trump is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
huggingtweets/usrsistakenhelp
huggingtweets
2022-05-02T22:26:31Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-02T22:25:02Z
--- language: en thumbnail: http://www.huggingtweets.com/usrsistakenhelp/1651530363067/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1520487753896665088/lO1PwH2q_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Rosa - I miss tgamm</div> <div style="text-align: center; font-size: 14px;">@usrsistakenhelp</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Rosa - I miss tgamm. | Data | Rosa - I miss tgamm | | --- | --- | | Tweets downloaded | 3244 | | Retweets | 507 | | Short tweets | 1160 | | Tweets kept | 1577 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jxrwgo01/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usrsistakenhelp's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1z4w7mpe) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1z4w7mpe/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/usrsistakenhelp') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
caush/Clickbait1
caush
2022-05-02T20:36:10Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-26T18:25:39Z
--- license: mit tags: - generated_from_trainer model-index: - name: Clickbait1 results: [] --- # Clickbait1 This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the [Webis-Clickbait-17](https://zenodo.org/record/5530410) dataset. It achieves the following results on the evaluation set: - Loss: 0.0257 ## Model description MiniLM is a distilled model from the paper "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers". We fine-tune this model to estimate (via regression) the clickbait level of news titles. ## Intended uses & limitations The model follows the approach described in the paper [Predicting Clickbait Strength in Online Social Media](https://aclanthology.org/2020.coling-main.425/) by Indurthi Vijayasaradhi, Syed Bakhtiyar, Gupta Manish, Varma Vasudeva. The model was trained on English titles. ## Training and evaluation data We trained the model on the official training data for the challenge (clickbait17-train-170630.zip, 894 MiB, 19538 posts), plus another set that only became available after the end of the challenge (clickbait17-train-170331.zip, 157 MiB, 2459 posts). ## Training procedure Code can be found on [GitHub](https://github.com/caush/Clickbait). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.05 | 50 | 0.0571 | | No log | 0.09 | 100 | 0.0448 | | No log | 0.14 | 150 | 0.0391 | | No log | 0.18 | 200 | 0.0326 | | No log | 0.23 | 250 | 0.0343 | | No log | 0.27 | 300 | 0.0343 | | No log | 0.32 | 350 | 0.0343 | | No log | 0.36 | 400 | 0.0346 | | No log | 0.41 | 450 | 0.0343 | | 0.0388 | 0.46 | 500 | 0.0297 | | 0.0388 | 0.5 | 550 | 0.0293 | | 0.0388 | 0.55 | 600 | 0.0301 | | 0.0388 | 0.59 | 650 | 0.0290 | | 0.0388 | 0.64 | 700 | 0.0326 | | 0.0388 | 0.68 | 750 | 0.0285 | | 0.0388 | 0.73 | 800 | 0.0285 | | 0.0388 | 0.77 | 850 | 0.0275 | | 0.0388 | 0.82 | 900 | 0.0314 | | 0.0388 | 0.87 | 950 | 0.0309 | | 0.0297 | 0.91 | 1000 | 0.0277 | | 0.0297 | 0.96 | 1050 | 0.0281 | | 0.0297 | 1.0 | 1100 | 0.0273 | | 0.0297 | 1.05 | 1150 | 0.0270 | | 0.0297 | 1.09 | 1200 | 0.0291 | | 0.0297 | 1.14 | 1250 | 0.0293 | | 0.0297 | 1.18 | 1300 | 0.0269 | | 0.0297 | 1.23 | 1350 | 0.0276 | | 0.0297 | 1.28 | 1400 | 0.0279 | | 0.0297 | 1.32 | 1450 | 0.0267 | | 0.0265 | 1.37 | 1500 | 0.0270 | | 0.0265 | 1.41 | 1550 | 0.0300 | | 0.0265 | 1.46 | 1600 | 0.0274 | | 0.0265 | 1.5 | 1650 | 0.0274 | | 0.0265 | 1.55 | 1700 | 0.0266 | | 0.0265 | 1.59 | 1750 | 0.0267 | | 0.0265 | 1.64 | 1800 | 0.0267 | | 0.0265 | 1.68 | 1850 | 0.0280 | | 0.0265 | 1.73 | 1900 | 0.0274 | | 0.0265 | 1.78 | 1950 | 0.0272 | | 0.025 | 1.82 | 2000 | 0.0261 | | 0.025 | 1.87 | 2050 | 0.0268 | | 0.025 | 1.91 | 2100 | 0.0268 | | 0.025 | 1.96 | 2150 | 0.0259 | | 0.025 | 2.0 | 2200 | 0.0257 | | 0.025 | 2.05 | 2250 | 0.0260 | | 0.025 | 2.09 | 2300 | 0.0263 | | 0.025 | 2.14 | 2350 | 0.0262 | | 0.025 | 2.19 | 2400 | 0.0269 | | 0.025 | 2.23 | 2450 | 0.0262 | | 0.0223 | 2.28 | 2500 | 0.0262 | | 0.0223 | 2.32 | 2550 | 0.0267 | | 0.0223 | 2.37 | 2600 | 0.0260 | | 0.0223 | 2.41 | 2650 | 0.0260 | | 0.0223 | 2.46 | 2700 | 0.0259 | ### Framework versions - Transformers 4.18.0 - 
Pytorch 1.11.0a0+17540c5 - Datasets 2.1.0 - Tokenizers 0.12.1
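A minimal scoring sketch, assuming the checkpoint exposes a single regression head (as implied by the training objective above); the example headline is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "caush/Clickbait1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Higher scores mean a more clickbait-like title, per the regression objective.
inputs = tokenizer("You won't believe what happened next!", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```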
amirbr/finetuning-sentiment-model-3000-samples
amirbr
2022-05-02T20:06:03Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-30T09:31:11Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T18:36:09Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T18:30:13Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERT_FINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT_FINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0699 - Precision: 0.9942 - Recall: 0.9773 - F1: 0.9857 - Accuracy: 0.9725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 479 | 0.4036 | 0.8333 | 0.9326 | 0.8802 | 0.8054 | | 0.5047 | 2.0 | 958 | 0.3749 | 0.8635 | 0.9339 | 0.8973 | 0.8361 | | 0.3336 | 3.0 | 1437 | 0.3789 | 0.8862 | 0.9184 | 0.9020 | 0.8471 | | 0.2644 | 4.0 | 1916 | 0.4024 | 0.8762 | 0.9171 | 0.8962 | 0.8371 | | 0.2233 | 5.0 | 2395 | 0.4195 | 0.8784 | 0.9171 | 0.8973 | 0.8391 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T18:29:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T18:27:39Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.8119 - Precision: 0.2752 - Recall: 0.9522 - F1: 0.4270 - Accuracy: 0.2849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 166 | 0.0726 | 0.9827 | 1.0 | 0.9913 | 0.9828 | | No log | 2.0 | 332 | 0.0569 | 0.9827 | 1.0 | 0.9913 | 0.9828 | | No log | 3.0 | 498 | 0.0434 | 0.9884 | 1.0 | 0.9942 | 0.9885 | | 0.1021 | 4.0 | 664 | 0.0505 | 0.9884 | 1.0 | 0.9942 | 0.9885 | | 0.1021 | 5.0 | 830 | 0.0472 | 0.9884 | 1.0 | 0.9942 | 0.9885 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
LACAI/roberta-large-adapted-PFG-progression
LACAI
2022-05-02T18:28:47Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T18:09:17Z
--- license: mit --- Base model: [lacai/roberta-large-dialog-narrative](https://huggingface.co/lacai/roberta-large-dialog-narrative) Fine tuned as a progression model (to predict the acceptability of a dialogue) on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019): Given a complete dialogue from (or in the style of) Persuasion For Good, the task is to predict a numeric score typically in the range (-3, 3) where a higher score means a more acceptable dialogue in context of the donation solicitation task. This model inherits a special dialogue token `<d>` from its base model, which indicates the start of a dialogue utterance. **Example input**: `<d>How are you?</s><d>Good! how about yourself?</s><d>Great. Would you like to donate today to help the children?</s>` For more context and usage information see [https://github.rpi.edu/LACAI/dialogue-progression](https://github.rpi.edu/LACAI/dialogue-progression).
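A minimal scoring sketch using the example input above; it assumes the checkpoint's sequence-classification head outputs the single progression score described in this card:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "LACAI/roberta-large-adapted-PFG-progression"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Each utterance is prefixed with the special <d> token, as in the example input above.
dialogue = "<d>How are you?</s><d>Good! how about yourself?</s><d>Great. Would you like to donate today to help the children?</s>"
inputs = tokenizer(dialogue, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # typically in (-3, 3); higher means a more acceptable dialogue
```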
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T18:27:20Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T18:24:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERT_FINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT_FINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0703 - Precision: 0.9667 - Recall: 0.0505 - F1: 0.0961 - Accuracy: 0.0766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 95 | 0.5442 | 0.6667 | 0.1132 | 0.1935 | 0.75 | | No log | 2.0 | 190 | 0.5316 | 0.5385 | 0.1321 | 0.2121 | 0.74 | | No log | 3.0 | 285 | 0.5384 | 0.4615 | 0.2264 | 0.3038 | 0.725 | | No log | 4.0 | 380 | 0.5503 | 0.4286 | 0.2264 | 0.2963 | 0.715 | | No log | 5.0 | 475 | 0.5529 | 0.4286 | 0.2264 | 0.2963 | 0.715 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T18:23:52Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T18:22:28Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7321 - Precision: 0.9795 - Recall: 0.7277 - F1: 0.835 - Accuracy: 0.7208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 130 | 0.3755 | 0.8521 | 0.9910 | 0.9163 | 0.8529 | | No log | 2.0 | 260 | 0.3352 | 0.8875 | 0.9638 | 0.9241 | 0.8713 | | No log | 3.0 | 390 | 0.3370 | 0.8918 | 0.9321 | 0.9115 | 0.8529 | | 0.4338 | 4.0 | 520 | 0.3415 | 0.8957 | 0.9321 | 0.9135 | 0.8566 | | 0.4338 | 5.0 | 650 | 0.3416 | 0.8918 | 0.9321 | 0.9115 | 0.8529 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab3000
hassnain
2022-05-02T17:34:38Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-02T12:25:08Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab3000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab3000 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6852 - eval_wer: 0.3845 - eval_runtime: 71.297 - eval_samples_per_second: 9.846 - eval_steps_per_second: 1.234 - epoch: 24.22 - step: 8500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
wpatena/PB-Chlamy
wpatena
2022-05-02T16:34:01Z
0
0
null
[ "region:us" ]
null
2022-04-12T22:35:19Z
These are files for the trained protein localization prediction model PB-Chlamy, created for the paper **"A Chloroplast Protein Atlas Reveals Novel Structures and Spatial Organization of Biosynthetic Pathways"** by Lianyong Wang, Weronika Patena, Kelly A. Van Baalen, Yihua Xie, Emily R. Singer, Sophia Gavrilenko, Michelle Warren-Williams, Linqu Han, Henry Harrigan, Vivian Chen, Vinh Ton, Saw Kyin, Henry H. Shwe, Matthew H. Cahn, Alexandra Wilson, Jianping Hu, Christoph Benning, Danny J. Schnell, Claire D. McWhite, Martin Jonikas (submitted for publication in May 2022).
espnet/farsi_commonvoice_blstm
espnet
2022-05-02T15:50:24Z
5
3
espnet
[ "espnet", "audio", "automatic-speech-recognition", "fa", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-02T15:49:22Z
--- tags: - espnet - audio - automatic-speech-recognition language: fa datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/farsi_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/farsi_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon May 2 11:48:56 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `716eb8f92e19708acfd08ba3bd39d40890d3a84b` - Commit date: `Thu Apr 28 19:50:59 2022 -0400` ## asr_train_asr_rnn_raw_fa_bpe150_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_fa|9728|68904|0.0|0.0|100.0|0.0|100.0|100.0| |decode_rnn_asr_model_valid.acc.best/test_fa|9728|68904|91.4|7.2|1.4|1.0|9.5|30.1| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_fa|9728|331506|0.0|0.0|100.0|0.0|100.0|100.0| |decode_rnn_asr_model_valid.acc.best/test_fa|9728|331506|97.2|1.3|1.5|0.7|3.6|30.1| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_fa|9728|230963|0.0|0.0|100.0|0.0|100.0|100.0| |decode_rnn_asr_model_valid.acc.best/test_fa|9728|230963|95.9|2.4|1.6|0.7|4.7|30.1| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_rnn_raw_fa_bpe150_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 15 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 30 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_fa_bpe150_sp/train/speech_shape - exp/asr_stats_raw_fa_bpe150_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_fa_bpe150_sp/valid/speech_shape - exp/asr_stats_raw_fa_bpe150_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false 
chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_fa_sp/wav.scp - speech - sound - - dump/raw/train_fa_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_fa/wav.scp - speech - sound - - dump/raw/dev_fa/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ی - ا - ه - ▁ - ر - م - و - د - ت - ش - ن - ل - ▁ب - ز - ب - . - ▁م - ان - ▁ا - س - ک - ▁می - گ - ف - ▁د - ؟ - ق - ▁و - ید - ▁ن - ند - ست - ار - ▁چ - ع - ج - ▁ت - ▁ک - ▁با - خ - ون - ▁پ - ▁به - ▁من - ▁س - ▁را - ، - ▁خ - ▁این - ▁کن - ▁آ - ▁در - ای - ▁از - اد - ▁است - ح - ص - ▁ش - ط - ▁تو - ین - ▁دار - ▁که - ال - ▁رو - ▁گ - ▁ج - ور - ام - ▁هم - ▁ح - فت - رد - یم - پ - غ - چ - ذ - ض - ظ - '!' - ث - ً - ئ - '"' - ژ - ك - آ - ي - ':' - ى - '-' - ِ - أ - َ - » - ـ - ',' - ُ - ( - ) - ء - ٔ - ٬ - ّ - ؛ - B - C - A - E - G - M - S - ؤ - I - ; - T - H - _ - F - D - ۀ - Y - N - K - U - – - ٌ - P - O - Q - Z - '&' - L - R - ة - X - ā - '#' - “ - '=' - « - š - ْ - ے - ” - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/fa_token_list/bpe_unigram150/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_fa_bpe150_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/greek_commonvoice_blstm
espnet
2022-05-02T15:35:07Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "el", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-02T15:34:01Z
--- tags: - espnet - audio - automatic-speech-recognition language: el datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/greek_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/greek_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Apr 17 19:51:46 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b` - Commit date: `Mon Apr 4 21:04:45 2022 -0400` ## asr_train_asr_rnn_tr_raw_el_bpe150_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_el|1681|10574|90.7|7.7|1.6|0.5|9.9|27.4| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_el|1681|61731|96.6|1.5|1.9|0.6|4.0|27.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_el|1681|44869|95.7|2.4|1.9|0.7|5.0|27.5| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn_tr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_rnn_tr_raw_el_bpe150_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 16 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_el_bpe150_sp/train/speech_shape - exp/asr_stats_raw_el_bpe150_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_el_bpe150_sp/valid/speech_shape - exp/asr_stats_raw_el_bpe150_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_el_sp/wav.scp - speech - sound - - dump/raw/train_el_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_el/wav.scp - speech - 
sound - - dump/raw/dev_el/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ▁ - α - ν - ρ - ι - ε - ο - τ - ς - λ - ά - σ - κ - ό - . - ί - ▁π - έ - ω - π - γ - η - μ - υ - ',' - ή - ▁το - χ - θ - ώ - ▁και - ▁του - δ - τα - αν - ει - ▁να - ▁σ - ου - σε - ▁κ - ύ - ού - φ - στ - ρα - ια - ▁μ - ▁δ - ▁έ - τι - β - ρι - μα - πο - εί - ▁φ - ▁με - κα - ▁α - ος - ; - ▁χ - '!' - ▁β - ες - ▁στο - τε - ▁γ - '"' - τη - ▁ο - ▁Π - ▁δε - ▁που - ▁μου - με - ▁τα - σα - λα - Μ - ιά - ▁από - εις - ▁την - έρ - ▁ε - ▁τον - ρά - λο - ▁είπε - ▁μα - ψ - Τ - '''' - Κ - Σ - Ε - Α - Θ - '-' - Η - Ά - Ν - Δ - Χ - ’ - Ξ - » - Π - ΐ - Ώ - Ο - A - O - · - ':' - E - G - H - N - R - T - V - Υ - ϋ - Ψ - ́ - ‘ - Ι - Ί - Ρ - Ω - « - Ύ - Ζ - ϊ - Ή - Φ - Λ - Ό - Γ - Έ - Β - ζ - M - ξ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/el_token_list/bpe_unigram150/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_el_bpe150_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/zh-CN_commonvoice_blstm
espnet
2022-05-02T15:27:21Z
1
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-02T15:24:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: zh-CN datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/zh-CN_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/zh-CN_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Apr 18 13:15:36 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b` - Commit date: `Mon Apr 4 21:04:45 2022 -0400` ## asr_zh_lr_0.1_epoch_50 ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_zh_CN|9698|9698|14.9|85.1|0.0|0.0|85.1|85.1| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_zh_CN|9698|156383|79.4|19.5|1.1|0.8|21.4|85.1| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_zh_CN|9698|166069|80.8|18.2|1.1|0.7|20.0|84.9| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn_zh.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_zh_lr_0.1_epoch_50 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 16 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_zh-CN_bpe4409_sp/train/speech_shape - exp/asr_stats_raw_zh-CN_bpe4409_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_zh-CN_bpe4409_sp/valid/speech_shape - exp/asr_stats_raw_zh-CN_bpe4409_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_zh_CN_sp/wav.scp - speech - sound - - dump/raw/train_zh_CN_sp/text - text - text valid_data_path_and_name_and_type: - - 
dump/raw/dev_zh_CN/wav.scp - speech - sound - - dump/raw/dev_zh_CN/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - 。 - ▁ - 的 - ',' - 是 - 为 - 在 - 人 - 和 - 中 - 大 - 科 - 有 - 尔 - 一 - 斯 - 了 - 种 - 于 - 县 - 与 - 上 - 市 - 地 - 克 - 区 - 不 - 西 - 以 - 属 - 后 - 名 - 会 - 特 - 下 - 州 - 子 - 德 - 年 - 南 - 长 - 国 - 拉 - 多 - 、 - 被 - 里 - 山 - 部 - 出 - 也 - 时 - 来 - 内 - 及 - 学 - 家 - 之 - 东 - 所 - 海 - 到 - 布 - 任 - 高 - 利 - 亚 - 对 - 省 - 分 - 文 - 由 - 体 - 用 - 主 - 北 - 新 - 安 - 生 - 法 - 天 - 公 - 巴 - 马 - 罗 - 者 - 成 - 都 - 而 - 三 - 道 - 十 - 线 - 小 - 原 - 定 - 其 - 站 - 外 - 水 - 行 - 前 - 格 - 性 - 金 - 场 - 作 - 因 - 军 - 曾 - 加 - 代 - 明 - 此 - 林 - 等 - 同 - 王 - 平 - 兰 - 发 - 位 - 最 - 方 - 他 - 将 - 可 - 本 - 基 - 士 - 两 - 城 - 战 - 数 - 路 - 尼 - 日 - 花 - 球 - 氏 - 面 - 维 - 字 - 河 - 纳 - 动 - 合 - 机 - 卡 - 自 - 石 - 过 - 或 - 度 - 达 - 个 - 夫 - 化 - 民 - 力 - 伊 - 次 - 理 - 制 - 比 - 教 - 至 - 古 - 级 - 物 - 太 - 并 - 总 - 光 - 阿 - 正 - 事 - 叶 - 普 - 镇 - 门 - 党 - 车 - 电 - 四 - 色 - 着 - 府 - 瓦 - 开 - 台 - 能 - 建 - 汉 - · - 华 - 入 - 量 - 莱 - 相 - 书 - 女 - 得 - 改 - 六 - 龙 - 重 - 从 - 族 - 首 - 类 - 语 - 星 - 勒 - 该 - 美 - 点 - 还 - 黄 - 流 - 萨 - 雷 - 神 - 无 - 岛 - 塔 - 则 - 波 - 治 - 武 - 向 - 二 - 草 - 实 - 队 - 手 - 元 - 号 - 传 - 全 - 口 - 工 - 师 - 圣 - 说 - 指 - 清 - 形 - 演 - 通 - 型 - 当 - 经 - 设 - 五 - 业 - 院 - 白 - 期 - 就 - 称 - 八 - 官 - 宗 - 母 - 阳 - 郡 - 广 - 保 - 带 - 派 - 图 - 剧 - 各 - 常 - 间 - 运 - 意 - 立 - 局 - 现 - 这 - 交 - 米 - 使 - 福 - 片 - 变 - 表 - 赛 - 好 - 田 - 塞 - 器 - 亦 - 要 - 村 - 头 - 政 - 式 - 吉 - 沙 - 员 - 卫 - 起 - 义 - 乡 - 朝 - 青 - 少 - 木 - 世 - 奥 - 提 - 隆 - 回 - 进 - 产 - 埃 - 观 - 心 - 共 - 毛 - 资 - 列 - 哈 - 七 - 目 - 江 - 校 - 洛 - 湖 - 团 - 信 - 源 - 已 - 选 - 处 - 鲁 - 泰 - 身 - 姆 - 单 - 乐 - 史 - 月 - 又 - 托 - 英 - 角 - 应 - 森 - 解 - 威 - 受 - 职 - 更 - 近 - 兴 - 蛛 - 很 - 商 - 约 - 今 - 复 - 节 - 黑 - 博 - 红 - 程 - 系 - 伯 - 记 - 先 - 接 - 库 - 转 - 川 - 司 - 去 - 画 - 诺 - 周 - 边 - 如 - 百 - 宁 - 未 - 景 - 密 - 那 - 管 - 质 - 反 - 索 - 权 - 知 - 音 - 张 - 条 - 别 - 土 - 鱼 - 曲 - 创 - 强 - 九 - 专 - 集 - 真 - 装 - 室 - 奖 - 爱 - 莫 - 社 - 果 - 放 - 收 - 兵 - 击 - 贝 - 皮 - 空 - 给 - 编 - 堂 - 根 - 朗 - 希 - 报 - 食 - 持 - 耶 - 虫 - 务 - 领 - 气 - 情 - 迪 - 双 - 李 - 组 - 只 - 风 - 铁 - 论 - 历 - 梅 - 皇 - 群 - 嘉 - 甲 - 庆 - 置 - 恩 - 野 - 帝 - 曼 - 升 - 顿 - 苏 - 非 - 宫 - 办 - 帕 - 关 - 园 - 备 - 洲 - 沃 - 声 - 奇 - 录 - 然 - 见 - 支 - 修 - 馆 - 康 - 温 - 哥 - 段 - 视 - 连 - 像 - 随 - 松 - 层 - 评 - 议 - 计 - 环 - 宣 - 极 - 参 - 品 - 丹 - 做 - 几 - 永 - 展 - 件 - 香 - 乌 - 云 - 死 - 每 - 万 - 较 - 素 - 堡 - 蒙 - 藏 - 老 - 令 - 监 - 让 - 影 - 座 - 费 - 速 - 短 - 测 - 造 - 居 - 营 - 状 - 弗 - 蟹 - 结 - 众 - 志 - 介 - 籍 - 导 - 农 - 察 - ':' - 存 - 统 - 调 - 话 - 宋 - 客 - 干 - 楼 - 看 - 射 - 千 - 配 - 推 - 联 - 命 - 网 - 洋 - 即 - 树 - 病 - 往 - 证 - 蒂 - 京 - 伦 - 丁 - 菜 - 寺 - 桥 - 留 - 杨 - 才 - 她 - 象 - 防 - 副 - 考 - 标 - 易 - 韦 - 离 - 异 - 投 - 案 - 男 - 取 - 规 - 算 - 富 - 再 - 初 - 界 - 半 - 尾 - 户 - 深 - 热 - 致 - 尚 - 牛 - 驻 - 刘 - 均 - 守 - 落 - 认 - 直 - 足 - 医 - 父 - 友 - 想 - 摩 - 我 - 引 - 封 - 藤 - 礼 - 感 - 打 - 它 - 丽 - 舰 - 祖 - 螺 - 率 - 整 - 盖 - 章 - 协 - 项 - 第 - 唐 - 孙 - 思 - 步 - 班 - 登 - 排 - 除 - 街 - 注 - 早 - 施 - 游 - 町 - 征 - 》 - 具 - 火 - 季 - 写 - 灵 - 《 - 播 - 昌 - 抗 - 端 - 你 - 决 - 陆 - 欧 - 住 - 杜 - 切 - 卢 - 丰 - 吴 - 胡 - 超 - 远 - 侧 - 左 - 效 - 移 - 越 - 试 - 助 - 泽 - 胜 - 仅 - 杰 - 确 - 旧 - 查 - 范 - 久 - 告 - 旗 - 儿 - 委 - 瑞 - 续 - 竹 - 宝 - 届 - 精 - 样 - 歌 - 典 - 仍 - 房 - 把 - 满 - 麦 - 庄 - 承 - 谢 - 刺 - 末 - 玉 - “ - 亲 - 紫 - 墓 - 佛 - 授 - 料 - 港 - 豆 - 劳 - 雅 - ” - 积 - 酸 - 圆 - 蕨 - 围 - 督 - 霍 - 赫 - 杀 - 望 - 眼 - 服 - 低 - 牙 - 依 - 失 - 词 - 攻 - 核 - 贵 - 构 - 故 - 什 - 停 - 兹 - 陈 - 争 - 态 - 岩 - 荣 - 始 - 艺 - 穆 - 雄 - 背 - 津 - 「 - 许 - 飞 - 限 - 桑 - 顶 - 顺 - 齐 - 容 - 何 - 模 - 谷 - 戈 - 艾 - 检 - 识 - 血 - 弟 - 春 - 优 - 微 - 例 - 航 - 述 - 滨 - 魏 - 岸 - 据 - 术 - 临 - 廷 - 照 - 梁 - 药 - 归 - 斗 - 举 - 划 - 余 - 厂 - 底 - 势 - 阶 - 难 - 」 - 版 - 印 - 邦 - 茨 - 郎 - 夏 - 讲 - 延 - 玛 - 觉 - 陵 - 预 - 店 - 互 
- 唱 - 庙 - 油 - 输 - 快 - 简 - 言 - 蝶 - 茶 - 君 - 沿 - 晋 - 便 - 臣 - 请 - 消 - 域 - 终 - 纪 - 附 - 假 - '?' - 仁 - 价 - 批 - 断 - 钱 - 巨 - 序 - 晚 - 但 - 们 - 拜 - 突 - 御 - 获 - 凯 - 毒 - 溪 - 右 - 屋 - 票 - 耳 - 菲 - 载 - 频 - 鸟 - 舞 - 佩 - 善 - 黎 - 骨 - 散 - 菌 - 孔 - 活 - 护 - 菊 - 兼 - 徽 - 奈 - 亡 - 且 - 警 - 显 - 细 - 拿 - 昂 - 粉 - 良 - 甘 - 麻 - 轻 - 示 - 求 - 差 - 走 - ; - 弹 - 桃 - 轮 - 值 - 柳 - 讯 - 息 - 朱 - 芬 - 按 - 烈 - 破 - 买 - 止 - 害 - 压 - 执 - 婚 - 占 - 赵 - 牌 - 准 - 养 - 降 - 葡 - 贡 - 诗 - 峰 - 没 - 济 - 厅 - 虎 - 鼠 - 舍 - 姓 - 郑 - 冲 - 私 - 购 - 找 - 辖 - 凤 - 绝 - 吸 - 幕 - 船 - 乔 - 蓝 - 遭 - 雪 - 迁 - 旋 - 聚 - 判 - 坦 - 境 - 读 - 技 - 棋 - 肯 - 退 - '!' - 井 - 潮 - 盛 - 财 - 役 - 股 - 脉 - 冈 - 板 - 律 - 独 - 纽 - 申 - 默 - 巡 - 辛 - 杂 - 款 - 齿 - 伤 - 扎 - 若 - 爵 - 控 - 澳 - 泉 - 秀 - 赖 - 乘 - 奏 - 盘 - 逊 - 壳 - 跟 - 翼 - 银 - 采 - 址 - 绍 - 乙 - 尤 - 换 - 瑟 - 症 - 夜 - 审 - 码 - 份 - 殿 - 隶 - 问 - 俄 - 宪 - 追 - 吕 - 锡 - 徐 - 继 - 架 - 珠 - 课 - 靖 - 灰 - 秘 - 秦 - 奉 - 绿 - 完 - 洪 - 辽 - 予 - 吃 - 岁 - 皆 - 刻 - 脑 - 激 - 腹 - 仪 - 纹 - 锦 - 否 - 包 - 莎 - 钟 - 适 - 犹 - 童 - 材 - 秋 - 废 - 坚 - 叙 - 么 - 酒 - 阮 - 崇 - 署 - 肉 - 裂 - 败 - 努 - 虾 - 迹 - 妹 - 徒 - 另 - 些 - 韩 - 顾 - 昆 - 熙 - 丝 - 略 - 补 - 颜 - 尖 - 幼 - 灯 - 粗 - 货 - 盾 - 送 - 遗 - 篮 - 休 - 免 - 怀 - 琴 - 透 - 阵 - 戴 - 援 - 操 - 池 - 缘 - 销 - 险 - 呈 - 释 - 柏 - 缺 - 哲 - 供 - 戏 - 喜 - 练 - 仓 - 味 - 柱 - 炮 - 笔 - 符 - 郭 - 厄 - 待 - 液 - 智 - 佐 - 卷 - 探 - 益 - 亮 - 听 - 莲 - 袭 - 诸 - 绪 - 羊 - 遇 - 饰 - 捕 - 沟 - 扩 - 邻 - 染 - 增 - 娜 - 爆 - 裔 - 充 - 混 - 盐 - 宇 - 汇 - 罪 - 翰 - 固 - 杯 - 叫 - 枪 - 席 - 验 - 旅 - 婆 - 译 - 隐 - 径 - 矿 - 磁 - 葬 - 讷 - 额 - 圈 - 讨 - 救 - 暴 - 赤 - 零 - 际 - 习 - 湾 - 仙 - 芒 - 栖 - 崎 - 楚 - 犯 - 穿 - 途 - 培 - 滑 - 暗 - 献 - 棒 - 殖 - 灭 - 育 - 功 - 羽 - 捷 - 涅 - 冰 - 拟 - 梦 - 瓜 - 乎 - 售 - 币 - 浦 - 箭 - 弘 - 渡 - 芝 - 逐 - 软 - 盟 - 革 - 寿 - 览 - 豪 - 坐 - 惠 - 映 - 禁 - 脚 - 誉 - 贞 - 阴 - 启 - 孟 - 潘 - 衣 - 迎 - 偏 - 墙 - 套 - 横 - 渐 - 蛋 - 裁 - 夺 - 担 - 却 - 旁 - 阁 - 呼 - 宽 - 患 - 洞 - 邮 - 错 - 珊 - 竞 - 访 - 靠 - 枝 - 陶 - 荷 - 似 - 辑 - 冬 - 斑 - 误 - 镜 - 侵 - 赞 - 宾 - 雀 - 责 - 须 - 幸 - 曹 - 繁 - 藩 - 享 - 宿 - 愿 - 汗 - 翁 - 付 - 锋 - 刚 - 奴 - 减 - 阻 - 硬 - 紧 - 策 - 需 - 茂 - 珍 - 丘 - 侯 - 卒 - 绘 - 蛾 - 钢 - 泡 - 碑 - 糖 - 敏 - 鳞 - 颁 - 浪 - 妻 - 题 - 欢 - 跳 - 姜 - 忠 - 翻 - 融 - 岭 - 恐 - 汽 - 壁 - 恒 - 敬 - 疏 - 迈 - 键 - 轴 - 胶 - 郊 - 甚 - 念 - 扬 - 伍 - 寻 - 拔 - 泊 - 详 - 召 - 桂 - 猪 - 疑 - 汤 - 签 - 雕 - 壮 - 疾 - 侍 - 困 - 姐 - 候 - 仕 - 剂 - 危 - 块 - 炸 - 闭 - 触 - 趣 - 返 - 萧 - 折 - 抵 - 鹿 - 刑 - 崔 - 疗 - 兄 - 负 - 胆 - 厦 - 龟 - 彩 - 乳 - 狗 - 针 - 露 - 骑 - 恋 - 避 - 健 - 振 - 溶 - 铜 - 脱 - 衡 - 静 - 严 - 旺 - 淡 - 塘 - 昭 - 亨 - 孝 - 尽 - 晶 - 桐 - 沉 - 祥 - 蔡 - 逝 - 寄 - 蛇 - 卖 - 廉 - 残 - 损 - 胸 - 逃 - 饭 - 莉 - 迫 - 董 - 训 - 综 - 勤 - 剑 - 辅 - 距 - 隔 - 彭 - 穗 - 契 - 峡 - 覆 - 坑 - 筑 - 响 - 含 - 伟 - 涉 - 滕 - 遍 - 咸 - 宜 - 榜 - 牧 - 厚 - 累 - 肃 - 跨 - 辰 - 岳 - 殊 - 陷 - 仔 - 敌 - 衔 - 遣 - 允 - 葛 - 撞 - 冠 - 震 - 闻 - 某 - 摄 - 研 - 萤 - 贺 - 浮 - 虽 - 贾 - 跃 - 棘 - 邓 - 储 - 熊 - 腺 - 潜 - 勋 - 卿 - 掌 - 烧 - 坏 - 抚 - 撤 - 替 - 氢 - 辐 - 餐 - 驱 - 麟 - 急 - 赢 - 烟 - 庭 - 茅 - 硕 - 坎 - 尊 - 枢 - 筒 - 乱 - 杭 - 粒 - 赏 - 宅 - 柄 - 柯 - 沈 - 玄 - 冷 - 屈 - 插 - 梯 - 绕 - 喀 - 炎 - 猫 - 恶 - 挑 - 苗 - 苣 - 必 - 乃 - 灾 - 兽 - 彼 - 猎 - 雍 - 膜 - 鹅 - 熟 - 蛤 - 迷 - 魔 - 册 - 纸 - 诉 - 谱 - 铃 - 仰 - 伏 - 循 - 慈 - 饮 - 翅 - 帮 - 拍 - 悬 - 祭 - 档 - 禄 - 薇 - 虚 - 袋 - 汝 - 税 - 弃 - 撒 - 歇 - 矩 - 谈 - 仿 - 叉 - 扁 - 湿 - 妇 - 焦 - 茎 - 谋 - 俗 - 朋 - 肥 - 腔 - 促 - 敦 - 琳 - 坛 - 伞 - 贫 - 偶 - 屿 - 裕 - 仲 - 赴 - 鹤 - 垂 - 番 - 纲 - 雨 - 吗 - 彦 - 柔 - 浅 - 燃 - 障 - 颗 - 掉 - 苦 - 酶 - 阪 - 链 - 借 - 幅 - 脂 - 轨 - 丙 - 迅 - 凹 - 狄 - 萼 - 葵 - 娘 - 浩 - 搬 - 燕 - 狼 - 蝠 - 傅 - 冕 - 握 - 毁 - 陀 - 荆 - 页 - 尉 - 浓 - 瓣 - 绩 - 祠 - 缩 - 脊 - 蜗 - 植 - 庞 - 坡 - 逻 - 墨 - 尺 - 慢 - 篇 - 伴 - 夷 - 窗 - 笑 - 蜜 - 霸 - 卓 - 妈 - 瓷 - 芳 - 订 - 跑 - 孢 - 佳 - 己 - 织 - 寒 - 泥 - 邀 - 鬼 - 冯 - 椒 - 祝 - 刊 - 彻 - 忍 - 蒋 - 赠 - 佑 - 励 - 句 - 招 - 暂 - 驾 - 廊 - 弄 - 旨 - 柴 - 肠 - 蒿 - 鼓 - 卑 - 姚 - 怪 - 樟 - 瘤 - 糕 - 耀 - 蒲 - 蜥 - 屏 - 岗 - 谦 - 赐 - 铭 - 鹰 - 倒 - 痛 - 碳 - 肌 - 驶 - 唇 - 勇 - 慕 - 拆 - 栋 - 薄 - 遵 - 悠 - 拱 - 扰 - 泛 - 凡 - 袁 - 床 - 蹄 - 丑 - 琉 - 砂 - 詹 - 亭 - 伸 - 贯 - 忙 - 惊 - 袖 - 午 - 宙 - 豹 - 
钦 - 搭 - 锥 - 伐 - 叛 - 拓 - 杆 - 蚜 - 污 - 稀 - 辞 - 凰 - 辉 - 锁 - 拳 - 盲 - 况 - 束 - 描 - 蚓 - 肖 - 迦 - 杉 - 卯 - 夹 - 汪 - 钩 - 妮 - 潭 - 魁 - 俊 - 催 - 匹 - 塑 - 甫 - 舒 - 仑 - 爬 - 巫 - 梨 - 泳 - 狱 - 珀 - 粮 - 绳 - 贤 - 丈 - 崖 - 抽 - 纯 - 扑 - 措 - 搜 - 洗 - 醇 - 妃 - 娃 - 忆 - 芦 - 盆 - 磨 - 滚 - 氧 - 络 - 戌 - 巢 - 瑙 - 堪 - 帅 - 旦 - 犬 - 穴 - 屠 - 兆 - 坊 - 斜 - 渔 - 纷 - 肢 - 脏 - 蓬 - 谓 - 券 - 拒 - 淮 - 癌 - 鳍 - 刷 - 弦 - 撰 - 递 - 遂 - 雌 - 丛 - 卵 - 寨 - 绣 - 刀 - 屡 - 狭 - 蜡 - 赋 - 吻 - 丧 - 尹 - 惯 - 奶 - 摇 - 玩 - 芹 - 嫁 - 藉 - 壬 - 堆 - 吾 - 鼎 - 鼩 - 圭 - 狮 - 租 - 蜂 - 貌 - 儒 - 笼 - 虱 - 丞 - 僧 - 吧 - 宠 - 滇 - 稻 - 荒 - 坟 - 姻 - 尿 - 绒 - 沧 - 截 - 暖 - 缓 - 卜 - 晓 - 串 - 陨 - 够 - 鲜 - 壤 - 牵 - 戊 - 穷 - 纵 - 鲨 - 鼻 - 邑 - 鸿 - 嘴 - 株 - 洁 - 爪 - 肾 - 酯 - 凉 - 婴 - 慧 - 擅 - 润 - 舌 - 芋 - 伽 - 屯 - 雾 - 俘 - 弥 - 橙 - 腐 - 阔 - 鸣 - 倾 - 剩 - 嗣 - 幻 - 斐 - 诚 - 颈 - 颖 - 侄 - 奎 - 睡 - 勃 - 扶 - 腾 - 龄 - 恰 - 艇 - 铸 - 鲤 - 湘 - 钻 - 碎 - 绵 - 概 - 恭 - 藻 - 遥 - 癸 - 丸 - 兔 - 岑 - 幽 - 榆 - 薯 - 叔 - 滩 - 贴 - 芥 - 赶 - 辈 - 坝 - 欣 - 蕉 - 诏 - 魂 - 坞 - 答 - 罚 - 苞 - 蛙 - 践 - 凭 - 衍 - 鞭 - 鲷 - 枚 - 梭 - 毅 - 苯 - 邵 - 棱 - 盗 - 褐 - 宰 - 沫 - 炉 - 肝 - 蹈 - 淘 - 媒 - 毕 - 浙 - 腊 - 愈 - 聪 - 挂 - 鳗 - 乏 - 闸 - 阜 - 喷 - 坂 - 庚 - 悉 - 烷 - 绥 - 罕 - 翔 - 肿 - 闪 - 噶 - 妳 - 巧 - 歧 - 礁 - 胎 - 询 - 別 - 哭 - 垒 - 彰 - 扮 - 眉 - 纺 - 诞 - 鞘 - 哪 - 姬 - 弱 - 怕 - 猴 - 稚 - 仆 - 帽 - 氮 - 鉴 - 铺 - 侦 - 宏 - 蝉 - 坪 - 掠 - 箱 - 臂 - 鹃 - 夸 - 禅 - 倍 - 祀 - 胺 - 薛 - 擎 - 涂 - 央 - 漫 - 胞 - 甸 - 泵 - 煤 - 樱 - 添 - 朔 - 札 - 渠 - 漠 - 羌 - 鸭 - 侨 - 姑 - 嫌 - 叠 - 妙 - 披 - 渥 - 茄 - 蜀 - 赚 - 违 - 逸 - 凌 - 娶 - 驼 - 呢 - 囊 - 捐 - 壶 - 弓 - 忧 - 翠 - 艘 - 贼 - 镶 - 闽 - 鲍 - 抑 - 衰 - 铝 - 氯 - 汰 - 脸 - 踪 - 皿 - 禾 - 葱 - 蕾 - 逆 - 吏 - 栏 - 揭 - 抱 - 雇 - 析 - 缅 - 锐 - 埋 - 耆 - 邱 - 凸 - 妖 - 渊 - 肤 - 飓 - 攀 - 沼 - 苑 - 桔 - 灌 - 猛 - 纤 - 萝 - 戒 - 槐 - 沂 - 涵 - 淀 - 碘 - 苍 - 茹 - 仇 - 厥 - 吞 - 啊 - 喝 - 填 - 烛 - 砖 - 绑 - 胀 - 虑 - 颇 - 弯 - 怒 - 漏 - 琼 - 畅 - 碱 - 倡 - 凝 - 埠 - 朴 - 稿 - 荐 - 亥 - 冒 - 欲 - 冶 - 削 - 喇 - 漂 - 醛 - 吐 - 猿 - 磷 - 沪 - 霉 - 骗 - 蚕 - 唯 - 帆 - 裴 - 蚌 - 犁 - 割 - 殉 - 鸡 - 垣 - 舟 - 昔 - 牡 - 剪 - 慰 - 橘 - 矮 - 肩 - 蕊 - 酰 - 伪 - 垦 - 扇 - 荚 - 荡 - 蕈 - 谊 - 谟 - 谭 - 钝 - 鞍 - 抢 - 栗 - 浆 - 溯 - 烃 - 罢 - 肽 - 墅 - 孤 - 弼 - 杖 - 霞 - 鲈 - 岐 - 烯 - 狐 - 稍 - 淳 - 腿 - 舱 - 闲 - 鲸 - 楠 - 聊 - 踏 - 姊 - 怡 - 汁 - 炼 - 燥 - 耕 - 骸 - 滋 - 腰 - 邪 - 芽 - 劣 - 夕 - 宛 - 悦 - 晃 - 躲 - 耗 - 晖 - 眠 - 惟 - 峨 - 般 - 讳 - 帐 - 炭 - 谁 - 筋 - 菀 - 扫 - 寅 - 恨 - 榕 - 甜 - 笠 - 莹 - 襄 - 邢 - 酱 - 悼 - 榄 - 澜 - 聘 - 腓 - 薪 - 毗 - 蟾 - 迟 - 拖 - 汐 - 瓶 - 蚬 - 谨 - 赌 - 酉 - 锯 - 驹 - 删 - 缴 - 艳 - 贪 - 酮 - 鹉 - 乞 - 厝 - 柜 - 梗 - 筹 - 醉 - 饼 - 圳 - 庐 - 椎 - 汀 - 淄 - 盒 - 碧 - 笙 - 臭 - 芭 - 鄂 - 吊 - 姿 - 敢 - 菱 - 钠 - 债 - 戎 - 贩 - 劝 - 贷 - 卧 - 忌 - 淫 - 芯 - 皱 - 漳 - 锅 - 粤 - 拥 - 辆 - 庸 - 滴 - 芪 - 厉 - 肋 - 蒸 - 矾 - 蓟 - 忘 - 枯 - 涡 - 狂 - 硅 - 既 - 噜 - 庇 - 弧 - 挖 - 窝 - 羁 - 妆 - 嵩 - 擢 - 浴 - 犀 - 禹 - 糙 - 舜 - 蔓 - 蕴 - 押 - 拂 - 拼 - 晕 - 莓 - 骤 - 麓 - 尸 - 慎 - 框 - 胃 - 蛮 - 辟 - 邸 - 酿 - 醚 - 饲 - 鳕 - 欠 - 庶 - 彗 - 徙 - 悲 - 拨 - 挡 - 斩 - 熔 - 糊 - 跋 - 颠 - 驳 - 哩 - 嗜 - 埔 - 嫩 - 泌 - 準 - 纠 - 芙 - 蝇 - 褶 - 辨 - 阅 - 沁 - 滤 - 饱 - 擦 - 樽 - 泪 - 竭 - 冢 - 抄 - 毫 - 瑜 - 贬 - 鞋 - 廖 - 尧 - 粘 - 鳄 - 豫 - 锤 - 匠 - 尘 - 澎 - 溴 - 孚 - 畸 - 缝 - 腭 - 铠 - 携 - 著 - 畔 - 企 - 贸 - 究 - 辩 - 韵 - 抬 - 彬 - 沅 - 嵌 - 沛 - 哇 - 槭 - 牲 - 畴 - 讽 - 铎 - 搞 - 浑 - 渝 - 琦 - 禧 - 窟 - 绛 - 肺 - 茜 - 蓄 - 赣 - 轿 - 陇 - 骏 - 劲 - 喙 - 坤 - 挽 - 晴 - 磐 - 缕 - 诛 - 谣 - 趾 - 轩 - 铅 - 奔 - 凶 - 嫡 - 幢 - 庵 - 拘 - 梓 - 椅 - 罐 - 虞 - 衙 - 谥 - 酷 - 鹘 - 吹 - 妥 - 誓 - 酬 - 闵 - 颅 - 啦 - 廿 - 栽 - 桨 - 涩 - 笛 - 蓉 - 蕃 - 诱 - 陪 - 鬣 - 鸦 - 匈 - 俞 - 净 - 剿 - 裸 - 诊 - 邨 - 郁 - 匡 - 喉 - 扭 - 摆 - 湄 - 笋 - 芸 - 莪 - 秩 - 喻 - 孕 - 肇 - 荫 - 萍 - 蟒 - 黛 - 忽 - 窑 - 谏 - 厨 - 醒 - 煮 - 肛 - 饶 - 藿 - 妾 - 濑 - 灼 - 琪 - 咒 - 伙 - 狩 - 钧 - 黏 - 崩 - 秉 - 秒 - 茵 - 阎 - 缠 - 耐 - 蚁 - ‘ - 戟 - 摊 - 矢 - 衷 - 贿 - 赦 - 锈 - 靛 - 俭 - 嘲 - 娄 - 婿 - 彝 - 懒 - 懿 - 晏 - 枣 - 柑 - 睾 - 膨 - ’ - 傲 - 兜 - 椰 - 沽 - 煌 - 獐 - 隼 - 侣 - 凿 - 劾 - 懋 - 扣 - 樊 - 澄 - 盔 - 睦 - 祈 - 稽 - 耿 - 胤 - 臧 - 荀 - 萌 - 蔚 - 袍 - 裙 - 谍 - 鹏 - 寮 - 桓 - 冀 - 惹 - 懂 - 捉 - 摸 - 斤 - 毙 - 虹 - 酵 - 阱 - 飘 - 鳖 - 鹭 - 冻 - 於 - 泾 - 酥 - 嘛 - 堵 
- 昊 - 晨 - 桦 - 棉 - 畜 - 祸 - 窦 - 竖 - 籽 - 翊 - 阀 - 饿 - 斋 - 偷 - 咏 - 坠 - 娅 - 淋 - 轰 - 隙 - 桉 - 豚 - 佥 - 勿 - 卸 - 奋 - 履 - 摔 - 纱 - 黔 - 寂 - 岬 - 烦 - 爷 - 甬 - 纂 - 颌 - 岱 - 抛 - 丫 - 纶 - 』 - 『 - 涯 - 钯 - 缔 - 驴 - 膝 - 吁 - 梾 - 漆 - 瀚 - 菅 - 釉 - 镖 - 颉 - 鼹 - 垫 - 槽 - 檐 - 汶 - 琮 - 羟 - 舅 - 酚 - 銮 - 韶 - 鲹 - 碟 - 乒 - 侃 - 敖 - 敷 - 旱 - 溃 - 炽 - 爽 - 牟 - 瑛 - 畏 - 聂 - 肼 - 苎 - 蔺 - 虐 - 蚂 - 镀 - 霖 - 勉 - 咨 - 啸 - 堤 - 媛 - 扈 - 栩 - 渭 - 焊 - 祺 - 稠 - 腕 - 艰 - 菁 - 蚊 - 谜 - 辣 - 鑫 - 钙 - 雏 - 颂 - 吓 - 哉 - 孜 - 抓 - 烽 - ・ - 凑 - 劈 - 劫 - 勐 - 勾 - 呀 - 岷 - 崴 - 帛 - 悟 - 戍 - 戚 - 斌 - 朵 - 柃 - 烤 - 猬 - 秆 - 笃 - 谅 - 赔 - 铨 - 隅 - 颤 - 驿 - 髓 - 鸠 - 鸻 - 凋 - 喔 - 宴 - 掩 - 昏 - 栅 - 椤 - 焰 - 狸 - 畿 - 盈 - 竟 - 葆 - 诈 - 钾 - 锚 - 鸥 - 椿 - 亏 - 勘 - 哀 - 墩 - 奠 - 嫔 - 寇 - 廓 - 忒 - 惨 - 惩 - 愉 - 旭 - 汛 - 淑 - 溥 - 玮 - 碲 - 莘 - 诃 - 谬 - 逼 - 钮 - 韧 - 垄 - 涨 - 矣 - 碗 - 谐 - 辗 - 偿 - 僚 - 惕 - 窄 - 绅 - 邯 - 闯 - 鲣 - 佣 - 唤 - 欺 - 胄 - 蔗 - 镧 - 刹 - 榨 - 涛 - 昨 - 匪 - 咬 - 涧 - 琅 - 斧 - 锂 - 陕 - 择 - 俱 - 砍 - 镁 - ( - 妨 - 檀 - 洱 - 翟 - 谤 - 鹳 - 佰 - 撕 - 湛 - 炬 - 狡 - 瘦 - 祚 - 赉 - 乍 - 兀 - 匐 - 叻 - 塌 - 妓 - 掖 - 昀 - 橐 - 泄 - 漕 - 漪 - 澧 - 珂 - 珙 - 瘾 - 瞭 - 祕 - 缨 - 缮 - 肆 - 芮 - 莴 - 蛏 - 覃 - 赈 - 郓 - 铬 - 闍 - 雯 - 僵 - 拯 - 丕 - 亩 - 仗 - 匙 - 寞 - 捞 - 敕 - 毋 - 汾 - 沸 - 泗 - 烨 - 瓢 - 矫 - 筱 - 篱 - 翘 - 肄 - 胚 - 脓 - 腥 - 膳 - 葎 - 郃 - 雉 - 韬 - 驰 - 媳 - 揖 - 羲 - 鄞 - 凄 - 募 - 厘 - 噬 - 圻 - 夥 - 奢 - 宦 - 峻 - 巩 - 弊 - 惑 - 愤 - 慌 - 憩 - 拾 - 掷 - 敞 - 敲 - 晒 - 晟 - 曰 - 枫 - 栉 - 歼 - 殷 - 洒 - 淹 - 溞 - 滞 - 濒 - 灶 - 犍 - 猜 - 瑶 - 璋 - 甄 - 疼 - 祁 - 纬 - 纭 - 缉 - 罩 - 腧 - 苴 - 莼 - 蒴 - 蠢 - 襟 - 觅 - 谛 - 钉 - 钥 - 阙 - 颊 - 鲳 - 磅 - 稷 - 臀 - 宕 - 屑 - 摘 - 枕 - 渗 - 灿 - 牦 - 绶 - 菇 - 萱 - 蛹 - 辙 - 钰 - 霜 - 驯 - '-' - 剌 - 卦 - 嘎 - 孵 - 巷 - 抒 - 晁 - 柚 - 溺 - 炒 - 烂 - 琰 - 疣 - 皖 - 砌 - 祷 - 簇 - 胖 - 胱 - 舄 - 谎 - 踢 - 钓 - 锻 - 镊 - 陋 - 饥 - 饷 - 鸽 - 鸾 - 吨 - 熬 - 帧 - 栈 - 棚 - 濂 - 猗 - 穹 - 簧 - 缚 - 螈 - 谕 - 邕 - 俩 - 匀 - 叹 - 唑 - 扯 - 烘 - 舆 - 钵 - 龈 - 俨 - 筛 - 鲃 - 鲑 - 暨 - 棍 - 浊 - 玲 - 粪 - 胰 - 逢 - 钜 - 铀 - 挛 - 陁 - 疲 - 瞄 - 钛 - 鳌 - 汴 - ) - 拮 - 沆 - 泱 - 琵 - 瑀 - 箕 - 锆 - 哨 - 壸 - 巳 - 滥 - 珞 - 糯 - 翎 - 蛟 - 馨 - 侬 - 冥 - 噩 - 寥 - 峄 - 杠 - 汞 - 洮 - 琚 - 瑾 - 眷 - 笏 - 绯 - 缇 - 翱 - 荃 - 诟 - 遁 - 郯 - 鄣 - 铰 - 镐 - 闹 - 陡 - 珑 - 腈 - P - ̃ - ̌ - 亳 - 伎 - 吵 - 咎 - 奕 - 寡 - 幂 - 廪 - 弢 - 栾 - 桧 - 桩 - 泓 - 浇 - 涪 - 淯 - 渚 - 猕 - 珩 - 璜 - 皋 - 睿 - 砸 - 祗 - 绎 - 缀 - 耸 - 肘 - 苟 - 蔻 - 蛸 - 诵 - 谚 - 醋 - 鏊 - 闫 - 闾 - 靶 - 骠 - 鱿 - 鲢 - 鲱 - 歪 - 瞧 - Т - 侏 - 冉 - 凳 - 厍 - 厩 - 吟 - 呆 - 哮 - 啧 - 啰 - 喊 - 喵 - 墟 - 婉 - 婷 - 峙 - 巽 - 恕 - 恬 - 悄 - 悚 - 拦 - 挟 - 捧 - 掸 - 枋 - 栻 - 梳 - 棵 - 榔 - 殆 - 汲 - 沾 - 泸 - 涟 - 淇 - 漩 - 潞 - 灏 - 炕 - 焉 - 焕 - 熄 - 狠 - 琛 - 琢 - 璇 - 璧 - 睹 - 瞬 - 碰 - 碾 - 祉 - 窜 - 筐 - 箩 - 缸 - 脆 - 膦 - 芃 - 茫 - 荥 - 萎 - 虔 - 虻 - 裘 - 讶 - 谶 - 赃 - 赭 - 轸 - 辇 - 辜 - 邹 - 邺 - 酌 - 鍊 - 锺 - 镒 - 镛 - 閒 - 闳 - 陲 - 颍 - 颐 - 馅 - 骄 - 魄 - 黯 - 鼬 - 俯 - 〉 - 佃 - 堰 - 愁 - 摹 - 栓 - 毓 - 琨 - 痒 - 箴 - 膺 - 苇 - 茉 - 莅 - 铟 - 鞅 - 鸮 - 堕 - 逛 - 〈 - 丟 - 刈 - 剥 - 匣 - 卞 - 卤 - 噪 - 壹 - 彧 - 悔 - 惰 - 愚 - 抹 - 昕 - 桢 - 楞 - 殃 - 沮 - 涌 - 涞 - 潢 - 煜 - 篡 - 粥 - 蜊 - 蠕 - 貂 - 贰 - 赟 - 铳 - 阇 - 雁 - 霆 - 骆 - 孺 - 硼 - 窃 - 肟 - 裹 - 褚 - μ - 侗 - 吠 - 懈 - 杏 - 渴 - 硒 - 窥 - 莒 - 衢 - 诫 - 账 - 钡 - 铂 - 铋 - 锌 - 锰 - 鹑 - 兮 - 匆 - 岚 - 挤 - 肱 - 础 - 函 - 钪 - 仄 - 伛 - 剀 - 崁 - 暮 - 榙 - 沤 - 焱 - 瓛 - 茁 - 踊 - 鞣 - 沔 - 綦 - 诹 - 骰 - ㄟ - 佤 - 佬 - 佼 - 俵 - 傍 - 儆 - 儋 - 咪 - 哼 - 嘟 - 墀 - 岔 - 嵴 - 庾 - 惺 - 憨 - 抖 - 揆 - 撼 - 擂 - 擒 - 晔 - 暝 - 朕 - 栀 - 桡 - 樨 - 沐 - 沭 - 洐 - 浉 - 浞 - 涿 - 湟 - 爹 - 牕 - 玎 - 玹 - 琶 - 璎 - 睁 - 砾 - 綖 - 绞 - 缙 - 腱 - 膻 - 萃 - 藳 - 蜑 - 蝮 - 讹 - 诬 - 遑 - 郴 - 陟 - 雑 - 頴 - 顷 - 鬃 - 鲀 - 龚 - C - и - 乖 - 侪 - 侮 - 偃 - 偕 - 冤 - 冼 - 勺 - 叟 - 哄 - 埈 - 奚 - 姨 - 娆 - 娟 - 娣 - 娥 - 宵 - 寰 - 岖 - 嶷 - 巅 - 巯 - 彪 - 捏 - 掏 - 掞 - 撷 - 攥 - 斡 - 旬 - 晦 - 朽 - 杻 - 棨 - 榖 - 槚 - 槱 - 泯 - 泷 - 泼 - 涝 - 渌 - 湫 - 漾 - 濉 - 濠 - 灞 - 炊 - 烺 - 狯 - 猄 - 瓘 - 畲 - 疯 - 痘 - 痢 - 瘰 - 睢 - 竣 - 笮 - 箨 - 篦 - 缜 - 聿 - 胍 - 胛 - 胼 - 脾 - 舫 - 芷 - 莞 - 萁 - 蔑 - 薷 - 螣 - 袓 - 謇 - 谯 - 赁 - 赘 - 趴 - 蹴 - 躺 - 轶 - 込 - 郤 - 酪 - 醮 - 鎏 - 钴 - 钼 - 铯 - 铼 
- 铽 - 闰 - 阉 - 阏 - 隘 - 雳 - 霹 - 韭 - 颞 - 馈 - 馔 - 骂 - 骘 - 髎 - 鲎 - 鸯 - 鹪 - 麂 - 婺 - 怵 - 绮 - 蚺 - 飒 - 瓮 - A - × - И - 亿 - 伫 - 佘 - 佚 - 佯 - 俣 - 俳 - 俸 - 倻 - 偈 - 傉 - 傕 - 兑 - 刍 - 刮 - 剃 - 勍 - 匮 - 匾 - 叅 - 叩 - 呋 - 呐 - 呔 - 呵 - 呻 - 咆 - 哑 - 哦 - 唁 - 唆 - 唾 - 啉 - 喃 - 喘 - 喧 - 喹 - 喽 - 嘻 - 噌 - 嚓 - 嚣 - 圃 - 圩 - 圪 - 圹 - 坜 - 坨 - 坯 - 埇 - 埕 - 埗 - 塬 - 奣 - 妍 - 妒 - 妡 - 姮 - 娇 - 婢 - 婪 - 嫉 - 嫖 - 嬴 - 孖 - 宥 - 専 - 尻 - 屉 - 屹 - 岫 - 峒 - 峥 - 峪 - 帚 - 幺 - 廆 - 弁 - 彤 - 徊 - 徘 - 徬 - 怜 - 怨 - 恍 - 悌 - 悫 - 惬 - 惶 - 愔 - 愧 - 懔 - 懦 - 扉 - 扳 - 扼 - 拇 - 拌 - 拢 - 挠 - 挫 - 捡 - 捣 - 捩 - 掛 - 掰 - 揩 - 搅 - 搏 - 搔 - 撇 - 攒 - 攸 - 敛 - 斝 - 斟 - 旌 - 昴 - 昵 - 暄 - 曜 - 杓 - 杙 - 杵 - 枞 - 枥 - 柞 - 栎 - 栢 - 桄 - 桝 - 棁 - 椋 - 楮 - 榉 - 榛 - 榭 - 榻 - 槛 - 樾 - 橇 - 歹 - 殒 - 毯 - 氖 - 氦 - 沌 - 沥 - 泠 - 泮 - 洣 - 洽 - 浔 - 浸 - 涓 - 淤 - 渺 - 湍 - 湜 - 溢 - 潇 - 澥 - 濡 - 濯 - 炜 - 烙 - 烹 - 焚 - 焮 - 煎 - 熈 - 熏 - 熠 - 牁 - 牂 - 犷 - 猝 - 獭 - 獴 - 玭 - 玶 - 珈 - 珐 - 珰 - 琬 - 瑭 - 璆 - 璹 - 瓯 - 畋 - 疤 - 疹 - 痪 - 痴 - 瘠 - 瘫 - 皕 - 盂 - 盱 - 眙 - 眶 - 瞳 - 矗 - 砝 - 砵 - 硎 - 硖 - 祋 - 祜 - 禀 - 禑 - 禛 - 秃 - 秧 - 秽 - 稔 - 稼 - 笄 - 笆 - 笨 - 筮 - 筷 - 簸 - 簿 - 籁 - 籴 - 紊 - 紑 - 繄 - 纻 - 绂 - 绚 - 绫 - 缪 - 缵 - 罔 - 羚 - 羯 - 翃 - 翥 - 耍 - 耻 - 胪 - 胭 - 胯 - 胳 - 脲 - 腩 - 腻 - 膈 - 膊 - 膏 - 膑 - 膛 - 臯 - 臻 - 舵 - 艮 - 艷 - 芗 - 芰 - 苓 - 苛 - 苡 - 茧 - 荦 - 荼 - 菉 - 萘 - 蒟 - 蒺 - 蓍 - 蔵 - 薖 - 薨 - 蘸 - 虏 - 虬 - 蛉 - 蛊 - 蜃 - 蝌 - 蝎 - 蝾 - 螂 - 蟀 - 蟆 - 蟑 - 衫 - 裤 - 褧 - 褪 - 觚 - 諡 - 讚 - 讫 - 谌 - 貘 - 贮 - 赡 - 跌 - 跪 - 踩 - 蹶 - 躯 - 轭 - 辕 - 遐 - 邾 - 郫 - 鄙 - 酃 - 酐 - 酢 - 鋆 - 钊 - 钌 - 钗 - 钫 - 钲 - 钹 - 钿 - 铣 - 铮 - 铲 - 锣 - 锶 - 镂 - 镉 - 镍 - 镎 - 镠 - 闱 - 闼 - 阕 - 隈 - 隗 - 霄 - 霈 - 霓 - 靡 - 靳 - 餵 - 饪 - 饯 - 饵 - 馀 - 馥 - 馯 - 驭 - 驷 - 骁 - 骥 - 骷 - 髅 - 魇 - 鮎 - 鳚 - 鸩 - 鸲 - 鸵 - 鹂 - 鹛 - 鹬 - 黾 - 鼢 - 鼷 - 龛 - 玠 - 俅 - 塾 - 嬤 - 盏 - 緁 - オ - カ - ド - ヤ - 倩 - 偲 - 傩 - 壽 - 嫣 - 峯 - 崧 - 暎 - 桶 - 炫 - 畈 - 皓 - 瞎 - 糠 - 蠋 - 褓 - 轲 - 鋐 - / - ─ - 䴕 - 仞 - 俐 - 倘 - 倚 - 倦 - 唛 - 圜 - 垢 - 奂 - 妲 - 孛 - 寀 - 寝 - 尨 - 屐 - 岈 - 嵗 - 帖 - 帷 - 忤 - 恽 - 拧 - 拷 - 挚 - 挞 - 捍 - 揉 - 搁 - 昱 - 枭 - 柩 - 栃 - 検 - 樵 - 歆 - 氙 - 汹 - 浚 - 淞 - 淼 - 溅 - 煲 - 琊 - 琐 - 瑮 - 甾 - 疡 - 盼 - 眩 - 砀 - 砥 - 竈 - 纮 - 罘 - 羡 - 耽 - 胥 - 臼 - 舘 - 苳 - 荁 - 荧 - 荪 - 葳 - 蓑 - 藔 - 蚪 - 螃 - 螟 - 蟋 - 蟥 - 蠹 - 袒 - 諲 - 诙 - 谠 - 跻 - 轧 - 邰 - 醯 - 钨 - 锑 - 镓 - 镗 - 镥 - 阈 - 陂 - 顽 - 飙 - 饴 - 鳃 - 黜 - 鼐 - 侂 - 嵋 - 嵖 - 忱 - 惪 - 戛 - 汜 - 洙 - 洵 - 濮 - 痼 - 県 - 裬 - 譬 - 谪 - 鳟 - 鹌 - 慑 - 撮 - 蛲 - 骈 - 阖 - 麾 - 麒 - 鹩 - 鹋 - 鲻 - 驸 - 铵 - 铆 - 鄱 - 郦 - 郝 - 邳 - 邬 - 辱 - 辍 - 趟 - 赎 - 赂 - 贻 - 豁 - 谒 - 诽 - 诡 - 诠 - 诀 - 褒 - 袜 - 衬 - 蜻 - 蜴 - 蜓 - 蛎 - 蚀 - 蘑 - 薮 - 蕲 - 蕗 - 蓣 - 菠 - 莽 - 莩 - 荽 - 荛 - 苷 - 苁 - 芎 - 腮 - 脖 - 脐 - 肴 - 羰 - 羞 - 罹 - 绰 - 绢 - 簪 - 稗 - 秤 - 禺 - 磲 - 磡 - 砻 - 砷 - 砗 - 瞻 - 睫 - 皎 - 痉 - 疮 - 疍 - 疃 - 琥 - 狙 - 牢 - 烁 - 炔 - 澡 - 滁 - 湳 - 渲 - 渤 - 渣 - 渎 - 淖 - 淅 - 汕 - 氓 - 槲 - 楸 - 楷 - 椹 - 棺 - 査 - 柰 - 柝 - 曙 - 昙 - 掳 - 捆 - 拐 - 憾 - 憍 - 愕 - 惘 - 恼 - 忻 - 庹 - 幔 - 巾 - 嵯 - 崛 - 婵 - 婕 - 娠 - 姒 - 妤 - 妊 - 奘 - 夯 - 夭 - 坳 - 囚 - 嘌 - 嗲 - 喱 - 哔 - 咽 - 咀 - 呤 - 叭 - 匕 - 刃 - 儡 - 僖 - 傀 - 侈 - 佟 - т - 黧 - 鳔 - 魅 - 颓 - 鞑 - 靼 - 陌 - 钕 - 郸 - 逗 - 迭 - 跖 - 趁 - 讼 - 衅 - 螨 - 蜈 - 蛳 - 蚶 - 蚣 - 菏 - 莺 - 荔 - 耧 - 竿 - 瘿 - 珧 - 玖 - 牻 - 燮 - 溉 - 泻 - 泣 - 沦 - 殴 - 棣 - 桫 - 曷 - 曝 - 暹 - 晰 - 昼 - 扔 - 戮 - 憬 - 憧 - 慷 - 慨 - 悖 - 弈 - 幡 - 孪 - 垃 - 圾 - 嘧 - 嗓 - 啶 - 匍 - 剖 - 倭 - 乓 - 麝 - 鲫 - 鲡 - 骼 - 豌 - 蜕 - 蔽 - 葜 - 萸 - 菝 - 荸 - 舶 - 膀 - 聋 - 翡 - 羧 - 缆 - 绸 - 糟 - 箬 - 甥 - 牺 - 炀 - 溲 - 歙 - 橄 - 梧 - 柠 - 暑 - 幌 - 帜 - 奸 - 卉 - 匿 - 傣 - 倪 - 伺 - — - 骚 - 隍 - 阐 - 跆 - 蝴 - 葶 - 苈 - 翌 - 縻 - 篷 - 皂 - 淆 - 榴 - 榧 - 楝 - 梢 - 晤 - 摧 - 惜 - 帘 - 寸 - 咕 - 厕 - 匝 - 釜 - 遮 - 谴 - 蝙 - 蜘 - 蜍 - 葫 - 茱 - 肪 - 肚 - 瑰 - 瀑 - 氰 - 歉 - 檬 - 桤 - 桌 - 斛 - 惧 - 垩 - 啤 - 啄 - 厢 - 兖 - . 
- 鹦 - 镰 - 逍 - 螯 - 藜 - 蔬 - 莆 - 苋 - 芩 - 睛 - 溧 - 柽 - 斥 - 掘 - 挝 - 峭 - 厌 - 侠 - 乂 - 鼱 - 迄 - 藨 - 苹 - 竺 - 痕 - 椭 - 蛭 - 蒜 - 菩 - 玫 - 榈 - 棠 - 梵 - 寓 - 逮 - 趋 - 荠 - 罂 - 猩 - 槟 - 隧 - 蓼 - 荨 - 啡 - 粟 - 稣 - 氟 - 橡 - 柿 - 怖 - 蛱 - 胁 - 祯 - 硝 - 挪 - 尝 - 酋 - 薹 - 硫 - 蛄 - 粹 - 璃 - 爸 - 浏 - 咖 - 估 - 隋 - 柬 - 檗 - 蔷 - 碍 - 氨 - 蝽 - 玻 - 恢 - 矛 - 怎 - 疫 - 棕 - 娱 - 鳅 - 茛 - 蜱 - 矶 - 哺 - 械 - 堇 - 蚤 - 瑚 - 稳 - 苔 - 疆 - 乾 - 挥 - 孩 - 萄 - 藓 - 括 - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/zh-CN_token_list/bpe_unigram4409/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_zh-CN_bpe4409_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
fahadtouseef/wav2vec2-base-timit-demo-colab_2
fahadtouseef
2022-05-02T14:18:38Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-02T11:50:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab_2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3801 - Wer: 0.3035 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.7227 | 3.52 | 500 | 2.6961 | 1.0 | | 1.1237 | 7.04 | 1000 | 0.6088 | 0.5315 | | 0.4886 | 10.56 | 1500 | 0.4709 | 0.4353 | | 0.3148 | 14.08 | 2000 | 0.4341 | 0.3942 | | 0.2229 | 17.61 | 2500 | 0.4035 | 0.3616 | | 0.1693 | 21.13 | 3000 | 0.3868 | 0.3289 | | 0.1393 | 24.65 | 3500 | 0.3993 | 0.3135 | | 0.118 | 28.17 | 4000 | 0.3801 | 0.3035 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
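For inference without the pipeline wrapper, a lower-level sketch is shown below; it assumes the repo ships `Wav2Vec2Processor` files and that `sample.wav` is a 16 kHz mono recording (assumptions, not facts from this card):

```python
# Explicit forward-pass sketch; "sample.wav" is assumed to be a 16 kHz mono recording.
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "fahadtouseef/wav2vec2-base-timit-demo-colab_2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sampling_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```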
waboucay/camembert-base-finetuned-xnli_fr-finetuned-nli-rua_wl
waboucay
2022-05-02T14:00:24Z
4
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "nli", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T13:58:49Z
--- language: - fr tags: - nli metrics: - f1 --- ## Eval results We obtain the following results on ```validation``` and ```test``` sets: | Set | F1<sub>micro</sub> | F1<sub>macro</sub> | |------------|--------------------|--------------------| | validation | 69.9 | 69.9 | | test | 68.8 | 68.8 |
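For sentence-pair (premise/hypothesis) inference, a minimal sketch follows; the example sentences are illustrative only, and the id-to-label mapping comes from the checkpoint's config, which is not documented here:

```python
# NLI-style sentence-pair inference sketch; verify the label mapping in the model config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "waboucay/camembert-base-finetuned-xnli_fr-finetuned-nli-rua_wl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Le film a reçu de très bonnes critiques."
hypothesis = "Le film a été bien accueilli."

# Encode the pair as a single sequence with the model's separator tokens.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(pred_id, pred_id))
```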
waboucay/camembert-base-finetuned-nli-rua_wl
waboucay
2022-05-02T13:54:59Z
4
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "nli", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T13:48:52Z
--- language: - fr tags: - nli metrics: - f1 --- ## Eval results We obtain the following results on ```validation``` and ```test``` sets: | Set | F1<sub>micro</sub> | F1<sub>macro</sub> | |------------|--------------------|--------------------| | validation | 73.8 | 73.7 | | test | 74.4 | 74.3 |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T13:37:28Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T13:12:40Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2555 - Precision: 1.0 - Recall: 0.0200 - F1: 0.0393 - Accuracy: 0.0486 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 95 | 0.5756 | nan | 0.0 | nan | 0.715 | | No log | 2.0 | 190 | 0.5340 | 0.6429 | 0.1579 | 0.2535 | 0.735 | | No log | 3.0 | 285 | 0.5298 | 0.5833 | 0.3684 | 0.4516 | 0.745 | | No log | 4.0 | 380 | 0.5325 | 0.5789 | 0.3860 | 0.4632 | 0.745 | | No log | 5.0 | 475 | 0.5452 | 0.4815 | 0.4561 | 0.4685 | 0.705 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
kSaluja/new-test-model2
kSaluja
2022-05-02T12:58:39Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-25T14:30:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: new-test-model2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # new-test-model2 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1040 - Precision: 0.9722 - Recall: 0.9757 - F1: 0.9739 - Accuracy: 0.9808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 151 | 0.1819 | 0.9360 | 0.9405 | 0.9382 | 0.9540 | | No log | 2.0 | 302 | 0.1196 | 0.9637 | 0.9639 | 0.9638 | 0.9703 | | No log | 3.0 | 453 | 0.1322 | 0.9614 | 0.9682 | 0.9648 | 0.9711 | | 0.2764 | 4.0 | 604 | 0.1071 | 0.9677 | 0.9725 | 0.9701 | 0.9763 | | 0.2764 | 5.0 | 755 | 0.1084 | 0.9709 | 0.9766 | 0.9737 | 0.9790 | | 0.2764 | 6.0 | 906 | 0.1015 | 0.9717 | 0.9739 | 0.9728 | 0.9791 | | 0.0342 | 7.0 | 1057 | 0.1208 | 0.9686 | 0.9727 | 0.9706 | 0.9785 | | 0.0342 | 8.0 | 1208 | 0.1068 | 0.9680 | 0.9752 | 0.9716 | 0.9798 | | 0.0342 | 9.0 | 1359 | 0.1028 | 0.9719 | 0.9743 | 0.9731 | 0.9807 | | 0.0129 | 10.0 | 1510 | 0.1040 | 0.9722 | 0.9757 | 0.9739 | 0.9808 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
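For inference, a token-classification sketch is given below; the entity label set is whatever the checkpoint's config defines (it is not documented in this card), and the example sentence is illustrative:

```python
# Token-classification inference sketch; label names depend on the checkpoint's config.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="kSaluja/new-test-model2",
    aggregation_strategy="simple",  # merge sub-word pieces into whole-word spans
)

for entity in tagger("Take 250 mg of paracetamol twice daily for five days."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```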
tomh/toxigen_hatebert
tomh
2022-05-02T12:42:51Z
1,476
11
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "arxiv:2203.09509", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-01T13:02:09Z
--- language: - en tags: - text-classification --- This model, by Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar, comes from the paper [ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection](https://arxiv.org/abs/2203.09509) and can be used to detect implicit hate speech. Please visit the [Github Repository](https://github.com/microsoft/TOXIGEN) for the training dataset and further details. ```bibtex @inproceedings{hartvigsen2022toxigen, title = "{T}oxi{G}en: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection", author = "Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece", booktitle = "Proceedings of the 60th Annual Meeting of the Association of Computational Linguistics", year = "2022" } ```
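A minimal classification sketch is shown below; which label corresponds to hateful content is not stated here, so check the mapping against the TOXIGEN repository before relying on the scores:

```python
# Scoring sketch for the ToxiGen HateBERT classifier; verify the label meaning
# against the TOXIGEN repository before interpreting the output.
from transformers import pipeline

classifier = pipeline("text-classification", model="tomh/toxigen_hatebert")

examples = [
    "I really enjoyed the concert last night.",
    "People from that group are all the same.",
]
for text, result in zip(examples, classifier(examples)):
    print(text, "->", result["label"], round(result["score"], 3))
```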
DioLiu/distilbert-base-uncased-finetuned-sst2-newdata
DioLiu
2022-05-02T12:40:09Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T12:18:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-sst2-newdata results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2-newdata This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0588 - Accuracy: 0.9911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0543 | 1.0 | 1116 | 0.0307 | 0.9911 | | 0.0235 | 2.0 | 2232 | 0.0372 | 0.9911 | | 0.0102 | 3.0 | 3348 | 0.0486 | 0.9914 | | 0.0003 | 4.0 | 4464 | 0.0563 | 0.9914 | | 0.0008 | 5.0 | 5580 | 0.0588 | 0.9911 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
madatnlp/kor-math-roberta-finetune
madatnlp
2022-05-02T11:44:14Z
4
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-30T11:16:10Z
--- tags: - generated_from_keras_callback model-index: - name: madatnlp/kor-math-roberta-finetune results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # madatnlp/kor-math-roberta-finetune This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3205 - Validation Loss: 1.1407 - Epoch: 26 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_bfloat16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4242 | 2.0873 | 0 | | 1.9159 | 1.6264 | 1 | | 1.5933 | 1.4521 | 2 | | 1.3806 | 1.3584 | 3 | | 1.2487 | 1.2904 | 4 | | 1.1464 | 1.2388 | 5 | | 1.0552 | 1.2076 | 6 | | 0.9889 | 1.1818 | 7 | | 0.9118 | 1.1607 | 8 | | 0.8459 | 1.1349 | 9 | | 0.7838 | 1.1193 | 10 | | 0.7389 | 1.1193 | 11 | | 0.6864 | 1.1080 | 12 | | 0.6495 | 1.1001 | 13 | | 0.6103 | 1.1001 | 14 | | 0.5795 | 1.0990 | 15 | | 0.5436 | 1.0954 | 16 | | 0.5136 | 1.0997 | 17 | | 0.4906 | 1.0954 | 18 | | 0.4565 | 1.1021 | 19 | | 0.4347 | 1.1075 | 20 | | 0.4131 | 1.1075 | 21 | | 0.3924 | 1.1220 | 22 | | 0.3741 | 1.1298 | 23 | | 0.3549 | 1.1352 | 24 | | 0.3395 | 1.1286 | 25 | | 0.3205 | 1.1407 | 26 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
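Since the checkpoint is published with TensorFlow weights, a masked-language-model sketch forcing the TF backend is shown below; the Korean example sentence is illustrative only:

```python
# Fill-mask sketch; the repo is tagged with TensorFlow weights, so the TF backend is forced.
from transformers import pipeline

fill = pipeline("fill-mask", model="madatnlp/kor-math-roberta-finetune", framework="tf")

# Build the prompt with the tokenizer's own mask token rather than hard-coding it.
mask = fill.tokenizer.mask_token
for candidate in fill(f"7 더하기 3 은 {mask} 입니다."):
    print(candidate["token_str"], round(candidate["score"], 3))
```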
nanopass/distilbert-base-uncased-emotion-2
nanopass
2022-05-02T09:43:02Z
5
0
transformers
[ "transformers", "pytorch", "tf", "jax", "distilbert", "text-classification", "emotion", "en", "dataset:emotion", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T09:42:09Z
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - emotion - pytorch license: apache-2.0 datasets: - emotion metrics: - Accuracy, F1 Score --- # Distilbert-base-uncased-emotion ## Model description: [Distilbert](https://arxiv.org/abs/1910.01108) is created with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding. It is smaller and faster than BERT and other BERT-based models. [Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below: ``` learning rate 2e-5, batch size 64, num_train_epochs=8, ``` ## Model Performance Comparison on the Emotion Dataset from Twitter: | Model | Accuracy | F1 Score | Test Samples per Second | | --- | --- | --- | --- | | [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 | | [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 | | [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97 | 195.639 | | [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 | ## How to Use the model: ```python from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True) prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", ) print(prediction) """ Output: [[ {'label': 'sadness', 'score': 0.0006792712374590337}, {'label': 'joy', 'score': 0.9959300756454468}, {'label': 'love', 'score': 0.0009452480007894337}, {'label': 'anger', 'score': 0.0018055217806249857}, {'label': 'fear', 'score': 0.00041110432357527316}, {'label': 'surprise', 'score': 0.0002288572577526793} ]] """ ``` ## Dataset: [Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion). ## Training procedure [Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb) ## Eval results ```json { 'test_accuracy': 0.938, 'test_f1': 0.937932884041714, 'test_loss': 0.1472451239824295, 'test_mem_cpu_alloc_delta': 0, 'test_mem_cpu_peaked_delta': 0, 'test_mem_gpu_alloc_delta': 0, 'test_mem_gpu_peaked_delta': 163454464, 'test_runtime': 5.0164, 'test_samples_per_second': 398.69 } ``` ## Reference: * [Natural Language Processing with Transformers by Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/)
driboune/skin_type
driboune
2022-05-02T08:08:40Z
183
3
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-04-29T15:59:55Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: skin_type results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8222222328186035 --- # skin_type When aiming for fairness in image classification of humans, knowing the subjects' skin type helps verify that the model performs correctly across all skin types. Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### dark skin ![dark skin](images/dark_skin.jpg) #### light skin ![light skin](images/light_skin.jpg)
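For inference, a minimal image-classification sketch follows; `face.jpg` is a placeholder path for a local image, and the label set is the one shown in the example images above (dark skin / light skin):

```python
# Image-classification sketch; "face.jpg" is a placeholder for a local image file.
from transformers import pipeline

classifier = pipeline("image-classification", model="driboune/skin_type")

for prediction in classifier("face.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```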