| Column | Type | Range / Cardinality |
| --- | --- | --- |
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-07 06:34:03 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 544 values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-07 06:33:46 |
| card | string | length 11 – 1.01M |
DMetaSoul/sbert-chinese-general-v2-distill
DMetaSoul
2022-04-02T09:58:33Z
15
6
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "semantic-search", "chinese", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-04-02T09:58:18Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - semantic-search - chinese --- # DMetaSoul/sbert-chinese-general-v2-distill This model is a distilled version (only 4 BERT layers) of our previously released [open-source general-purpose semantic matching model](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2). It is intended for **general-purpose semantic matching** scenarios and, in practice, it **generalizes better across tasks and encodes faster**. Serving a large offline-trained model directly in production places heavy demands on compute resources and makes it hard to meet latency and throughput targets, so here we use distillation to make the model lighter. After distilling the 12-layer BERT down to 4 layers, the parameter count shrinks to 44% of the original, latency is roughly halved, throughput roughly doubles, and accuracy drops by about 6% (see the Evaluation section below for details). # Usage ## 1. Sentence-Transformers To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it: ``` pip install -U sentence-transformers ``` Then load the model and extract sentence embedding vectors with the following code: ```python from sentence_transformers import SentenceTransformer sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"] model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v2-distill') embeddings = model.encode(sentences) print(embeddings) ``` ## 2. HuggingFace Transformers If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract sentence embeddings as follows: ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v2-distill') model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v2-distill') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Here the distilled model is compared mainly against its teacher (the model before distillation): *Performance:* | | Teacher | Student | Gap | | ---------- | --------------------- | ------------------- | ----- | | Model | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x | | Cost | 23s | 12s | -47% | | Latency | 38ms | 20ms | -47% | | Throughput | 418 sentence/s | 791 sentence/s | 1.9x | *Accuracy:* | | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** | | -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- | | **Teacher** | 77.19% | 72.59% | 36.79% | 76.91% | 49.62% | 16.24% | 63.15% | 56.07% | | **Student** | 76.49% | 73.33% | 26.46% | 64.26% | 46.02% | 11.83% | 52.45% | 50.12% | | **Gap** (abs.) | - | - | - | - | - | - | - | -5.95% | *Tested on 10,000 samples; the GPU is a V100, batch_size=16, max_seq_len=256* ## Citing & Authors E-mail: xiaowenbin@dmetasoul.com
junnyu/flash_small_wwm_cluecorpussmall
junnyu
2022-04-02T09:46:27Z
4
0
transformers
[ "transformers", "pytorch", "flash", "fill-mask", "license:mit", "autotrain_compatible", "region:us" ]
fill-mask
2022-04-02T02:59:48Z
--- license: mit inference: False --- # training logs - https://wandb.ai/junyu/huggingface/runs/1jg2jlgt # install - https://github.com/JunnYu/FLASHQuad_pytorch # usage ```python import torch from flash import FLASHForMaskedLM from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("junnyu/flash_small_wwm_cluecorpussmall") model = FLASHForMaskedLM.from_pretrained("junnyu/flash_small_wwm_cluecorpussmall") model.eval() text = "天气预报说今天的天[MASK]很好,那么我[MASK]一起去公园玩吧!" inputs = tokenizer(text, return_tensors="pt", padding="max_length", max_length=512, return_token_type_ids=False) # max_length must be 512 here, otherwise the results may be wrong. with torch.no_grad(): pt_outputs = model(**inputs).logits[0] pt_outputs_sentence = "pytorch: " for i, id in enumerate(tokenizer.encode(text)): if id == tokenizer.mask_token_id: val,idx = pt_outputs[i].softmax(-1).topk(k=5) tokens = tokenizer.convert_ids_to_tokens(idx) new_tokens = [] for v,t in zip(val.cpu(),tokens): new_tokens.append(f"{t}+{round(v.item(),4)}") pt_outputs_sentence += "[" + "||".join(new_tokens) + "]" else: pt_outputs_sentence += "".join( tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)) print(pt_outputs_sentence) # pytorch: 天气预报说今天的天[气+0.994||天+0.0015||空+0.0014||晴+0.0005||阳+0.0003]很好,那么我[们+0.9563||就+0.0381||也+0.0032||俩+0.0004||来+0.0002]一起去公园玩吧! ```
Chikashi/t5-small-finetuned-wikihow_3epoch
Chikashi
2022-04-02T07:42:15Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wikihow", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-01T21:20:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikihow metrics: - rouge model-index: - name: t5-small-finetuned-wikihow_3epoch results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wikihow type: wikihow args: all metrics: - name: Rouge1 type: rouge value: 25.5784 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikihow_3epoch This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.5163 - Rouge1: 25.5784 - Rouge2: 8.9929 - Rougel: 21.5345 - Rougelsum: 24.9382 - Gen Len: 18.384 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.9421 | 0.25 | 5000 | 2.6545 | 23.2336 | 7.5502 | 19.5899 | 22.5521 | 18.4076 | | 2.8411 | 0.51 | 10000 | 2.6103 | 24.3524 | 8.2068 | 20.5238 | 23.6679 | 18.2606 | | 2.7983 | 0.76 | 15000 | 2.5836 | 24.8169 | 8.4826 | 20.8765 | 24.1686 | 18.3211 | | 2.7743 | 1.02 | 20000 | 2.5627 | 24.9904 | 8.5625 | 21.0344 | 24.3416 | 18.3786 | | 2.7452 | 1.27 | 25000 | 2.5508 | 25.1497 | 8.6872 | 21.152 | 24.4751 | 18.3524 | | 2.7353 | 1.53 | 30000 | 2.5384 | 25.2909 | 8.7408 | 21.2344 | 24.629 | 18.4453 | | 2.7261 | 1.78 | 35000 | 2.5322 | 25.3748 | 8.7802 | 21.312 | 24.7191 | 18.3754 | | 2.7266 | 2.03 | 40000 | 2.5265 | 25.4095 | 8.8915 | 21.3871 | 24.7685 | 18.4013 | | 2.706 | 2.29 | 45000 | 2.5211 | 25.4372 | 8.8926 | 21.4124 | 24.7902 | 18.3776 | | 2.7073 | 2.54 | 50000 | 2.5176 | 25.4925 | 8.9668 | 21.5103 | 24.8608 | 18.4303 | | 2.703 | 2.8 | 55000 | 2.5163 | 25.5784 | 8.9929 | 21.5345 | 24.9382 | 18.384 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
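The card above gives training details but no inference example. A minimal summarization sketch along these lines should work, assuming the checkpoint keeps the standard t5-small conventions; the `summarize:` prefix and the example text are assumptions, not part of the original card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Chikashi/t5-small-finetuned-wikihow_3epoch"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# T5-style summarization checkpoints are usually prompted with a "summarize:" prefix (assumed here).
article = "summarize: Cut the bread into even slices. Toast them until golden, spread butter on each slice, and serve while still warm."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```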
Suman123/upside-down-detector
Suman123
2022-04-02T07:33:31Z
0
0
null
[ "region:us" ]
null
2022-04-01T12:56:45Z
Task 1 of the Fatima Fellowship - Upside-Down Detector
nikhil6041/wav2vec2-large-xls-r-300m-hindi-colab
nikhil6041
2022-04-02T06:04:25Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-02T03:35:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hindi-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
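The card has no usage snippet. A minimal transcription sketch with the `transformers` ASR pipeline, assuming a local 16 kHz mono WAV file (the file name below is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nikhil6041/wav2vec2-large-xls-r-300m-hindi-colab",
)

# "sample_hindi.wav" is a placeholder; wav2vec2 models expect 16 kHz mono audio.
print(asr("sample_hindi.wav")["text"])
```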
satoshiz01/Flipped_CIFAR10_vision
satoshiz01
2022-04-02T05:09:07Z
0
0
null
[ "region:us" ]
null
2022-04-02T03:30:55Z
**Google Colab Notebook link:** https://colab.research.google.com/drive/1iA8nvb93VLcrDfIt17AOIHnkVdLSNcW_?usp=sharing This repo contains files for defining and creating a simple convolutional network for classifying/detecting the orientation of CIFAR-10 images (either normal orientation or flipped upside down/180 degrees). The following files are in this repo: Coding_Challenge_for_Fatima_Fellowship.ipynb -- a copy of the Google Colab notebook with the code/output/writeup best_model.pth -- dictionary of best model stats/weights found during training cifar10flip_trn.pt -- saved training dataset of ~50% flipped CIFAR10 images cifar10flip_tst.pt -- saved test dataset of ~50% flipped CIFAR10 images image_examples.png -- an array of example images from the flipped CIFAR10 dataset write-up -- write-up of data processing, model results, and potential improvements (also in Google Colab) wrong_predictions.zip -- a zip file of PNG images that were incorrectly classified by my model (each file name provides information on the image's prediction, true label, and its class)
huggingtweets/clortown
huggingtweets
2022-04-02T04:51:29Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-02T02:36:56Z
--- language: en thumbnail: http://www.huggingtweets.com/clortown/1648875085007/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1488574779351187458/RlIQNUFG_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">yeosang elf agenda</div> <div style="text-align: center; font-size: 14px;">@clortown</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from yeosang elf agenda. | Data | yeosang elf agenda | | --- | --- | | Tweets downloaded | 3140 | | Retweets | 538 | | Short tweets | 463 | | Tweets kept | 2139 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cupnlna/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clortown's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uii743r9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uii743r9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/clortown') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BigSalmon/Points4
BigSalmon
2022-04-02T03:04:08Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-02T02:57:31Z
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Points4") model = AutoModelForCausalLM.from_pretrained("BigSalmon/Points4") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27 Keywords to sentences or sentence.
TheJarmanitor/fatima-fellowship-model
TheJarmanitor
2022-04-02T03:03:42Z
0
0
null
[ "region:us" ]
null
2022-04-02T03:01:06Z
Model and notebook for the Fatima Fellowship 2022 Coding Challenge.
youssefadarrab/TP_NLP_SNLI_Adarrab_Baziz_Malige
youssefadarrab
2022-04-02T00:40:26Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T21:11:05Z
# CentraleSupelec - Natural language processing # Practical session n°7 ## Natural Language Inferencing (NLI) NLI is a classical NLP (Natural Language Processing) problem that involves taking two sentences (the premise and the hypothesis), and deciding how they are related (whether the premise *entails* the hypothesis, *contradicts* it, or *neither*). Ex: | Premise | Label | Hypothesis | | --- | --- | --- | | A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. | | An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. | | A soccer game with multiple males playing. | entailment | Some men are playing a sport. | ### Stanford NLI (SNLI) corpus In this labwork, I propose to use the Stanford NLI (SNLI) corpus (https://nlp.stanford.edu/projects/snli/), available in the *Datasets* library by Huggingface. from datasets import load_dataset snli = load_dataset("snli") #Removing sentence pairs with no label (-1) snli = snli.filter(lambda example: example['label'] != -1) ## Quick summary of the model This is the model from: Youssef Adarrab, Othmane Baziz and Alain Malige - First we import the corpus and do some visualization - Second we apply DistilBert for sequence classification - We illustrate through our work the code used for training; to obtain better results, one should run the training for more epochs
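The card shows how the corpus is loaded but not how the fine-tuned classifier is queried. A minimal inference sketch, assuming the checkpoint follows the usual SNLI label order (this order is an assumption, not stated in the card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "youssefadarrab/TP_NLP_SNLI_Adarrab_Baziz_Malige"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)

# Assumed SNLI label order: 0 = entailment, 1 = neutral, 2 = contradiction.
print(probs)
```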
JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en
JustAdvanceTechonology
2022-04-02T00:07:29Z
4
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-31T10:16:30Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6429 - Validation Loss: 0.8071 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6423 | 0.8071 | 0 | | 0.6424 | 0.8071 | 1 | | 0.6429 | 0.8071 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.5.0 - Datasets 2.0.0 - Tokenizers 0.10.1
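The card lists only training details. A minimal translation sketch using the TensorFlow weights (`tf` is the only framework tag); the French-to-English direction is assumed from the repository name, and the example sentence is illustrative:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_name = "JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translation direction (fr -> en) is assumed from the model name; the base checkpoint is en-fr.
inputs = tokenizer("Le patient présente une fièvre légère.", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```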
lgris/wav2vec2-large-xlsr-open-brazilian-portuguese
lgris
2022-04-01T20:32:58Z
268
9
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "hf-asr-leaderboard", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "arxiv:2012.03411", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice - mls - cetuc - lapsbm - voxforge metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch - hf-asr-leaderboard license: apache-2.0 model-index: - name: Lucas Gris XLSR Wav2Vec2 Large 53 Brazilian Portuguese results: - task: name: Speech Recognition type: automatic-speech-recognition metrics: - name: Test WER type: wer value: 12.905054857823264% --- # Wav2vec 2.0 With Open Brazilian Portuguese Datasets This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets: - [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus. - [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in the public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers. - [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz. - [Common Voice 6.1](https://commonvoice.mozilla.org/pt) (_only train_): is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages to train ASR models. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt). The set in Portuguese (mostly Brazilian variant) used in this work is the 6.1 version (pt_63h_2020-12-11) that contains about 50 validated hours and 1,120 unique speakers. - [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control. These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except Common Voice dev/test sets, which were used for validation/test respectively. The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/drive/folders/1XTKIUB4kp3oYOavwH97wq8IPFsxP5sNz?usp=sharing). This model was trained for 80k updates.
#### Datasets in number of instances and number of frames The following image shows the overall distribution of the dataset: ![datasets](https://drive.google.com/uc?export=view&id=1DF2_PehB2pZlEJLcBA7yeZQ9EAuLGh_r) #### Transcription examples | Text | Transcription | |------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| | É comum os usuários confundirem software livre com software livre | É comum os __usuares__ __confunder em__ __softwerlivr__ com __softwerlivre__ | | Ele fez tanto ghostwriting que ele começa a se sentir como um fantasma também | Ele fez tanto __golstraitn__ que ele __começou__ a se sentir como um fantasma também | | Arnold apresentou um gráfico mostrando quantas cegonhas ele havia contado nos últimos dez anos | Arnold apresentou um gráfico mostrando quantas __segonhas__ ele havia contado nos últimos dez anos | | Mais cedo ou mais tarde eles descobrirão como ler esses hieróglifos | Mais __sedo__ ou mais tarde eles descobriram como __de__ esses __ierogrôficos__ | | Viver juntos compartilhar objetivos e ter um bom relacionamento | __E ver__ juntos __signafica__ viver juntos ou __fartlhar__ objetivos ter um bom __relacionamentoo__ | | Da mesma forma uma patente pode impedir que concorrentes desenvolvam produtos similares | Da mesma forma uma patente pode impedir que concorrentes __desenvolva__ produtos similares | | Duas mulheres e uma menina levantam com troféus | Duas mulheres e uma menina levantam com __trofés__ | | Esse acrobata de circo deve ter um sistema vestibular bem treinado pensou o espectador | Esse acrobata de __cirko__ deve ter um sistema vestibular __bemtreinado__ pensou o espectador | | Durante a exposição o tribunal pode fazer quaisquer perguntas ou esclarecimentos que considere apropriados | Durante a exposição o tribunal pode fazer quaisquer perguntas ou esclarecimentos que considere __apropriado__ | ## Imports and dependencies ```python %%capture !pip install datasets !pip install jiwer !pip install torchaudio !pip install transformers !pip install soundfile ``` ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys ``` ## Preparation ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 wer = load_metric("wer") device = "cuda" ``` ```python model_name = 'lgris/wav2vec2-large-xlsr-open-brazilian-portuguese' model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ``` ```python def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["predicted"] = [pred.lower() for pred in batch["predicted"]] batch["target"] = batch["sentence"] return batch ``` ## Tests ### Test against Common Voice (In-domain) ```python dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = 
torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) for pred, target in zip(result["predicted"][:10], result["target"][:10]): print(pred, "|", target) ``` 0.12905054857823264 nem o varanin os altros influmindo os de teterno um bombederster | nem o radar nem os outros instrumentos detectaram o bombardeiro stealth pedir dinheiro é emprestado das pessoas do aldeia | pedir dinheiro emprestado às pessoas da aldeia oito | oito teno calcos | trancá-los realizaram a investigação para resolver o problema | realizar uma investigação para resolver o problema iotube ainda é a melhor plataforma de vídeos | o youtube ainda é a melhor plataforma de vídeos menina e menino beijando nas sombras | menina e menino beijando nas sombras eu sou o senhor | eu sou o senhor duas metcas sentam-se para baixo randes jornais | duas mulheres que sentam-se para baixo lendo jornais eu originalmente esperava | eu originalmente esperava **Result**: 12.90% ### Test against [TEDx](http://www.openslr.org/100/) (Out-of-domain) ```python !gdown --id 1HJEnvthaGYwcV_whHEywgH2daIN4bQna !tar -xf tedx.tar.gz ``` ```python dataset = load_dataset('csv', data_files={'test': 'tedx/test.csv'})['test'] def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) for pred, target in zip(result["predicted"][:10], result["target"][:10]): print(pred, "|", target) ``` 0.35215851987208774 com isso a gente vê que essa rede de pactuação de de deparcerias nos remete a um raciocínio lógico que ao que a gente crê que é a prevenção | com isso a gente vê que essa rede de pactuação de parcerias nos remete a um raciocínio lógico que é o que a gente crê que é a prevenção ente vai para o resultado | e aí a gente vai pro resultado curiosidade hé o que eu descobri desde que comecei a fazer pesquisa lá no ensino médio | e a curiosidade é algo que descobri desde que comecei a fazer pesquisa lá no ensino médio val des quemesho | há vários caminhos que é uma opcissão por comer soldado | que é uma obsessão por comer saudável isso é tão é forte algoltão universal que existem dados que mostram que setenta e cinco por cento das reuniões são dominadas pela voz masculina | e isso é tão forte é algo tão universal que existem dados que mostram que das reuniões são dominadas pela voz masculina não era exatamente isso não estávamos deveto | e não era exatamente isso que nós estávamos a ver durante meci do médio ofiz pesquisa estudei numa escola que chamam a fundação liberate ficava relativamente próximo daqui | durante o ensino médio eu fiz pesquisa estudei numa escola que se chama fundação liberato que fica relativamente próxima daqui oito anos atrás eu fui apresentado por uma doença que até então eu não conhecia e que é bem provável 
que a maior parte de nós todos aqui não conheçamos | oito anos atrás fui apresentado para uma doença que até então eu não conhecia e que é bem provável que a maior parte de nós todos aqui não conheçamos o terceiro é o museu do ripiopeco | o terceiro é o museu do hip hop **Result**: 35.21%
lgris/bp400-xlsr
lgris
2022-04-01T20:31:02Z
91
3
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "hf-asr-leaderboard", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "dataset:tedx", "dataset:sid", "arxiv:2107.11414", "arxiv:2012.03411", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice - mls - cetuc - lapsbm - voxforge - tedx - sid metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch - hf-asr-leaderboard model-index: - name: bp400-xlsr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7.0 type: mozilla-foundation/common_voice_7_0 args: pt metrics: - name: Test WER type: wer value: 14.0 license: apache-2.0 --- # bp400-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset **Paper:** https://arxiv.org/abs/2107.11414 This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets: - [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus. - [Common Voice 7.0](https://commonvoice.mozilla.org/pt): is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt). - [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control. - [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in the public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers. - [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech. - [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old with fields such as place of birth, age, gender, education, and occupation; - [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz. These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except Common Voice dev/test sets, which were used for validation/test respectively. We also made test sets for all the gathered datasets.
| Dataset | Train | Valid | Test | |--------------------------------|-------:|------:|------:| | CETUC | 93.9h | -- | 5.4h | | Common Voice | 37.6h | 8.9h | 9.5h | | LaPS BM | 0.8h | -- | 0.1h | | MLS | 161.0h | -- | 3.7h | | Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h | | SID | 5.0h | -- | 1.0h | | VoxForge | 2.8h | -- | 0.1h | | Total | 437.2h | 8.9h | 21.6h | The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/drive/folders/1eRUExXRF2XK8JxUjIzbLBkLa5wuR3nig?usp=sharing). #### Summary | | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG | |----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------| | bp\_400 (demonstration below) | 0.052 | 0.140 | 0.074 | 0.117 | 0.121 | 0.245 | 0.118 | 0.124 | | bp\_400 + 3-gram | 0.033 | 0.095 | 0.046 | 0.123 | 0.112 | 0.212 | 0.123 | 0.106 | | bp\_400 + 4-gram (demonstration below) | **0.030** | 0.096 | 0.043 | **0.106** | 0.118 | 0.229 | **0.117** | **0.105** | | bp\_400 + 5-gram | 0.033 | 0.094 | 0.043 | 0.123 | **0.111** | **0.210** | 0.123 | **0.105** | | bp\_400 + Transf. | 0.032 | **0.092** | **0.036** | 0.130 | 0.115 | 0.215 | 0.125 | 0.106 | #### Transcription examples | Text | Transcription | |------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| |alguém sabe a que horas começa o jantar | alguém sabe a que horas **começo** jantar | |lila covas ainda não sabe o que vai fazer no fundo|**lilacovas** ainda não sabe o que vai fazer no fundo| |que tal um pouco desse bom spaghetti|**quetá** um pouco **deste** bom **ispaguete**| |hong kong em cantonês significa porto perfumado|**rongkong** **en** **cantones** significa porto perfumado| |vamos hackear esse problema|vamos **rackar** esse problema| |apenas a poucos metros há uma estação de ônibus|apenas **ha** poucos metros **á** uma estação de ônibus| |relâmpago e trovão sempre andam juntos|**relampagotrevão** sempre andam juntos| ## Demonstration ```python MODEL_NAME = "lgris/bp400-xlsr" ``` ### Imports and dependencies ```python %%capture !pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html !pip install datasets !pip install jiwer !pip install transformers !pip install soundfile !pip install pyctcdecode !pip install https://github.com/kpu/kenlm/archive/master.zip ``` ```python import jiwer import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) from pyctcdecode import build_ctcdecoder import torch import re import sys ``` ### Helpers ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = 16_000 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") batch["target"] = batch["sentence"] return batch ``` ```python def calc_metrics(truths, hypos): wers = [] mers = [] wils = [] for t, h in zip(truths, hypos): try: wers.append(jiwer.wer(t, h)) mers.append(jiwer.mer(t, h)) wils.append(jiwer.wil(t, h)) except: # Empty string? 
pass wer = sum(wers)/len(wers) mer = sum(mers)/len(mers) wil = sum(wils)/len(wils) return wer, mer, wil ``` ```python def load_data(dataset): data_files = {'test': f'{dataset}/test.csv'} dataset = load_dataset('csv', data_files=data_files)["test"] return dataset.map(map_to_array) ``` ### Model ```python class STT: def __init__(self, model_name, device='cuda' if torch.cuda.is_available() else 'cpu', lm=None): self.model_name = model_name self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) self.processor = Wav2Vec2Processor.from_pretrained(model_name) self.vocab_dict = self.processor.tokenizer.get_vocab() self.sorted_dict = { k.lower(): v for k, v in sorted(self.vocab_dict.items(), key=lambda item: item[1]) } self.device = device self.lm = lm if self.lm: self.lm_decoder = build_ctcdecoder( list(self.sorted_dict.keys()), self.lm ) def batch_predict(self, batch): features = self.processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(self.device) attention_mask = features.attention_mask.to(self.device) with torch.no_grad(): logits = self.model(input_values, attention_mask=attention_mask).logits if self.lm: logits = logits.cpu().numpy() batch["predicted"] = [] for sample_logits in logits: batch["predicted"].append(self.lm_decoder.decode(sample_logits)) else: pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = self.processor.batch_decode(pred_ids) return batch ``` ### Download datasets ```python %%capture !gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI !mkdir bp_dataset !unzip bp_dataset -d bp_dataset/ ``` ### Tests ```python stt = STT(MODEL_NAME) ``` #### CETUC ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.05159104708285062 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.14031426198658084 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.07432133838383838 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.11678793514817509 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.12152357273433984 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.24666815906766504 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.11873106060606062 ### Tests with LM ```python !rm -rf ~/.cache !gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia stt = STT(MODEL_NAME, 
lm='pt-BR-wiki.word.4-gram.arpa') # !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp # stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa') ``` ### Cetuc ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.030266462438593742 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.09577710237417715 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.043617424242424235 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.10642133314350002 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.11839021001747055 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.22929952467810416 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.11716314935064935
birgermoell/psst-libri960_big
birgermoell
2022-04-01T20:17:17Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-01T19:05:31Z
ASR metrics (pssteval) for split `valid`: FER 9.8%, PER 20.9%
juaner/distilbert-base-uncased-finetuned-cola
juaner
2022-04-01T18:20:42Z
5
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T17:59:52Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: juaner/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juaner/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1909 - Validation Loss: 0.5553 - Train Matthews Correlation: 0.5279 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5191 | 0.4491 | 0.4718 | 0 | | 0.3270 | 0.4571 | 0.5196 | 1 | | 0.1909 | 0.5553 | 0.5279 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
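To complement the auto-generated card, a minimal acceptability-classification sketch using the TensorFlow weights; the CoLA label meaning (1 = acceptable) is an assumption, and the example sentence is illustrative:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "juaner/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("The book was written by the author.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)

# CoLA is binary; label 1 is conventionally "acceptable" (assumed, not stated in the card).
print(probs.numpy())
```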
FrankCorrigan/results
FrankCorrigan
2022-04-01T18:15:40Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:samsum", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-01T01:41:22Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - samsum model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [linydub/bart-large-samsum](https://huggingface.co/linydub/bart-large-samsum) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.0158 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 0.9563 | | No log | 2.0 | 2 | 0.9877 | | No log | 3.0 | 3 | 1.0158 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0 - Datasets 2.0.0 - Tokenizers 0.11.6
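The card above has no usage section. A minimal dialogue-summarization sketch with the `summarization` pipeline; the example dialogue is invented and simply mimics the SAMSum format:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="FrankCorrigan/results")

# Invented SAMSum-style dialogue, purely for illustration.
dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```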
FrankCorrigan/test-model
FrankCorrigan
2022-04-01T17:54:00Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-04-01T01:46:45Z
--- license: apache-2.0 ---
McGill-NLP/bart-qg-nq-checkpoint
McGill-NLP
2022-04-01T17:35:04Z
26
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "arxiv:1910.13461", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-01T16:32:49Z
--- license: cc-by-4.0 --- # BART-base fine-tuned on NaturalQuestions for **Question Generation** [BART Model](https://arxiv.org/pdf/1910.13461.pdf) fine-tuned on [Google NaturalQuestions](https://ai.google.com/research/NaturalQuestions/) for **Question Generation** by treating the long answer as input and the question as output. ## Details of BART The **BART** model was presented in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by *Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer*. Here is the abstract: We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance. ## Details of the downstream task (QG) - Dataset 📚 🧐 Dataset: ```NaturalQuestions``` from Google (https://ai.google.com/research/NaturalQuestions/) | Dataset | Split | # samples | | -------- | ----- | --------- | | NaturalQuestions | train | 97650 | | NaturalQuestions | valid | 10850 | ## Model fine-tuning 🏋️‍ The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/QG/train.py) ## Model in Action 🚀 ```python from transformers import AutoModelForSeq2SeqLM, BartTokenizer #Load the tokenizer tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') #Load the model model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint") ``` ## Citation If you want to cite this model you can use this: ```bibtex @inproceedings{kulshreshtha-etal-2021-back, title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval", author = "Kulshreshtha, Devang and Belfer, Robert and Serban, Iulian Vlad and Reddy, Siva", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.566", pages = "7064--7078", abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). 
While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.", } ``` > Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
ahmedzaky91/Fatima-Fake_news_calssifier
ahmedzaky91
2022-04-01T16:54:24Z
0
0
null
[ "region:us" ]
null
2022-04-01T00:00:39Z
## This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the Fake and Real dataset from Kaggle ## The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - num_epochs: 2
vicl/canine-c-finetuned-mrpc
vicl
2022-04-01T16:33:28Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "canine", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T16:05:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: canine-c-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8627450980392157 - name: F1 type: f1 value: 0.9014084507042254 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-c-finetuned-mrpc This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4066 - Accuracy: 0.8627 - F1: 0.9014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.5014 | 0.7696 | 0.8479 | | No log | 2.0 | 460 | 0.4755 | 0.7892 | 0.8622 | | 0.5096 | 3.0 | 690 | 0.3645 | 0.8431 | 0.8869 | | 0.5096 | 4.0 | 920 | 0.4066 | 0.8627 | 0.9014 | | 0.2619 | 5.0 | 1150 | 0.4551 | 0.8431 | 0.8877 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
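A minimal paraphrase-detection sketch to accompany the card; the MRPC label convention (1 = paraphrase) is assumed rather than documented here, and the sentence pair is invented:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "vicl/canine-c-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

sentence1 = "The company posted higher quarterly profits."
sentence2 = "Quarterly profits at the company rose."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)

# Assumed MRPC label order: 0 = not a paraphrase, 1 = paraphrase.
print(probs)
```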
avialfont/ner-dummy-model
avialfont
2022-04-01T14:59:22Z
5
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-01T10:59:27Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ner-dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ner-dummy-model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.6
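The auto-generated card has no inference example. A minimal token-classification sketch with the TensorFlow weights; the label names come from the checkpoint's `id2label` config, whose contents are not documented in the card, and the example sentence is illustrative:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_name = "avialfont/ner-dummy-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForTokenClassification.from_pretrained(model_name)

inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="tf")
pred_ids = tf.argmax(model(**inputs).logits, axis=-1)[0].numpy()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy().tolist())

# Label meanings depend on the (undocumented) training data; id2label is read from the config.
for token, pred in zip(tokens, pred_ids):
    print(token, model.config.id2label[int(pred)])
```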
eren23/pneumonia_test_attempt
eren23
2022-04-01T14:41:01Z
57
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-19T16:31:28Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pneumonia_test_attempt results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9783163070678711 --- # pneumonia-bielefeld-dl-course This repository contains the model for making pneumonia predictions and was prepared as homework for the Bielefeld University Deep Learning course. The code used for this implementation mostly comes from https://github.com/nateraw/huggingpics, which provides a ready-made pipeline for model fine-tuning with Hugging Face and PyTorch Lightning on another dataset.
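A minimal image-classification sketch for this checkpoint; the image path is a placeholder and the label names depend on the checkpoint's config:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="eren23/pneumonia_test_attempt")

# "chest_xray.jpg" is a placeholder path; any chest X-ray image file should work.
for prediction in classifier("chest_xray.jpg"):
    print(prediction["label"], round(prediction["score"], 4))
```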
notexist/ttt
notexist
2022-04-01T13:16:50Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-01T12:45:30Z
--- license: apache-2.0 ---
bharatR/up_down
bharatR
2022-04-01T12:38:05Z
0
0
null
[ "classification", "en", "dataset:cifar10-custom", "region:us" ]
null
2022-04-01T12:19:00Z
---
language: en
tags:
- classification
datasets:
- cifar10-custom
metrics:
- accuracy
---

# Up-Down Classification

This repo holds the weights of a ResNet-18 model trained on custom CIFAR-10 data in which some images are flipped upside down; the goal is to predict the orientation of each image (a 0/1 classification task).
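A loading sketch (not from the original repo): it assumes the checkpoint is a torchvision ResNet-18 state dict with a two-class head and uses a hypothetical weights filename, since neither detail is documented above.

```python
import torch
from torch import nn
from torchvision.models import resnet18

# Rebuild the assumed architecture: ResNet-18 with a 2-way head (0 = upright, 1 = upside down).
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

# Hypothetical filename: the actual weights file in this repo may be named differently.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```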
birgermoell/psst-base-rep
birgermoell
2022-04-01T12:02:45Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-01T07:58:20Z
The model is a reproduction of the baseline trained with Wav2vec2-small on PSST.

`pssteval` ASR metrics for split `valid`:

- FER: 10.4%
- PER: 23.1%
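A minimal transcription sketch (not part of the original card), assuming the standard transformers automatic-speech-recognition pipeline and 16 kHz speech input; the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="birgermoell/psst-base-rep")

# Placeholder path: any 16 kHz mono speech recording.
print(asr("sample_utterance.wav")["text"])
```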
xxr/bert-base-uncased-multi-128
xxr
2022-04-01T11:40:30Z
3
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-01T05:36:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model_index: - name: bert-base-uncased-multi-128 results: - task: name: Masked Language Modeling type: fill-mask --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-multi-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.6636 | 1.0 | 812 | 3.2325 | | 3.2963 | 2.0 | 1624 | 3.1937 | | 3.1132 | 3.0 | 2436 | 3.2984 | | 2.9386 | 4.0 | 3248 | 3.2430 | | 2.7742 | 5.0 | 4060 | 3.1272 | | 2.5954 | 6.0 | 4872 | 3.1778 | | 2.501 | 7.0 | 5684 | 3.1649 | | 2.4073 | 8.0 | 6496 | 2.9395 | | 2.2933 | 9.0 | 7308 | 3.1262 | | 2.2218 | 10.0 | 8120 | 2.9994 | | 2.1558 | 11.0 | 8932 | 2.9922 | | 2.0873 | 12.0 | 9744 | 2.8414 | | 2.0104 | 13.0 | 10556 | 2.9351 | | 1.9364 | 14.0 | 11368 | 2.9253 | | 1.9045 | 15.0 | 12180 | 2.8701 | | 1.9152 | 16.0 | 12992 | 2.7101 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.7.1 - Datasets 1.16.1 - Tokenizers 0.10.3
z5ying/distilgpt2-finetuned-wikitext2
z5ying
2022-04-01T10:47:57Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-01T07:10:02Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [z5ying/distilgpt2-finetuned-wikitext2](https://huggingface.co/z5ying/distilgpt2-finetuned-wikitext2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 118 | 3.0306 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
osanseviero/llama-alpaca-snake
osanseviero
2022-04-01T09:45:01Z
62
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "llama-leaderboard", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-04-01T09:20:01Z
--- tags: - image-classification - pytorch - huggingpics - llama-leaderboard metrics: - accuracy model-index: - name: llama-alpaca-snake results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.7910447716712952 --- # llama-alpaca-snake Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### alpaca ![alpaca](images/alpaca.jpg) #### llamas ![llamas](images/llamas.jpg) #### snake ![snake](images/snake.jpg)
abdusah/aradia-ctc-data2vec-ft
abdusah
2022-04-01T08:19:29Z
5
0
transformers
[ "transformers", "pytorch", "data2vec-audio", "automatic-speech-recognition", "abdusahmbzuai/arabic_speech_massive_300hrs", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T14:34:56Z
--- tags: - automatic-speech-recognition - abdusahmbzuai/arabic_speech_massive_300hrs - generated_from_trainer model-index: - name: aradia-ctc-data2vec-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aradia-ctc-data2vec-ft This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-data2vec-ft](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-data2vec-ft) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset. It achieves the following results on the evaluation set: - Loss: 3.0464 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | No log | 0.43 | 100 | 3.3600 | 1.0 | | No log | 0.87 | 200 | 3.0887 | 1.0 | | No log | 1.3 | 300 | 3.0779 | 1.0 | | No log | 1.74 | 400 | 3.0551 | 1.0 | | 4.8553 | 2.17 | 500 | 3.0526 | 1.0 | | 4.8553 | 2.61 | 600 | 3.0560 | 1.0 | | 4.8553 | 3.04 | 700 | 3.1251 | 1.0 | | 4.8553 | 3.48 | 800 | 3.0870 | 1.0 | | 4.8553 | 3.91 | 900 | 3.0822 | 1.0 | | 3.1133 | 4.35 | 1000 | 3.0484 | 1.0 | | 3.1133 | 4.78 | 1100 | 3.0558 | 1.0 | | 3.1133 | 5.22 | 1200 | 3.1019 | 1.0 | | 3.1133 | 5.65 | 1300 | 3.0914 | 1.0 | | 3.1133 | 6.09 | 1400 | 3.0691 | 1.0 | | 3.109 | 6.52 | 1500 | 3.0589 | 1.0 | | 3.109 | 6.95 | 1600 | 3.0508 | 1.0 | | 3.109 | 7.39 | 1700 | 3.0540 | 1.0 | | 3.109 | 7.82 | 1800 | 3.0546 | 1.0 | | 3.109 | 8.26 | 1900 | 3.0524 | 1.0 | | 3.1106 | 8.69 | 2000 | 3.0569 | 1.0 | | 3.1106 | 9.13 | 2100 | 3.0622 | 1.0 | | 3.1106 | 9.56 | 2200 | 3.0518 | 1.0 | | 3.1106 | 10.0 | 2300 | 3.0749 | 1.0 | | 3.1106 | 10.43 | 2400 | 3.0698 | 1.0 | | 3.1058 | 10.87 | 2500 | 3.0665 | 1.0 | | 3.1058 | 11.3 | 2600 | 3.0555 | 1.0 | | 3.1058 | 11.74 | 2700 | 3.0589 | 1.0 | | 3.1058 | 12.17 | 2800 | 3.0611 | 1.0 | | 3.1058 | 12.61 | 2900 | 3.0561 | 1.0 | | 3.1071 | 13.04 | 3000 | 3.0480 | 1.0 | | 3.1071 | 13.48 | 3100 | 3.0492 | 1.0 | | 3.1071 | 13.91 | 3200 | 3.0574 | 1.0 | | 3.1071 | 14.35 | 3300 | 3.0538 | 1.0 | | 3.1071 | 14.78 | 3400 | 3.0505 | 1.0 | | 3.1061 | 15.22 | 3500 | 3.0600 | 1.0 | | 3.1061 | 15.65 | 3600 | 3.0596 | 1.0 | | 3.1061 | 16.09 | 3700 | 3.0623 | 1.0 | | 3.1061 | 16.52 | 3800 | 3.0800 | 1.0 | | 3.1061 | 16.95 | 3900 | 3.0583 | 1.0 | | 3.1036 | 17.39 | 4000 | 3.0534 | 1.0 | | 3.1036 | 17.82 | 4100 | 3.0563 | 1.0 | | 3.1036 | 18.26 | 4200 | 3.0481 | 1.0 | | 3.1036 | 18.69 | 4300 | 3.0477 | 1.0 | | 3.1036 | 19.13 | 4400 | 3.0505 | 1.0 | | 3.1086 | 19.56 | 4500 | 3.0485 | 1.0 | | 3.1086 | 20.0 | 4600 | 3.0481 | 1.0 | | 3.1086 | 20.43 | 4700 | 3.0615 | 1.0 | | 3.1086 | 20.87 | 4800 | 3.0658 | 1.0 | | 3.1086 | 21.3 | 4900 | 3.0505 | 1.0 | | 3.1028 | 21.74 | 5000 | 3.0492 | 1.0 | | 3.1028 | 22.17 | 5100 | 3.0485 | 1.0 | | 3.1028 | 22.61 | 5200 | 3.0483 | 1.0 | | 3.1028 | 23.04 | 5300 | 3.0479 | 
1.0 | | 3.1028 | 23.48 | 5400 | 3.0509 | 1.0 | | 3.1087 | 23.91 | 5500 | 3.0530 | 1.0 | | 3.1087 | 24.35 | 5600 | 3.0486 | 1.0 | | 3.1087 | 24.78 | 5700 | 3.0514 | 1.0 | | 3.1087 | 25.22 | 5800 | 3.0505 | 1.0 | | 3.1087 | 25.65 | 5900 | 3.0508 | 1.0 | | 3.1043 | 26.09 | 6000 | 3.0501 | 1.0 | | 3.1043 | 26.52 | 6100 | 3.0467 | 1.0 | | 3.1043 | 26.95 | 6200 | 3.0466 | 1.0 | | 3.1043 | 27.39 | 6300 | 3.0465 | 1.0 | | 3.1043 | 27.82 | 6400 | 3.0465 | 1.0 | | 3.1175 | 28.26 | 6500 | 3.0466 | 1.0 | | 3.1175 | 28.69 | 6600 | 3.0466 | 1.0 | | 3.1175 | 29.13 | 6700 | 3.0465 | 1.0 | | 3.1175 | 29.56 | 6800 | 3.0465 | 1.0 | | 3.1175 | 30.0 | 6900 | 3.0464 | 1.0 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
jkhan447/sentiment-model-sample-27go-emotion
jkhan447
2022-04-01T08:13:56Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:go_emotions", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-28T06:05:25Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - go_emotions metrics: - accuracy model-index: - name: sentiment-model-sample-27go-emotion results: - task: name: Text Classification type: text-classification dataset: name: go_emotions type: go_emotions args: simplified metrics: - name: Accuracy type: accuracy value: 0.5888888888888889 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-model-sample-27go-emotion This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the go_emotions dataset. It achieves the following results on the evaluation set: - Loss: 4.1765 - Accuracy: 0.5889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
z5ying/mbart-large-cc25-finetuned-source-to-target
z5ying
2022-04-01T03:43:40Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-07T18:25:31Z
--- tags: - generated_from_trainer model-index: - name: mbart-large-cc25-finetuned-source-to-target results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-cc25-finetuned-source-to-target This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
dchung117/distilbert-base-uncased-finetuned-squad-d5716d28
dchung117
2022-04-01T02:02:28Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
2022-04-01T01:51:41Z
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
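The card describes the second, task-specific distillation step but does not include training code. The sketch below illustrates the general idea under stated assumptions: it is not the authors' script, and the temperature, loss weighting, and batch format are illustrative choices.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForQuestionAnswering

# Student and teacher, both producing start/end logits for extractive QA.
student = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
teacher = AutoModelForQuestionAnswering.from_pretrained("lewtun/bert-base-uncased-finetuned-squad-v1")
teacher.eval()

def distillation_step(batch, temperature=2.0, alpha=0.5):
    """One training step combining the usual SQuAD span loss with a KL term to the teacher.

    `batch` is assumed to contain input_ids, attention_mask, start_positions, end_positions.
    """
    # Hard-label loss: the standard span-prediction cross-entropy computed by the model.
    student_out = student(**batch)
    hard_loss = student_out.loss

    with torch.no_grad():
        teacher_out = teacher(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])

    # Soft-label loss: match the teacher's start/end distributions at a higher temperature.
    kl = torch.nn.KLDivLoss(reduction="batchmean")
    soft_loss = (
        kl(F.log_softmax(student_out.start_logits / temperature, dim=-1),
           F.softmax(teacher_out.start_logits / temperature, dim=-1))
        + kl(F.log_softmax(student_out.end_logits / temperature, dim=-1),
             F.softmax(teacher_out.end_logits / temperature, dim=-1))
    ) * (temperature ** 2)

    return alpha * hard_loss + (1 - alpha) * soft_loss
```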
Mr-Wick/xlnet-base-cased
Mr-Wick
2022-04-01T01:31:59Z
3
0
transformers
[ "transformers", "tf", "xlnet", "question-answering", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
question-answering
2022-03-26T12:52:07Z
--- tags: - generated_from_keras_callback model-index: - name: xlnet-base-cased results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16530, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.12.0
anisdismail/celebA-orientation-detection
anisdismail
2022-03-31T21:51:37Z
0
2
null
[ "image-classification", "pytorch", "en", "dataset:nielsr/CelebA-faces", "license:cc-by-nc-4.0", "model-index", "region:us" ]
image-classification
2022-03-31T19:48:26Z
--- language: - en license: cc-by-nc-4.0 tags: - image-classification - pytorch datasets: - nielsr/CelebA-faces model-index: - name: celebA_orientation_detection_model results: - task: type: image_classification # Required. Example: automatic-speech-recognition name: Image Classification # Optional. Example: Speech Recognition dataset: type: nielsr/CelebA-faces name: CelebA-faces metrics: - type: f1score # Required. Example: wer value: 0.97 # Required. Example: 20.90 name: Val F1 Score # Optional. Example: Test WER --- ## Detecting the Orientation of CelebA pictures using Deep Learning This model has been trained on a modified version of the CelebA-faces dataset, which was made from flipping 20,000 images upside down and keeping 20,000 images intact.<br> The model relies on Resnet-18 as a backbone and is connected to one output node to classify whether the images are flipped upside down (1) or not (0).
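An inference sketch (not from the original card): it assumes a torchvision ResNet-18 backbone with a single output logit as described above, a hypothetical weights filename, and illustrative preprocessing (image size and normalization are not documented).

```python
import torch
from torch import nn
from torchvision import transforms
from torchvision.models import resnet18
from PIL import Image

# Assumed architecture from the description: ResNet-18 backbone, one output node.
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.load_state_dict(torch.load("pytorch_model.bin", map_location="cpu"))  # hypothetical filename
model.eval()

# Illustrative preprocessing; the training-time transforms are not documented in the card.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

image = Image.open("face.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    logit = model(preprocess(image).unsqueeze(0))
is_flipped = torch.sigmoid(logit).item() > 0.5  # 1 = upside down, 0 = upright
print(is_flipped)
```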
arjundd/dosma-models
arjundd
2022-03-31T21:39:54Z
0
0
null
[ "mri", "knee", "segmentation", "en", "region:us" ]
null
2022-03-31T18:30:03Z
--- language: en tags: - mri - knee - segmentation --- # DOSMA models These models are those that are made publicly available in the [DOSMA](https://github.com/ad12/DOSMA). More information on these models can be found in the [documentation](https://dosma.readthedocs.io/en/latest/models.html). ## Citation If you use any models, please cite any reference for the model in addition to the DOSMA reference below: ``` @inproceedings{desai2019dosma, title={DOSMA: A deep-learning, open-source framework for musculoskeletal MRI analysis}, author={Desai, Arjun D and Barbieri, Marco and Mazzoli, Valentina and Rubin, Elka and Black, Marianne S and Watkins, Lauren E and Gold, Garry E and Hargreaves, Brian A and Chaudhari, Akshay S}, booktitle={Proc 27th Annual Meeting ISMRM, Montreal}, pages={1135}, year={2019} } ```
abdusah/aradia-ctc-hubert-ft
abdusah
2022-03-31T20:56:27Z
14
0
transformers
[ "transformers", "pytorch", "hubert", "automatic-speech-recognition", "abdusahmbzuai/arabic_speech_massive_300hrs", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T08:14:31Z
--- tags: - automatic-speech-recognition - abdusahmbzuai/arabic_speech_massive_300hrs - generated_from_trainer model-index: - name: aradia-ctc-hubert-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aradia-ctc-hubert-ft This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.8536 - Wer: 0.3737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.43 | 100 | 3.6934 | 1.0 | | No log | 0.87 | 200 | 3.0763 | 1.0 | | No log | 1.3 | 300 | 2.9737 | 1.0 | | No log | 1.74 | 400 | 2.5734 | 1.0 | | 5.0957 | 2.17 | 500 | 1.1900 | 0.9011 | | 5.0957 | 2.61 | 600 | 0.9726 | 0.7572 | | 5.0957 | 3.04 | 700 | 0.8960 | 0.6209 | | 5.0957 | 3.48 | 800 | 0.7851 | 0.5515 | | 5.0957 | 3.91 | 900 | 0.7271 | 0.5115 | | 1.0312 | 4.35 | 1000 | 0.7053 | 0.4955 | | 1.0312 | 4.78 | 1100 | 0.6823 | 0.4737 | | 1.0312 | 5.22 | 1200 | 0.6768 | 0.4595 | | 1.0312 | 5.65 | 1300 | 0.6635 | 0.4488 | | 1.0312 | 6.09 | 1400 | 0.6602 | 0.4390 | | 0.6815 | 6.52 | 1500 | 0.6464 | 0.4310 | | 0.6815 | 6.95 | 1600 | 0.6455 | 0.4394 | | 0.6815 | 7.39 | 1700 | 0.6630 | 0.4312 | | 0.6815 | 7.82 | 1800 | 0.6521 | 0.4126 | | 0.6815 | 8.26 | 1900 | 0.6282 | 0.4284 | | 0.544 | 8.69 | 2000 | 0.6248 | 0.4178 | | 0.544 | 9.13 | 2100 | 0.6510 | 0.4104 | | 0.544 | 9.56 | 2200 | 0.6527 | 0.4013 | | 0.544 | 10.0 | 2300 | 0.6511 | 0.4064 | | 0.544 | 10.43 | 2400 | 0.6734 | 0.4061 | | 0.4478 | 10.87 | 2500 | 0.6756 | 0.4145 | | 0.4478 | 11.3 | 2600 | 0.6727 | 0.3990 | | 0.4478 | 11.74 | 2700 | 0.6619 | 0.4007 | | 0.4478 | 12.17 | 2800 | 0.6614 | 0.4019 | | 0.4478 | 12.61 | 2900 | 0.6695 | 0.4004 | | 0.3919 | 13.04 | 3000 | 0.6778 | 0.3966 | | 0.3919 | 13.48 | 3100 | 0.6872 | 0.3971 | | 0.3919 | 13.91 | 3200 | 0.6882 | 0.3945 | | 0.3919 | 14.35 | 3300 | 0.7177 | 0.4010 | | 0.3919 | 14.78 | 3400 | 0.6888 | 0.4043 | | 0.3767 | 15.22 | 3500 | 0.7124 | 0.4202 | | 0.3767 | 15.65 | 3600 | 0.7276 | 0.4120 | | 0.3767 | 16.09 | 3700 | 0.7265 | 0.4034 | | 0.3767 | 16.52 | 3800 | 0.7392 | 0.4077 | | 0.3767 | 16.95 | 3900 | 0.7403 | 0.3965 | | 0.3603 | 17.39 | 4000 | 0.7445 | 0.4016 | | 0.3603 | 17.82 | 4100 | 0.7579 | 0.4012 | | 0.3603 | 18.26 | 4200 | 0.7225 | 0.3963 | | 0.3603 | 18.69 | 4300 | 0.7355 | 0.3951 | | 0.3603 | 19.13 | 4400 | 0.7482 | 0.3925 | | 0.3153 | 19.56 | 4500 | 0.7723 | 0.3972 | | 0.3153 | 20.0 | 4600 | 0.7469 | 0.3898 | | 0.3153 | 20.43 | 4700 | 0.7800 | 0.3944 | | 0.3153 | 20.87 | 4800 | 0.7827 | 0.3897 | | 0.3153 | 21.3 | 4900 | 0.7935 | 0.3914 | | 0.286 | 21.74 | 5000 | 
0.7984 | 0.3750 | | 0.286 | 22.17 | 5100 | 0.7945 | 0.3830 | | 0.286 | 22.61 | 5200 | 0.8011 | 0.3775 | | 0.286 | 23.04 | 5300 | 0.7978 | 0.3824 | | 0.286 | 23.48 | 5400 | 0.8161 | 0.3833 | | 0.2615 | 23.91 | 5500 | 0.7823 | 0.3858 | | 0.2615 | 24.35 | 5600 | 0.8312 | 0.3863 | | 0.2615 | 24.78 | 5700 | 0.8427 | 0.3819 | | 0.2615 | 25.22 | 5800 | 0.8432 | 0.3802 | | 0.2615 | 25.65 | 5900 | 0.8286 | 0.3794 | | 0.2408 | 26.09 | 6000 | 0.8224 | 0.3824 | | 0.2408 | 26.52 | 6100 | 0.8228 | 0.3823 | | 0.2408 | 26.95 | 6200 | 0.8324 | 0.3795 | | 0.2408 | 27.39 | 6300 | 0.8564 | 0.3744 | | 0.2408 | 27.82 | 6400 | 0.8629 | 0.3774 | | 0.2254 | 28.26 | 6500 | 0.8545 | 0.3778 | | 0.2254 | 28.69 | 6600 | 0.8492 | 0.3767 | | 0.2254 | 29.13 | 6700 | 0.8511 | 0.3751 | | 0.2254 | 29.56 | 6800 | 0.8491 | 0.3753 | | 0.2254 | 30.0 | 6900 | 0.8536 | 0.3737 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
ghees/FatimeFellowship
ghees
2022-03-31T20:47:24Z
0
0
null
[ "region:us" ]
null
2022-03-31T20:45:21Z
Preprocessing before feeding to the model:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('paraphrase-MiniLM-L6-v2', device='cuda')

...

# The enclosing function was elided in the original snippet; the name below is assumed.
def embed(text):
    embeddings = model.encode([text])
    return embeddings[0]
```
osanseviero/test_model_bertmesh
osanseviero
2022-03-31T20:35:05Z
4
0
transformers
[ "transformers", "pytorch", "bert", "custom_code", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-31T19:47:46Z
---
license: apache-2.0
---

# WellcomeBertMesh

WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([Mesh](https://www.nlm.nih.gov/mesh/meshhome.html)). Although it was developed with research grants in mind, it should be applicable to any biomedical text close to the domain it was trained on, which is abstracts from biomedical publications.

# Model description

The model is inspired by [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/), which is trained on the full text of biomedical publications and uses BioBert as its pretrained model.

WellcomeBertMesh uses the latest state-of-the-art model in the biomedical domain, [PubMedBert](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which allows the model to pay attention to different tokens per label when deciding whether a label applies.

We train the model using data from the [BioASQ](http://bioasq.org) competition, which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing, which gives us ~2.5M publications for training and 220K for testing, out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs.

The model achieves 63% micro f1 with a 0.5 threshold for all labels.

The code for developing the model is open source and can be found at https://github.com/wellcometrust/grants_tagger

# How to use

⚠️ You need transformers 4.17+ for the example to work due to its recent support for custom models.

You can use the model straight from the hub, but because it contains a custom forward function (due to the multilabel attention head) you have to pass `trust_remote_code=True`. You can access the probabilities for all labels by omitting `return_labels=True`.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Wellcome/WellcomeBertMesh"
)
model = AutoModel.from_pretrained(
    "Wellcome/WellcomeBertMesh",
    trust_remote_code=True
)

text = "This grant is about malaria and not about HIV."
inputs = tokenizer([text], padding="max_length")
labels = model(**inputs, return_labels=True)
print(labels)
```

You can inspect the model code by navigating to the repository files and looking at `model.py`.
arampacha/gpt-neo-therapist-small
arampacha
2022-03-31T20:34:26Z
17
1
transformers
[ "transformers", "pytorch", "tensorboard", "onnx", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-30T08:40:54Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: gpt-neo-therapist-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-therapist-small This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.6731 - Rouge1: 39.5028 - Rouge2: 6.43 - Rougel: 24.0091 - Rougelsum: 35.4481 - Gen Len: 204.1329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 24 - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:| | 9.9955 | 0.97 | 7 | 6.8195 | 18.6047 | 1.0194 | 14.8565 | 17.9774 | 212.0983 | | 6.9729 | 1.97 | 14 | 5.6783 | 26.3789 | 3.0779 | 18.5195 | 24.8592 | 203.0925 | | 5.2614 | 2.97 | 21 | 5.0506 | 34.9428 | 4.921 | 21.9741 | 32.1122 | 206.2775 | | 5.0599 | 3.97 | 28 | 4.7372 | 38.5235 | 6.2251 | 23.5923 | 34.5633 | 204.2428 | | 4.5479 | 4.97 | 35 | 4.6731 | 39.5028 | 6.43 | 24.0091 | 35.4481 | 204.1329 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
WENGSYX/Deberta-Chinese-Large
WENGSYX
2022-03-31T20:08:59Z
56
16
transformers
[ "transformers", "pytorch", "deberta", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# Deberta-Chinese

This project pretrains Microsoft's open-source DeBERTa model on Chinese-language data. The model is released to give others more choices of pretrained language models.

The model was pretrained on the WuDaoCorpora corpus. WuDaoCorpora is a large-scale, high-quality dataset built by the Beijing Academy of Artificial Intelligence (BAAI) to support research on the "WuDao" large-model project.

Pretraining used whole-word masking (WWM) and n-gram MLM, among other pretraining methods.

| Pretrained model | Learning rate | Batch size | Hardware | Corpus | Time | Optimizer |
| --------------------- | ------------- | ---------- | -------- | ------ | ------- | --------- |
| Deberta-Chinese-Large | 1e-5 | 512 | 2*3090 | 200G | 14 days | AdamW |

### Loading and usage

Built on huggingface-transformers:

```python
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")
```

#### Note: use BertTokenizer to load the Chinese vocabulary.
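A short feature-extraction sketch (not from the original card), following the loading instructions above; the example sentence is arbitrary.

```python
import torch
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")

inputs = tokenizer("今天天气很好。", return_tensors="pt")  # "The weather is nice today."
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```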
huggingtweets/stillconor
huggingtweets
2022-03-31T17:49:05Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-31T16:59:05Z
--- language: en thumbnail: http://www.huggingtweets.com/stillconor/1648748939988/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1485398297984389121/DmUfFheN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">conor</div> <div style="text-align: center; font-size: 14px;">@stillconor</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from conor. | Data | conor | | --- | --- | | Tweets downloaded | 3199 | | Retweets | 102 | | Short tweets | 432 | | Tweets kept | 2665 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z83yigq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stillconor's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30hsnorw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30hsnorw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/stillconor') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
israfelsr/UpsideDownClassifier
israfelsr
2022-03-31T17:06:27Z
0
0
null
[ "region:us" ]
null
2022-03-31T15:41:33Z
# UpsideDownClassifier

This classifier was trained using the [auto-cats-and-dogs](https://huggingface.co/datasets/nateraw/auto-cats-and-dogs) dataset. It was trained over 5 epochs using a pretrained ResNet-18.

The configuration for the model was:

```python
config = {
    "batch_size": 64,
    "num_epochs": 5,
    "lr": 0.005,
    "betas": (0.9, 0.999),
    "eps": 1e-6,
    "lr": 8e-3,  # note: "lr" appears twice; in a Python dict literal the later value (8e-3) takes effect
    "do_eval": True
}
```

## Training Plots

The figures below show the training plots for accuracy and loss on both the training and validation sets.

### Accuracy Plot

![Accuracy](https://huggingface.co/israfelsr/UpsideDownClassifier/blob/main/accuracy.png)

### Loss Plot

![Loss](https://huggingface.co/israfelsr/UpsideDownClassifier/blob/main/loss.png)

## Some Results

Evaluating on the test set, we obtain:

- Accuracy = 0.9696

A batch with some misclassifications can be seen in the picture below.

![Results](https://huggingface.co/israfelsr/UpsideDownClassifier/blob/main/results.png)
huggingtweets/youtube
huggingtweets
2022-03-31T14:06:33Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-31T14:05:50Z
--- language: en thumbnail: http://www.huggingtweets.com/youtube/1648735587597/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1427292844612595720/RC1YSvuT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">YouTube</div> <div style="text-align: center; font-size: 14px;">@youtube</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from YouTube. | Data | YouTube | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 23 | | Short tweets | 104 | | Tweets kept | 3123 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dx34obn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @youtube's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/youtube') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
chrisjay/fonxlsr
chrisjay
2022-03-31T13:35:06Z
40
7
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "hf-asr-leaderboard", "fon", "dataset:fon_dataset", "arxiv:2103.07762", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: fon
datasets:
- fon_dataset
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: Fon XLSR Wav2Vec2 Large 53
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: fon
      type: fon_dataset
      args: fon
    metrics:
    - name: Test WER
      type: wer
      value: 14.97
---

# Wav2Vec2-Large-XLSR-53-Fon

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on [Fon (or Fongbe)](https://en.wikipedia.org/wiki/Fon_language) using the [Fon Dataset](https://github.com/laleye/pyFongbe/tree/master/data).

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import os
import re
import json
import random
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load test_dataset from saved files in folder
for root, dirs, files in os.walk("test/"):
    test_dataset = load_dataset("json", data_files=[os.path.join(root, i) for i in files], split="train")

# Remove unnecessary chars
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”]'

def remove_special_characters(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch

test_dataset = test_dataset.map(remove_special_characters)

processor = Wav2Vec2Processor.from_pretrained("chrisjay/wav2vec2-large-xlsr-53-fon")
model = Wav2Vec2ForCTC.from_pretrained("chrisjay/wav2vec2-large-xlsr-53-fon")

# No need for resampling because the audio dataset is already at 16kHz
# resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array.squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on our unique Fon test data.

```python
import os
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

for root, dirs, files in os.walk("test/"):
    test_dataset = load_dataset("json", data_files=[os.path.join(root, i) for i in files], split="train")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”]'

def remove_special_characters(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch

test_dataset = test_dataset.map(remove_special_characters)
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("chrisjay/wav2vec2-large-xlsr-53-fon")
model = Wav2Vec2ForCTC.from_pretrained("chrisjay/wav2vec2-large-xlsr-53-fon")
model.to("cuda")

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array[0].numpy()
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["sentence"]
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Evaluation on the test dataset
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 14.97 %

## Training

The [Fon dataset](https://github.com/laleye/pyFongbe/tree/master/data) was split into `train` (8235 samples), `validation` (1107 samples), and `test` (1061 samples).

The script used for training can be found [here](https://colab.research.google.com/drive/11l6qhJCYnPTG1TQZ8f3EvKB9z12TQi4g?usp=sharing)

# Collaborators on this project

- Chris C. Emezue ([Twitter](https://twitter.com/ChrisEmezue))|(chris.emezue@gmail.com)
- Bonaventure F.P. Dossou (HuggingFace Username: [bonadossou](https://huggingface.co/bonadossou))|([Twitter](https://twitter.com/bonadossou))|(femipancrace.dossou@gmail.com)

## This is a joint project continuing our research on [OkwuGbé: End-to-End Speech Recognition for Fon and Igbo](https://arxiv.org/abs/2103.07762)
scasutt/wav2vec2-base_toy_train_data_slow_10pct
scasutt
2022-03-31T13:12:54Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-27T02:28:24Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_slow_10pct results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_slow_10pct This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3248 - Wer: 0.7175 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0663 | 2.1 | 500 | 3.0725 | 0.9982 | | 1.1679 | 4.2 | 1000 | 1.3620 | 0.8889 | | 0.6789 | 6.3 | 1500 | 1.2182 | 0.8160 | | 0.5764 | 8.4 | 2000 | 1.2469 | 0.7667 | | 0.4603 | 10.5 | 2500 | 1.2851 | 0.7533 | | 0.4085 | 12.6 | 3000 | 1.2351 | 0.7401 | | 0.3583 | 14.7 | 3500 | 1.2455 | 0.7367 | | 0.3158 | 16.81 | 4000 | 1.3663 | 0.7261 | | 0.2817 | 18.91 | 4500 | 1.3248 | 0.7175 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
frtna/jwt300_mt-Italian-to-Spanish_transformers
frtna
2022-03-31T11:18:09Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:new_dataset", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-29T09:49:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - new_dataset metrics: - sacrebleu model-index: - name: jwt300_mt-Italian-to-Spanish_transformers results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: new_dataset type: new_dataset args: jwt300_mt metrics: - name: Sacrebleu type: sacrebleu value: 0.9057 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # jwt300_mt-Italian-to-Spanish_transformers This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the new_dataset dataset. It achieves the following results on the evaluation set: - Loss: 2.4425 - Sacrebleu: 0.9057 - Gen Len: 18.1276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 2.7545 | 1.0 | 2229 | 2.4425 | 0.9057 | 18.1276 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.11.6
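An inference sketch (not part of the original card): the input format used during fine-tuning (e.g. whether a T5-style task prefix was applied) is not documented, so the prefix below is an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "frtna/jwt300_mt-Italian-to-Spanish_transformers"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed prompt format; adjust if the fine-tuning data used a different prefix.
text = "translate Italian to Spanish: Il tempo oggi è bello."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```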
scasutt/wav2vec2-base_toy_train_data_random_low_pass
scasutt
2022-03-31T10:42:02Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T08:21:35Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_random_low_pass results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_random_low_pass This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3227 - Wer: 0.7288 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0795 | 2.1 | 500 | 3.2227 | 0.9982 | | 1.21 | 4.2 | 1000 | 1.3713 | 0.8879 | | 0.742 | 6.3 | 1500 | 1.2660 | 0.8296 | | 0.5877 | 8.4 | 2000 | 1.2921 | 0.7794 | | 0.4823 | 10.5 | 2500 | 1.2899 | 0.7565 | | 0.4036 | 12.6 | 3000 | 1.3486 | 0.7494 | | 0.391 | 14.7 | 3500 | 1.2701 | 0.7466 | | 0.3426 | 16.81 | 4000 | 1.3570 | 0.7279 | | 0.3015 | 18.91 | 4500 | 1.3227 | 0.7288 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
unjustify/autotrain-commonsence-689620825
unjustify
2022-03-31T06:38:08Z
7
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain", "en", "dataset:unjustify/autotrain-data-commonsence", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T06:18:51Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - unjustify/autotrain-data-commonsence co2_eq_emissions: 20.656741915705204 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 689620825 - CO2 Emissions (in grams): 20.656741915705204 ## Validation Metrics - Loss: 0.7315372824668884 - Accuracy: 0.6354949675117849 - Precision: 0.63792194092827 - Recall: 0.6191451241361658 - AUC: 0.6912165223485615 - F1: 0.6283932978308872 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/unjustify/autotrain-commonsence-689620825 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("unjustify/autotrain-commonsence-689620825", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("unjustify/autotrain-commonsence-689620825", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
ai4bharat/MultiIndicParaphraseGeneration
ai4bharat
2022-03-31T06:21:30Z
19
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "paraphrase-generation", "multilingual", "nlp", "indicnlp", "as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te", "dataset:ai4bharat/IndicParaphrase", "arxiv:2203.05437", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-16T17:37:59Z
--- tags: - paraphrase-generation - multilingual - nlp - indicnlp datasets: - ai4bharat/IndicParaphrase language: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te license: - mit --- # MultiIndicParaphraseGeneration This repository contains the [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint finetuned on the 11 languages of [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) dataset. For finetuning details, see the [paper](https://arxiv.org/abs/2203.05437). <ul> <li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li> <li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for decoding. </li> <li> Trained on large Indic language corpora (5.53 million sentences). </li> <li> All languages, have been represented in Devanagari script to encourage transfer learning among the related languages. </li> </ul> ## Using this model in `transformers` ``` from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM from transformers import AlbertTokenizer, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration", do_lower_case=False, use_fast=False, keep_accents=True) # Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration", do_lower_case=False, use_fast=False, keep_accents=True) model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration") # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration") # Some initial mapping bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>") eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>") pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>") # To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>'] # First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>". inp = tokenizer("दिल्ली यूनिवर्सिटी देश की प्रसिद्ध यूनिवर्सिटी में से एक है. </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # For generation. Pardon the messiness. Note the decoder_start_token_id. model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>")) # Decode to get output strings decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) print(decoded_output) # दिल्ली विश्वविद्यालय देश की प्रमुख विश्वविद्यालयों में शामिल है। # Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library. ``` # Note: If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script. 
## Benchmarks Scores on the `IndicParaphrase` test sets are as follows: Language | BLEU / Self-BLEU / iBLEU ---------|---------------------------- as | 1.66 / 2.06 / 0.54 bn | 11.57 / 1.69 / 7.59 gu | 22.10 / 2.76 / 14.64 hi | 27.29 / 2.87 / 18.24 kn | 15.40 / 2.98 / 9.89 ml | 10.57 / 1.70 / 6.89 mr | 20.38 / 2.20 / 13.61 or | 19.26 / 2.10 / 12.85 pa | 14.87 / 1.35 / 10.00 ta | 18.52 / 2.88 / 12.10 te | 16.70 / 3.34 / 10.69 ## Citation If you use this model, please cite the following paper: ``` @inproceedings{Kumar2022IndicNLGSM, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, url = "https://arxiv.org/abs/2203.05437" } ```
danhsf/distilbert-base-uncased-finetuned-emotion
danhsf
2022-03-31T02:39:15Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-28T02:00:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.926557813198531 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2201 - Accuracy: 0.9265 - F1: 0.9266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8631 | 1.0 | 250 | 0.3221 | 0.904 | 0.9011 | | 0.254 | 2.0 | 500 | 0.2201 | 0.9265 | 0.9266 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
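A short usage sketch (not from the original card) with the standard text-classification pipeline; the example sentence is arbitrary, and the emitted label names depend on the saved config (they may appear as LABEL_0 … LABEL_5 rather than the emotion names).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="danhsf/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you this weekend!"))
# e.g. [{"label": ..., "score": ...}] over the six emotion-dataset classes
```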
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5
yy642
2022-03-31T02:22:21Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T20:09:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-mnli-rte-wnli-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-mnli-rte-wnli-5 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4400 - Accuracy: 0.9209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2253 | 1.0 | 16558 | 0.2346 | 0.9139 | | 0.1667 | 2.0 | 33116 | 0.2973 | 0.9143 | | 0.1207 | 3.0 | 49674 | 0.3361 | 0.9203 | | 0.0553 | 4.0 | 66232 | 0.4400 | 0.9209 | | 0.033 | 5.0 | 82790 | 0.5175 | 0.9203 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.0.0 - Tokenizers 0.11.6
michiyasunaga/BioLinkBERT-base
michiyasunaga
2022-03-31T00:51:21Z
6,225
36
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "exbert", "linkbert", "biolinkbert", "fill-mask", "question-answering", "text-classification", "token-classification", "en", "dataset:pubmed", "arxiv:2203.15827", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2022-03-08T07:22:12Z
--- license: apache-2.0 language: en datasets: - pubmed tags: - bert - exbert - linkbert - biolinkbert - feature-extraction - fill-mask - question-answering - text-classification - token-classification widget: - text: "Sunitinib is a tyrosine kinase inhibitor" --- ## BioLinkBERT-base BioLinkBERT-base model pretrained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts along with citation link information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT). This model achieves state-of-the-art performance on several biomedical NLP benchmarks such as [BLURB](https://microsoft.github.io/BLURB/) and [MedQA-USMLE](https://github.com/jind11/MedQA). ## Model description LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document. LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval). ## Intended uses & limitations The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). ### How to use To use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-base') model = AutoModel.from_pretrained('michiyasunaga/BioLinkBERT-base') inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases. ## Evaluation results When fine-tuned on downstream tasks, LinkBERT achieves the following results. **Biomedical benchmarks ([BLURB](https://microsoft.github.io/BLURB/), [MedQA](https://github.com/jind11/MedQA), [MMLU](https://github.com/hendrycks/test), etc.):** BioLinkBERT attains new state-of-the-art. 
| | BLURB score | PubMedQA | BioASQ | MedQA-USMLE | | ---------------------- | -------- | -------- | ------- | -------- | | PubmedBERT-base | 81.10 | 55.8 | 87.5 | 38.1 | | **BioLinkBERT-base** | **83.39** | **70.2** | **91.4** | **40.0** | | **BioLinkBERT-large** | **84.30** | **72.2** | **94.8** | **44.6** | | | MMLU-professional medicine | | ---------------------- | -------- | | GPT-3 (175B params) | 38.7 | | UnifiedQA (11B params) | 43.2 | | **BioLinkBERT-large (340M params)** | **50.7** | ## Citation If you find LinkBERT useful in your project, please cite the following: ```bibtex @InProceedings{yasunaga2022linkbert, author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang}, title = {LinkBERT: Pretraining Language Models with Document Links}, year = {2022}, booktitle = {Association for Computational Linguistics (ACL)}, } ```
michiyasunaga/LinkBERT-base
michiyasunaga
2022-03-31T00:38:32Z
847
7
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "exbert", "linkbert", "fill-mask", "question-answering", "text-classification", "token-classification", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:2203.15827", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2022-03-08T07:21:51Z
--- license: apache-2.0 language: en datasets: - wikipedia - bookcorpus tags: - bert - exbert - linkbert - feature-extraction - fill-mask - question-answering - text-classification - token-classification --- ## LinkBERT-base LinkBERT-base model pretrained on English Wikipedia articles along with hyperlink information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT). ## Model description LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document. LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval). ## Intended uses & limitations The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). ### How to use To use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-base') model = AutoModel.from_pretrained('michiyasunaga/LinkBERT-base') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases. ## Evaluation results When fine-tuned on downstream tasks, LinkBERT achieves the following results. **General benchmarks ([MRQA](https://github.com/mrqa/MRQA-Shared-Task-2019) and [GLUE](https://gluebenchmark.com/)):** | | HotpotQA | TriviaQA | SearchQA | NaturalQ | NewsQA | SQuAD | GLUE | | ---------------------- | -------- | -------- | -------- | -------- | ------ | ----- | -------- | | | F1 | F1 | F1 | F1 | F1 | F1 | Avg score | | BERT-base | 76.0 | 70.3 | 74.2 | 76.5 | 65.7 | 88.7 | 79.2 | | **LinkBERT-base** | **78.2** | **73.9** | **76.8** | **78.3** | **69.3** | **90.1** | **79.6** | | BERT-large | 78.1 | 73.7 | 78.3 | 79.0 | 70.9 | 91.1 | 80.7 | | **LinkBERT-large** | **80.8** | **78.2** | **80.5** | **81.0** | **72.6** | **92.7** | **81.1** | ## Citation If you find LinkBERT useful in your project, please cite the following: ```bibtex @InProceedings{yasunaga2022linkbert, author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang}, title = {LinkBERT: Pretraining Language Models with Document Links}, year = {2022}, booktitle = {Association for Computational Linguistics (ACL)}, } ```
hoangbinhmta99/wav2vec-NCKH-2022
hoangbinhmta99
2022-03-31T00:28:52Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "feature-extraction", "audio", "speech", "Transformer", "automatic-speech-recognition", "vi", "dataset:vivos", "dataset:common_voice", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-30T04:39:46Z
--- language: vi datasets: - vivos - common_voice metrics: - wer pipeline_tag: automatic-speech-recognition tags: - audio - speech - Transformer license: cc-by-nc-4.0 model-index: - name: Wav2vec2 NCKH Vietnamese 2022 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice vi type: common_voice args: vi metrics: - name: Test WER type: wer value: No --- Convert a fairseq .pt checkpoint into a transformers model. Link: https://huggingface.co/tommy19970714/wav2vec2-base-960h Bash: ```bash pip install transformers[sentencepiece] pip install fairseq -U git clone https://github.com/huggingface/transformers.git cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py . wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt -O ./wav2vec_small.pt mkdir dict wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt -O ./dict/dict.ltr.txt mkdir outputs python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./wav2vec_small.pt --dict_path ./dict/dict.ltr.txt --not_finetuned ``` # Install git-lfs and upload the model ``` curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/hoangbinhmta99/wav2vec-demo cd wav2vec-demo/ git config --global user.email [your email] git config --global user.name [your name] git status git add . git commit -m "First model version" git push ```
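Once the conversion script above has written the checkpoint to `./outputs`, it should load with the standard `transformers` wav2vec 2.0 classes. The sketch below is an assumption about the exported layout (feature extraction only, since the model was converted with `--not_finetuned` and therefore has no CTC head).

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Sketch: load the converted checkpoint from ./outputs and extract hidden states.
feature_extractor = Wav2Vec2FeatureExtractor()  # defaults assume 16 kHz mono input
model = Wav2Vec2Model.from_pretrained("./outputs")

waveform = torch.zeros(16_000)  # one second of silence as a placeholder signal
inputs = feature_extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```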
GleamEyeBeast/ascend_with_english
GleamEyeBeast
2022-03-30T23:35:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:timit_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-30T22:09:15Z
--- tags: - generated_from_trainer datasets: - timit_asr model-index: - name: ascend_with_english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ascend_with_english This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on the timit_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.3049 - Wer: 0.2251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 289 | 0.3524 | 0.3016 | | 0.4246 | 2.0 | 578 | 0.3132 | 0.2607 | | 0.4246 | 3.0 | 867 | 0.3044 | 0.2373 | | 0.2008 | 4.0 | 1156 | 0.3075 | 0.2302 | | 0.2008 | 5.0 | 1445 | 0.3049 | 0.2251 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
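No inference snippet is provided above, so here is a minimal sketch with the `automatic-speech-recognition` pipeline; `sample.wav` is a placeholder path, and a 16 kHz mono recording is assumed to match the TIMIT setup (decoding the file also requires ffmpeg).

```python
from transformers import pipeline

# Sketch: transcribe a local audio file with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="GleamEyeBeast/ascend_with_english",
)

print(asr("sample.wav")["text"])
```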
UBC-NLP/MARBERTv2
UBC-NLP
2022-03-30T21:52:31Z
3,124
8
transformers
[ "transformers", "pytorch", "tf", "bert", "fill-mask", "Arabic BERT", "MSA", "Twitter", "Masked Langauge Model", "ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - ar tags: - Arabic BERT - MSA - Twitter - Masked Langauge Model widget: - text: "اللغة العربية هي لغة [MASK]." --- <img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="30%" height="30%" align="right"/> **MARBERTv2** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://aclanthology.org/2021.acl-long.551.pdf)**. We find that results with ARBERT and MARBERT on QA are not competitive, a clear discrepancy from what we have observed thus far on other tasks. We hypothesize this is because the two models are pre-trained with a sequence length of only 128, which does not allow them to sufficiently capture both a question and its likely answer within the same sequence window during the pre-training. To rectify this, we further pre-train the stronger model, MARBERT, on the same MSA data as ARBERT in addition to the AraNews dataset, but with a bigger sequence length of 512 tokens for 40 epochs. We call this further pre-trained model **MARBERTv2**, noting it has **29B tokens**. MARBERTv2 acquires best performance on all but one test set, where XLM-R Large marginally outperforms us (only in F1). For more information, please visit our own GitHub [repo](https://github.com/UBC-NLP/marbert). # BibTex If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): ```bibtex @inproceedings{abdul-mageed-etal-2021-arbert, title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic", author = "Abdul-Mageed, Muhammad and Elmadany, AbdelRahim and Nagoudi, El Moatez Billah", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.551", doi = "10.18653/v1/2021.acl-long.551", pages = "7088--7105", abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). 
Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.", } ``` ## Acknowledgments We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](https://www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
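For quick experimentation, MARBERTv2 can be loaded as a regular fill-mask checkpoint; the sketch below reuses the widget sentence from this card and assumes the standard `transformers` pipeline API rather than an official example from the authors.

```python
from transformers import pipeline

# Sketch: masked-token prediction with MARBERTv2 on the widget example above.
fill_mask = pipeline("fill-mask", model="UBC-NLP/MARBERTv2")

for prediction in fill_mask("اللغة العربية هي لغة [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```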
mrm8488/biomedtra-small-es
mrm8488
2022-03-30T21:07:50Z
3
2
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "pretraining", "Spanish", "Electra", "Bio", "Medical", "es", "dataset:cowese", "arxiv:1406.2661", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: es tags: - Spanish - Electra - Bio - Medical datasets: - cowese --- ## 🦠 BIOMEDtra 🏥 **BIOMEDtra** (small) is an ELECTRA-like model (the discriminator, in this case) trained on the [Spanish Biomedical Crawled Corpus](https://zenodo.org/record/5510033#.Yhdk1ZHMLJx). As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB): **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ## Training details The model was trained using the Electra base code for 3 days on 1 GPU (Tesla V100 16GB). ## Dataset details The largest Spanish biomedical and health corpus to date was gathered with a massive Spanish health-domain crawler: more than 3,000 URLs were downloaded and preprocessed. The collected data were then processed to produce the **CoWeSe** (Corpus Web Salud Español) resource, a large-scale, high-quality corpus intended for biomedical and health NLP in Spanish. ## Model details ⚙ |Param| # Value| |-----|--------| |Layers| 12 | |Hidden | 256 | |Params| 14M | ## Evaluation metrics (for discriminator) 🧾 |Metric | # Score | |-------|---------| |Accuracy| 0.9561| |Precision| 0.808| |Recall | 0.531 | |AUC | 0.949| ## Benchmarks 🔨 WIP 🚧 ## How to use the discriminator in `transformers` ```py from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("mrm8488/biomedtra-small-es") tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/biomedtra-small-es") sentence = "Los españoles tienden a sufrir déficit de vitamina c" fake_sentence = "Los españoles tienden a déficit sufrir de vitamina c" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % prediction, end="") for prediction in predictions[0].tolist()] ``` ## Acknowledgments TBA ## Citation If you want to cite this model you can use this: ```bibtex @misc{mromero2022biomedtra, title={Spanish BioMedical Electra (small)}, author={Romero, Manuel}, publisher={Hugging Face}, journal={Hugging Face Hub}, howpublished={\url{https://huggingface.co/mrm8488/biomedtra-small-es}}, year={2022} } ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
vlsb/autotrain-security-texts-classification-roberta-688020754
vlsb
2022-03-30T20:55:42Z
15
2
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain", "unk", "dataset:vlsb/autotrain-data-security-texts-classification-roberta", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T20:52:41Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - vlsb/autotrain-data-security-texts-classification-roberta co2_eq_emissions: 3.1151249696839685 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 688020754 - CO2 Emissions (in grams): 3.1151249696839685 ## Validation Metrics - Loss: 0.2810373902320862 - Accuracy: 0.8928571428571429 - Precision: 0.9272727272727272 - Recall: 0.8869565217391304 - AUC: 0.9500805152979066 - F1: 0.9066666666666666 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-texts-classification-roberta-688020754 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-texts-classification-roberta-688020754", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-texts-classification-roberta-688020754", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
waboucay/camembert-base-finetuned-xnli_fr
waboucay
2022-03-30T17:47:05Z
5
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "nli", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-11T08:54:07Z
--- language: - fr tags: - nli metrics: - f1 --- ## Eval results We obtain the following results on ```validation``` and ```test``` sets: | Set | F1<sub>micro</sub> | F1<sub>macro</sub> | |------------|--------------------|--------------------| | validation | 89.2 | 87.6 | | test | 88.9 | 87.4 |
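The card does not show how to run the model; the sketch below assumes a standard sequence-classification head over premise/hypothesis pairs. The label names and their order are not stated above, so they are read from `model.config.id2label` rather than hard-coded.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "waboucay/camembert-base-finetuned-xnli_fr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Le film a reçu d'excellentes critiques."
hypothesis = "Le film a été bien accueilli."

# Encode the premise/hypothesis pair as a single sequence-pair input.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```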
horsbug98/Part_1_XLM_Model_E1
horsbug98
2022-03-30T17:13:01Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:tydiqa", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-16T18:22:10Z
--- license: mit tags: - generated_from_trainer datasets: - tydiqa model-index: - name: debug_xlm_task1_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # debug_xlm_task1_1 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa secondary_task dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 2.0.0 - Tokenizers 0.10.3
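Since usage details are left out above, here is a minimal extractive question-answering sketch with the `transformers` pipeline; the question/context pair is illustrative only.

```python
from transformers import pipeline

# Sketch: extractive QA with the TyDi QA fine-tuned checkpoint.
qa = pipeline("question-answering", model="horsbug98/Part_1_XLM_Model_E1")

result = qa(
    question="What task was the model fine-tuned on?",
    context="The checkpoint was fine-tuned on the TyDi QA secondary task in 2022.",
)
print(result["answer"], result["score"])
```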
manu/lilt-infoxlm-base
manu
2022-03-30T14:47:15Z
7
3
transformers
[ "transformers", "pytorch", "liltrobertalike", "fill-mask", "token-classification", "es", "fr", "ru", "en", "it", "dataset:iit-cdip", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-30T07:26:57Z
--- language: - es - fr - ru - en - it tags: - token-classification - fill-mask license: mit datasets: - iit-cdip --- This model is the pretrained infoxlm checkpoint from the paper "LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding". Original repository: https://github.com/jpWang/LiLT To use it, it is necessary to fork the modeling and configuration files from the original repository, and load the pretrained model from the corresponding classes (LiLTRobertaLikeConfig, LiLTRobertaLikeForRelationExtraction, LiLTRobertaLikeForTokenClassification, LiLTRobertaLikeModel). They can also be preloaded with the AutoConfig/model factories as such: ```python from transformers import AutoConfig, AutoModel, AutoModelForTokenClassification, AutoTokenizer from path_to_custom_classes import ( LiLTRobertaLikeConfig, LiLTRobertaLikeForRelationExtraction, LiLTRobertaLikeForTokenClassification, LiLTRobertaLikeModel ) def patch_transformers(): AutoConfig.register("liltrobertalike", LiLTRobertaLikeConfig) AutoModel.register(LiLTRobertaLikeConfig, LiLTRobertaLikeModel) AutoModelForTokenClassification.register(LiLTRobertaLikeConfig, LiLTRobertaLikeForTokenClassification) # etc... ``` To load the model, it is then possible to use: ```python # patch_transformers() must have been executed beforehand tokenizer = AutoTokenizer.from_pretrained("microsoft/infoxlm-base") model = AutoModel.from_pretrained("manu/lilt-infoxlm-base") model = AutoModelForTokenClassification.from_pretrained("manu/lilt-infoxlm-base") # to be fine-tuned on a token classification task ```
GioReg/ita1
GioReg
2022-03-30T14:42:06Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-28T20:17:13Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: ita1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ita1 This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5892 - Accuracy: 0.776 - F1: 0.5912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
abdusah/aradia-ctc-v1
abdusah
2022-03-30T13:48:41Z
23
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "abdusahmbzuai/arabic_speech_massive_300hrs", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-23T10:58:05Z
--- tags: - automatic-speech-recognition - abdusahmbzuai/arabic_speech_massive_300hrs - generated_from_trainer model-index: - name: aradia-ctc-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aradia-ctc-v1 This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.7171 - Wer: 0.3336 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.22 | 100 | 5.1889 | 1.0 | | No log | 0.43 | 200 | 3.1129 | 1.0 | | No log | 0.65 | 300 | 3.0503 | 1.0 | | No log | 0.87 | 400 | 3.0279 | 1.0 | | 6.2756 | 1.09 | 500 | 2.9965 | 1.0 | | 6.2756 | 1.3 | 600 | 2.3618 | 0.9993 | | 6.2756 | 1.52 | 700 | 1.2715 | 0.8758 | | 6.2756 | 1.74 | 800 | 0.9971 | 0.7156 | | 6.2756 | 1.96 | 900 | 0.8927 | 0.6382 | | 1.712 | 2.17 | 1000 | 0.8252 | 0.5926 | | 1.712 | 2.39 | 1100 | 0.7794 | 0.5434 | | 1.712 | 2.61 | 1200 | 0.7557 | 0.5092 | | 1.712 | 2.83 | 1300 | 0.7347 | 0.5203 | | 1.712 | 3.04 | 1400 | 0.7189 | 0.4929 | | 0.9305 | 3.26 | 1500 | 0.6820 | 0.4595 | | 0.9305 | 3.48 | 1600 | 0.6792 | 0.4504 | | 0.9305 | 3.69 | 1700 | 0.6596 | 0.4442 | | 0.9305 | 3.91 | 1800 | 0.6756 | 0.4432 | | 0.9305 | 4.13 | 1900 | 0.6663 | 0.4392 | | 0.737 | 4.35 | 2000 | 0.6479 | 0.4372 | | 0.737 | 4.56 | 2100 | 0.6353 | 0.4203 | | 0.737 | 4.78 | 2200 | 0.6251 | 0.4088 | | 0.737 | 5.0 | 2300 | 0.6209 | 0.4177 | | 0.737 | 5.22 | 2400 | 0.6639 | 0.4094 | | 0.6247 | 5.43 | 2500 | 0.6408 | 0.3970 | | 0.6247 | 5.65 | 2600 | 0.6373 | 0.3932 | | 0.6247 | 5.87 | 2700 | 0.6411 | 0.3928 | | 0.6247 | 6.09 | 2800 | 0.6378 | 0.3897 | | 0.6247 | 6.3 | 2900 | 0.6396 | 0.3929 | | 0.5443 | 6.52 | 3000 | 0.6544 | 0.3864 | | 0.5443 | 6.74 | 3100 | 0.6218 | 0.3786 | | 0.5443 | 6.96 | 3200 | 0.6200 | 0.3784 | | 0.5443 | 7.17 | 3300 | 0.6157 | 0.3791 | | 0.5443 | 7.39 | 3400 | 0.6317 | 0.3798 | | 0.4845 | 7.61 | 3500 | 0.6540 | 0.3771 | | 0.4845 | 7.83 | 3600 | 0.6436 | 0.3670 | | 0.4845 | 8.04 | 3700 | 0.6335 | 0.3695 | | 0.4845 | 8.26 | 3800 | 0.6579 | 0.3610 | | 0.4845 | 8.48 | 3900 | 0.6170 | 0.3613 | | 0.4279 | 8.69 | 4000 | 0.6523 | 0.3617 | | 0.4279 | 8.91 | 4100 | 0.6349 | 0.3577 | | 0.4279 | 9.13 | 4200 | 0.6344 | 0.3673 | | 0.4279 | 9.35 | 4300 | 0.6215 | 0.3641 | | 0.4279 | 9.56 | 4400 | 0.6513 | 0.3608 | | 0.3825 | 9.78 | 4500 | 0.6386 | 0.3605 | | 0.3825 | 10.0 | 4600 | 0.6724 | 0.3549 | | 0.3825 | 10.22 | 4700 | 0.6776 | 0.3602 | | 0.3825 | 10.43 | 4800 | 0.6739 | 0.3544 | | 0.3825 | 10.65 | 4900 | 0.6688 | 0.3557 | | 0.3477 | 10.87 | 5000 | 0.6674 | 0.3564 | | 0.3477 | 11.09 | 5100 | 0.6786 | 
0.3476 | | 0.3477 | 11.3 | 5200 | 0.6818 | 0.3478 | | 0.3477 | 11.52 | 5300 | 0.6874 | 0.3470 | | 0.3477 | 11.74 | 5400 | 0.6993 | 0.3424 | | 0.3101 | 11.96 | 5500 | 0.6950 | 0.3404 | | 0.3101 | 12.17 | 5600 | 0.6872 | 0.3406 | | 0.3101 | 12.39 | 5700 | 0.6846 | 0.3424 | | 0.3101 | 12.61 | 5800 | 0.7051 | 0.3405 | | 0.3101 | 12.83 | 5900 | 0.7051 | 0.3378 | | 0.2859 | 13.04 | 6000 | 0.6955 | 0.3403 | | 0.2859 | 13.26 | 6100 | 0.7115 | 0.3390 | | 0.2859 | 13.48 | 6200 | 0.7074 | 0.3384 | | 0.2859 | 13.69 | 6300 | 0.7002 | 0.3376 | | 0.2859 | 13.91 | 6400 | 0.7171 | 0.3360 | | 0.2714 | 14.13 | 6500 | 0.7193 | 0.3341 | | 0.2714 | 14.35 | 6600 | 0.7132 | 0.3347 | | 0.2714 | 14.56 | 6700 | 0.7184 | 0.3353 | | 0.2714 | 14.78 | 6800 | 0.7171 | 0.3331 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
shalpin87/dialoGPT-homer-simpson
shalpin87
2022-03-30T13:06:45Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "arxiv:1911.00536", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-29T20:28:40Z
--- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png tags: - conversational license: mit --- ## dialogGPT-homer-simpson This model has been fine tuned with the entire scripts of Homer Simpson from the T.V. show The Simpsons It will give some nice answers seemingly from Homers brain in the Simpsons Universe during single turn conversation, letting you chat to Homer Simpson ## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT) DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations. The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test. The model is trained on 147M multi-turn dialogue from Reddit discussion thread. * Multi-turn generation examples from an interactive environment: |Role | Response | |---------|--------| |User | Who are you? | | HomerBot | Homer Simpson .| |User | What is your favorite Restaurant ? | | HomerBot | Moes Tavern. | |User | Have you ever been in a band?! | | HomerBot | no. | Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT) ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536) ### How to use Multi-Turn #### NOTE: Multi-Turn seems to be broken, after a few exchanges the output will mostly be exclamation marks. Now we are ready to try out how the model works as a chatting partner! ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("shalpin87/dialoGPT-homer-simpson") model = AutoModelForCausalLM.from_pretrained("shalpin87/dialoGPT-homer-simpson") # Let's chat for 5 lines for step in range(5): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generated a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id) # pretty print last ouput tokens from bot print("DialoG-PT-HomerBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ``` ### How to use Single Turn ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("shalpin87/dialoGPT-homer-simpson") model = AutoModelForCausalLM.from_pretrained("shalpin87/dialoGPT-homer-simpson") questions = [ "What is your name?", "Who are you?", "Where do you work?", "Who really killed Mr Burns?", "Have you ever stolen from the Kwik-E-Mart?", "Did you kill Frank Grimes?", "Who was the worst member of the Be Sharps?", "Hey where did Barney go?", "What is your favorite bar to have a beer?", "What is the best beer in Springfield?", "Is Bart working for the Mob?", "I think there was an incident in sector 7 G", "Is Ned Flanders house okay?", "Oh my god it's Sideshow Bob", "What is a Flaming Moe?", "What is happening to Apu?", "Who quit the band?" 
] # Let's chat for step in range(len(questions)): print("***************************************") print("Q. {}".format(questions[step])) new_user_input_ids = tokenizer.encode(questions[step] + tokenizer.eos_token, return_tensors='pt') # new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') bot_input_ids = new_user_input_ids num_return_seqs=1 chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=200, top_p=0.55, temperature = 0.85, num_return_sequences=num_return_seqs ) botname = "HomerBot" for i in range(0,num_return_seqs): print("{}:{}: {}".format(i, botname, tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][i], skip_special_tokens=True))) chat_history_ids = [] ``` Sample Output ``` *************************************** Q. What is your name? 0:HomerBot: Homer Simpson. *************************************** Q. Who are you? 0:HomerBot: Homer Simpson. *************************************** Q. Where do you work? 0:HomerBot: I work at a nuclear power plant. *************************************** Q. Who really killed Mr Burns? 0:HomerBot: I dunno. Some guy named Ot-to. *************************************** Q. Have you ever stolen from the Kwik-E-Mart? 0:HomerBot: No. *************************************** Q. Did you kill Frank Grimes? 0:HomerBot: No. *************************************** Q. Who was the worst member of the Be Sharps? 0:HomerBot: Homer Simpson, sir. *************************************** Q. Hey where did Barney go? 0:HomerBot: He's been eating that thing for a week. *************************************** Q. What is your favorite bar to have a beer? 0:HomerBot: The Flanders' Club. *************************************** Q. What is the best beer in Springfield? 0:HomerBot: Pepi! *************************************** Q. Is Bart working for the Mob? 0:HomerBot: I think he's trying to tell us something. *************************************** Q. I think there was an incident in sector 7 G 0:HomerBot: Uh, let's not jump to any conclusions. The dog didn't run away. ```
yinde/dummy-model
yinde
2022-03-30T11:59:15Z
10
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T11:37:44Z
# Fake news classifier This model is a text classifier that detects fake news articles. It fine-tunes the distilbert-base-uncased-finetuned-sst-2-english pretrained model on the Fake and Real News dataset from Kaggle (https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset).
Peltarion/xlm-roberta-longformer-base-4096
Peltarion
2022-03-30T09:23:58Z
75
8
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "longformer", "multilingual", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- tags: - longformer language: multilingual license: apache-2.0 datasets: - wikitext --- ## XLM-R Longformer Model XLM-R Longformer is an XLM-R model that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus. The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respective language. The trained model came as a result of a master thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual question-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r). Since both the XLM-R and Longformer models are large, it is recommended to run them with NVIDIA Apex (16-bit precision), a large GPU, and several gradient accumulation steps. ## How to Use The model can be fine-tuned on a downstream task as usual, for instance on QA. ```python import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer MAX_SEQUENCE_LENGTH = 4096 MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096" tokenizer = AutoTokenizer.from_pretrained( MODEL_NAME_OR_PATH, max_length=MAX_SEQUENCE_LENGTH, padding="max_length", truncation=True, ) model = AutoModelForQuestionAnswering.from_pretrained( MODEL_NAME_OR_PATH, max_length=MAX_SEQUENCE_LENGTH, ) ``` ## Training Procedure The model has been trained on the WikiText-103 corpus, using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information. ```sh wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip unzip wikitext-103-raw-v1.zip export DATA_DIR=./wikitext-103-raw scripts/run_long_lm.py \ --model_name_or_path xlm-roberta-base \ --model_name xlm-roberta-to-longformer \ --output_dir ./output \ --logging_dir ./logs \ --val_file_path $DATA_DIR/wiki.valid.raw \ --train_file_path $DATA_DIR/wiki.train.raw \ --seed 42 \ --max_pos 4096 \ --adam_epsilon 1e-8 \ --warmup_steps 500 \ --learning_rate 3e-5 \ --weight_decay 0.01 \ --max_steps 6000 \ --evaluate_during_training \ --logging_steps 50 \ --eval_steps 50 \ --save_steps 6000 \ --max_grad_norm 1.0 \ --per_device_eval_batch_size 2 \ --per_device_train_batch_size 1 \ --gradient_accumulation_steps 64 \ --overwrite_output_dir \ --fp16 \ --do_train \ --do_eval ```
shrishail/t5_paraphrase_msrp_paws
shrishail
2022-03-30T05:47:27Z
38
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "paraphrase-generation", "text-generation", "Conditional Generation", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-29T13:13:11Z
--- language: "en" tags: - paraphrase-generation - text-generation - Conditional Generation inference: false --- # Simple model for Paraphrase Generation ## Model description T5-based model for generating paraphrased sentences. It is trained on the labeled [MSRP](https://www.microsoft.com/en-us/download/details.aspx?id=52398) and [Google PAWS](https://github.com/google-research-datasets/paws) datasets. ## How to use ```python import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained("shrishail/t5_paraphrase_msrp_paws") model = AutoModelForSeq2SeqLM.from_pretrained("shrishail/t5_paraphrase_msrp_paws").to(device) sentence = "This is something which i cannot understand at all" text = "paraphrase: " + sentence + " </s>" encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device) outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, do_sample=True, top_k=120, top_p=0.95, early_stopping=True, num_return_sequences=5 ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ```
nlp-waseda/gpt2-small-japanese
nlp-waseda
2022-03-30T04:28:17Z
26
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "ja", "dataset:wikipedia", "dataset:cc100", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-30T03:34:11Z
--- language: - ja license: cc-by-sa-4.0 datasets: - wikipedia - cc100 widget: - text: "早稲田 大学 で 自然 言語 処理 を" --- # nlp-waseda/gpt2-small-japanese This model is Japanese GPT-2 pretrained on Japanese Wikipedia and CC-100. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. Note that the texts should be segmented into words using Juman++ in advance. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='nlp-waseda/gpt2-small-japanese') >>> set_seed(42) >>> generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sample=True, pad_token_id=2, num_return_sequences=5) [{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 帰国 後 、 早稲田 大学 理工 学部 に 入学 し ます 。 卒業 後 、 早稲田 大学 工学 研究 科 、'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 アメリカ の 大学 で 学士 号 を 取得 、 修士 の 取得 で 博士 号 を 取得 。 2008 年'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 勉強 して い ます 。 学部 は 日本 語 学科 を 専攻 して い ます 。 英語 が 話せる と いう'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 して いた 。 2011 年 に 第 26 回 日本 化学 会 学生 委員 会 奨励 賞 ( 第 2 年次 審査'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 中心 と する 言語 学 研究 を 行って いる 。 東京 都 ・ 豊島 区 の お 見合い 相手 。'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import ReformerTokenizer, GPT2Model tokenizer = ReformerTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese') model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese') text = "早稲田 大学 で 自然 言語 処理 を" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training data The GPT-2 model was pretrained on Japanese Wikipedia, dumped on 2022-03-20, and the Japanese portion of CC-100. ## Training procedure ### Preprocessing The texts are normalized using zenhan, segmented into words using Juman++, and tokenized using SentencePiece. Juman++ 2.0.0-rc3 was used for pretraining. The model was trained on 8 NVIDIA A100 GPUs.
lazyturtl/roomidentifier
lazyturtl
2022-03-30T04:10:41Z
89
3
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-30T04:10:32Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: roomidentifier results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9375 --- # roomidentifier Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Bathroom ![Bathroom](images/Bathroom.jpg) #### Bedroom ![Bedroom](images/Bedroom.jpg) #### DinningRoom ![DinningRoom](images/DinningRoom.jpg) #### Kitchen ![Kitchen](images/Kitchen.jpg) #### LivingRoom ![LivingRoom](images/LivingRoom.jpg)
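A minimal inference sketch (not part of the autogenerated HuggingPics card) is given below; it assumes the standard `image-classification` pipeline, and `kitchen.jpg` is a placeholder path — a URL or a PIL image also works as input.

```python
from transformers import pipeline

# Sketch: classify a room photo with the HuggingPics ViT checkpoint.
classifier = pipeline("image-classification", model="lazyturtl/roomidentifier")

for prediction in classifier("kitchen.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```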
samayash/finetuning-financial-news-sentiment
samayash
2022-03-30T03:36:40Z
4
3
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T03:27:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-financial-news-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-financial-news-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3345 - Accuracy: 0.8751 - F1: 0.8751 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
aaraki/vit-base-patch16-224-in21k-finetuned-cifar10
aaraki
2022-03-30T01:41:47Z
8,239
10
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:cifar10", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-30T00:18:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cifar10 metrics: - accuracy model-index: - name: vit-base-patch16-224-in21k-finetuned-cifar10 results: - task: name: Image Classification type: image-classification dataset: name: cifar10 type: cifar10 args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9788 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-cifar10 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.2564 - Accuracy: 0.9788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4291 | 1.0 | 390 | 0.2564 | 0.9788 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
BigSalmon/InformalToFormalLincoln33
BigSalmon
2022-03-30T01:24:08Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-30T01:19:07Z
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln33") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln33") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. 
https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. 
*** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence.
cammiemw/bert-marco-hdct
cammiemw
2022-03-30T01:21:38Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T01:09:55Z
--- license: cc-by-nc-4.0 ---
DrishtiSharma/poem-gen-spanish-t5-small-v5
DrishtiSharma
2022-03-29T23:25:30Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-29T18:54:38Z
--- license: mit tags: - generated_from_trainer model-index: - name: poem-gen-spanish-t5-small-v5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poem-gen-spanish-t5-small-v5 This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000125 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.9366 | 0.73 | 30000 | 2.9656 | | 2.7518 | 1.46 | 60000 | 2.9120 | | 2.6018 | 2.19 | 90000 | 2.8870 | | 2.5262 | 2.93 | 120000 | 2.8646 | | 2.3886 | 3.66 | 150000 | 2.8816 | | 2.2758 | 4.39 | 180000 | 2.8900 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
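## Example usage (sketch)

The card above does not include inference code. The snippet below is a minimal, unverified sketch of loading the checkpoint with `transformers` and sampling a continuation; the prompt format expected by the underlying poem-generation model is not documented here, so the Spanish prompt is only a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "DrishtiSharma/poem-gen-spanish-t5-small-v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder prompt: the expected input format is not documented in this card.
prompt = "poema: la luna sobre el mar"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```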
BigSalmon/PointsOneSent
BigSalmon
2022-03-29T21:26:49Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-29T21:19:54Z
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsOneSent") model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsOneSent") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - ``` It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
efederici/sentence-it5-small
efederici
2022-03-29T17:29:14Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "feature-extraction", "sentence-similarity", "transformers", "it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-27T15:19:10Z
--- pipeline_tag: sentence-similarity language: - it tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-IT5-small This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-small)) small model trained for asymmetric semantic search. Query is a keyword, Paragraph is a short news article. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"] model = SentenceTransformer('efederici/sentence-IT5-small') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-small') model = AutoModel.from_pretrained('efederici/sentence-IT5-small') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
krinal214/augmented
krinal214
2022-03-29T16:58:16Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-29T15:02:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: augmented results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # augmented This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5104 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0609 | 1.0 | 9787 | 0.5104 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
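## Example usage (sketch)

No inference example is included above. A minimal sketch with the `transformers` question-answering pipeline is shown below; the question and context are arbitrary illustrations, and since the base model is multilingual BERT the inputs do not have to be English.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an extractive question-answering pipeline.
qa = pipeline("question-answering", model="krinal214/augmented")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result)  # dict with 'score', 'start', 'end' and 'answer'
```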
gabitoo1234/autotrain-mut_all_text-680820343
gabitoo1234
2022-03-29T16:09:31Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "es", "dataset:gabitoo1234/autotrain-data-mut_all_text", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T14:22:14Z
--- tags: autotrain language: es widget: - text: "I love AutoTrain 🤗" datasets: - gabitoo1234/autotrain-data-mut_all_text co2_eq_emissions: 115.48848403681228 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 680820343 - CO2 Emissions (in grams): 115.48848403681228 ## Validation Metrics - Loss: 0.3041240870952606 - Accuracy: 0.9462770369425126 - Macro F1: 0.7836898686625933 - Micro F1: 0.9462770369425126 - Weighted F1: 0.9449148298990091 - Macro Precision: 0.8344505891491089 - Micro Precision: 0.9462770369425126 - Weighted Precision: 0.9451247372908952 - Macro Recall: 0.7568785255994025 - Micro Recall: 0.9462770369425126 - Weighted Recall: 0.9462770369425126 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/gabitoo1234/autotrain-mut_all_text-680820343 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
tbosse/bert-base-german-cased-finetuned-subj_v1
tbosse
2022-03-29T15:59:49Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-29T14:22:30Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-german-cased-finetuned-subj_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-finetuned-subj_v1 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1594 - Precision: 0.1875 - Recall: 0.0077 - F1: 0.0147 - Accuracy: 0.9508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 136 | 0.1591 | 1.0 | 0.0051 | 0.0102 | 0.9523 | | No log | 2.0 | 272 | 0.1571 | 0.375 | 0.0077 | 0.015 | 0.9518 | | No log | 3.0 | 408 | 0.1594 | 0.1875 | 0.0077 | 0.0147 | 0.9508 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
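## Example usage (sketch)

The card does not show how to run the model. The snippet below is a minimal sketch using the token-classification pipeline; the German sentence is an arbitrary example and the label names emitted by this checkpoint are not documented here.

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="tbosse/bert-base-german-cased-finetuned-subj_v1",
    aggregation_strategy="simple",  # merge word pieces into whole words
)
print(tagger("Der Film war meiner Meinung nach wirklich großartig."))
```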
sayef/fsner-bert-base-uncased
sayef
2022-03-29T14:20:35Z
9
6
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2008.10570", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# FSNER Implemented by [sayef](https://huggingface.co/sayef). # Overview The FSNER model was proposed in [Example-Based Named Entity Recognition](https://arxiv.org/abs/2008.10570) by Morteza Ziyadi, Yuting Sun, Abhishek Goswami, Jade Huang, Weizhu Chen. To identify entity spans in a new domain, it uses a train-free few-shot learning approach inspired by question-answering. ## Abstract > We present a novel approach to named entity recognition (NER) in the presence of scarce data that we call example-based NER. Our train-free few-shot learning approach takes inspiration from question-answering to identify entity spans in a new and unseen domain. In comparison with the current state-of-the-art, the proposed method performs significantly better, especially when using a low number of support examples. ## Model Training Details | identifier | epochs | datasets | | ---------- |:------:|:-----------------------------------------------------------------------------------------------:| | [sayef/fsner-bert-base-uncased](https://huggingface.co/sayef/fsner-bert-base-uncased) | 25 | ontonotes5, conll2003, wnut2017, mit_movie_trivia, mit_restaurant and fin (Alvarado et al.). | ## Installation and Example Usage You can use the FSNER model in 3 ways: 1. Install directly from PyPI: `pip install fsner` and import the model as shown in the code example below or 2. Install from source: `python install .` and import the model as shown in the code example below or 3. Clone [repo](https://github.com/sayef/fsner) and add absolute path of `fsner/src` directory to your PYTHONPATH and import the model as shown in the code example below ```python import json from fsner import FSNERModel, FSNERTokenizerUtils, pretty_embed query_texts = [ "Does Luke's serve lunch?", "Chang does not speak Taiwanese very well.", "I like Berlin." ] # Each list in supports are the examples of one entity type # Wrap entities around with [E] and [/E] in the examples. # Each sentence should have only one pair of [E] ... [/E] support_texts = { "Restaurant": [ "What time does [E] Subway [/E] open for breakfast?", "Is there a [E] China Garden [/E] restaurant in newark?", "Does [E] Le Cirque [/E] have valet parking?", "Is there a [E] McDonalds [/E] on main street?", "Does [E] Mike's Diner [/E] offer huge portions and outdoor dining?" ], "Language": [ "Although I understood no [E] French [/E] in those days , I was prepared to spend the whole day with Chien - chien .", "like what the hell 's that called in [E] English [/E] ? I have to register to be here like since I 'm a foreigner .", "So , I 'm also working on an [E] English [/E] degree because that 's my real interest .", "Al - Jazeera TV station , established in November 1996 in Qatar , is an [E] Arabic - language [/E] news TV station broadcasting global news and reports nonstop around the clock .", "They think it 's far better for their children to be here improving their [E] English [/E] than sitting at home in front of a TV . \"", "The only solution seemed to be to have her learn [E] French [/E] .", "I have to read sixty pages of [E] Russian [/E] today ." 
] } device = 'cpu' tokenizer = FSNERTokenizerUtils("sayef/fsner-bert-base-uncased") queries = tokenizer.tokenize(query_texts).to(device) supports = tokenizer.tokenize(list(support_texts.values())).to(device) model = FSNERModel("sayef/fsner-bert-base-uncased") model.to(device) p_starts, p_ends = model.predict(queries, supports) # One can prepare supports once and reuse multiple times with different queries # ------------------------------------------------------------------------------ # start_token_embeddings, end_token_embeddings = model.prepare_supports(supports) # p_starts, p_ends = model.predict(queries, start_token_embeddings=start_token_embeddings, # end_token_embeddings=end_token_embeddings) output = tokenizer.extract_entity_from_scores(query_texts, queries, p_starts, p_ends, entity_keys=list(support_texts.keys()), thresh=0.50) print(json.dumps(output, indent=2)) # install displacy for pretty embed pretty_embed(query_texts, output, list(support_texts.keys())) ``` <!DOCTYPE html> <html lang="en"> <head> <title>displaCy</title> </head> <body style="font-size: 16px; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol'; padding: 4rem 2rem; direction: ltr"> <figure style="margin-bottom: 6rem"> <div class="entities" style="line-height: 2.5; direction: ltr"> <div class="entities" style="line-height: 2.5; direction: ltr">Does <mark class="entity" style="background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Luke's <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Restaurant</span> </mark> serve lunch?</div> <div class="entities" style="line-height: 2.5; direction: ltr">Chang does not speak <mark class="entity" style="background: #bfeeb7; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Taiwanese <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Language</span> </mark> very well.</div> <div class="entities" style="line-height: 2.5; direction: ltr">I like Berlin.</div> </div> </figure> </body> </html> ## Datasets preparation 1. We need to convert dataset into the following format. Let's say we have a dataset file train.json like following. 2. Each list in supports are the examples of one entity type 3. Wrap entities around with [E] and [/E] in the examples. 4. Each example should have only one pair of [E] ... [/E]. ```json { "CARDINAL_NUMBER": [ "Washington , cloudy , [E] 2 [/E] to 6 degrees .", "New Dehli , sunny , [E] 6 [/E] to 19 degrees .", "Well this is number [E] two [/E] .", "....." ], "LANGUAGE": [ "They do n't have the Quicken [E] Dutch [/E] version ?", "they learned a lot of [E] German [/E] .", "and then [E] Dutch [/E] it 's Mifrau", "...." ], "MONEY": [ "Per capita personal income ranged from $ [E] 11,116 [/E] in Mississippi to $ 23,059 in Connecticut ... .", "The trade surplus was [E] 582 million US dollars [/E] .", "It settled with a loss of 4.95 cents at $ [E] 1.3210 [/E] a pound .", "...." ] } ``` 2. Converted ontonotes5 dataset can be found here: 1. [train](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.train.json) 2. 
[dev](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.dev.json) 3. Then trainer script can be used to train/evaluate your fsner model. ```bash fsner trainer --pretrained-model bert-base-uncased --mode train --train-data train.json --val-data val.json \ --train-batch-size 6 --val-batch-size 6 --n-examples-per-entity 10 --neg-example-batch-ratio 1/3 --max-epochs 25 --device gpu \ --gpus -1 --strategy ddp ```
maretamasaeva/roberta-finetuned-freeform
maretamasaeva
2022-03-29T14:19:27Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-finetuned-freeform results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-freeform This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6989 - Accuracy: 0.4668 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6919 | 1.0 | 8094 | 0.6910 | 0.4668 | | 0.6912 | 2.0 | 16188 | 0.6934 | 0.4668 | | 0.6904 | 3.0 | 24282 | 0.6976 | 0.4668 | | 0.6918 | 4.0 | 32376 | 0.6989 | 0.4668 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
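## Example usage (sketch)

No usage code is given above. A minimal sketch with the text-classification pipeline follows; the input sentence is only an illustration and the meaning of the predicted labels is not documented in this card.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="maretamasaeva/roberta-finetuned-freeform")
print(classifier("This is an example input sentence."))  # [{'label': ..., 'score': ...}]
```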
gayanin/bart-med-term-conditional-masking-0
gayanin
2022-03-29T12:03:56Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-28T22:12:30Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-med-term-conditional-masking-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-med-term-conditional-masking-0 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5041 - Rouge2 Precision: 0.7497 - Rouge2 Recall: 0.5246 - Rouge2 Fmeasure: 0.5986 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.6381 | 1.0 | 13915 | 0.5595 | 0.734 | 0.5152 | 0.5873 | | 0.5429 | 2.0 | 27830 | 0.5243 | 0.7441 | 0.5225 | 0.5956 | | 0.5002 | 3.0 | 41745 | 0.5078 | 0.7482 | 0.5238 | 0.5976 | | 0.4607 | 4.0 | 55660 | 0.5041 | 0.7497 | 0.5246 | 0.5986 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
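## Example usage (sketch)

The card reports ROUGE scores but no inference code. Below is a minimal, unverified sketch of running the checkpoint as a sequence-to-sequence model; the expected input format (in particular whether medical terms should be replaced by BART's `<mask>` token) is not documented here, so the masked sentence is only an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "gayanin/bart-med-term-conditional-masking-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: a sentence with a masked medical term.
text = "The patient was diagnosed with <mask> after the blood test."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```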
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms
scasutt
2022-03-29T11:29:52Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-28T18:54:42Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5945 - Wer: 0.4929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4049 | 1.05 | 250 | 3.3497 | 1.0 | | 3.0851 | 2.1 | 500 | 3.4440 | 1.0 | | 2.3512 | 3.15 | 750 | 1.5938 | 0.9317 | | 1.1762 | 4.2 | 1000 | 0.8481 | 0.7333 | | 0.903 | 5.25 | 1250 | 0.7180 | 0.6484 | | 0.6754 | 6.3 | 1500 | 0.6603 | 0.6044 | | 0.5961 | 7.35 | 1750 | 0.6410 | 0.5778 | | 0.5325 | 8.4 | 2000 | 0.6245 | 0.5545 | | 0.4685 | 9.45 | 2250 | 0.5925 | 0.5359 | | 0.4526 | 10.5 | 2500 | 0.5991 | 0.5345 | | 0.3975 | 11.55 | 2750 | 0.5916 | 0.5228 | | 0.3672 | 12.6 | 3000 | 0.5882 | 0.5037 | | 0.3774 | 13.65 | 3250 | 0.5693 | 0.5028 | | 0.3489 | 14.7 | 3500 | 0.5645 | 0.5018 | | 0.3593 | 15.75 | 3750 | 0.5977 | 0.5043 | | 0.3167 | 16.81 | 4000 | 0.6049 | 0.5018 | | 0.3225 | 17.86 | 4250 | 0.6172 | 0.4921 | | 0.2807 | 18.91 | 4500 | 0.5937 | 0.4923 | | 0.2889 | 19.96 | 4750 | 0.5945 | 0.4929 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
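## Example usage (sketch)

The card above only documents training. For completeness, here is a minimal transcription sketch; it uses a small public English test clip purely as a smoke test, since the language this checkpoint was fine-tuned on is not stated in the card.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Any 16 kHz mono waveform works; this dummy dataset is just a convenient sample.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
speech = ds[0]["audio"]["array"]

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```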
Rishav-hub/xlm-roberta-base-finetuned-panx-de
Rishav-hub
2022-03-29T11:05:37Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-29T10:26:12Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8591260810195721 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.257 | 1.0 | 525 | 0.1512 | 0.8302 | | 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 | | 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
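## Example usage (sketch)

The card does not include inference code. A minimal sketch with the token-classification pipeline is shown below; the German sentence is an arbitrary example, and the assumption that the entity labels follow the PAN-X/WikiANN convention (PER, ORG, LOC) is based only on the dataset named above.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Rishav-hub/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```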
beston91/gpt2-xl_ft_logits_5k_experiment
beston91
2022-03-29T10:27:12Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-29T03:13:26Z
--- tags: - generated_from_trainer model-index: - name: gpt2-xl_ft_logits_5k_experiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-xl_ft_logits_5k_experiment This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.8601 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100.0 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.9 | 7 | 6.1556 | | No log | 1.9 | 14 | 6.3365 | | No log | 2.9 | 21 | 6.5909 | | No log | 3.9 | 28 | 6.8601 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6 ### Perplexity Score: 17.589759826660156
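## Example usage (sketch)

For reference, a minimal generation sketch is shown below. Note that this is a GPT-2 XL checkpoint (roughly 1.5B parameters), so it needs several gigabytes of memory; the prompt is an arbitrary example.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="beston91/gpt2-xl_ft_logits_5k_experiment")
out = generator("The referee blew the whistle and", max_length=50, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```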
KeithHorgan/TweetClimateAnalysis
KeithHorgan
2022-03-29T10:01:24Z
4
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain", "unk", "dataset:KeithHorgan98/autotrain-data-TweetClimateAnalysis", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T10:16:42Z
--- tags: autotrain language: unk widget: - text: "Climate Change is a hoax" - text: "It is freezing, where is global warming" datasets: - KeithHorgan98/autotrain-data-TweetClimateAnalysis co2_eq_emissions: 133.19491276284793 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 678720226 - CO2 Emissions (in grams): 133.19491276284793 ## Validation Metrics - Loss: 0.4864234924316406 - Accuracy: 0.865424430641822 - Macro F1: 0.7665472174344069 - Micro F1: 0.8654244306418221 - Weighted F1: 0.8586375445115083 - Macro Precision: 0.8281449061702826 - Micro Precision: 0.865424430641822 - Weighted Precision: 0.8619727477790186 - Macro Recall: 0.736576343905098 - Micro Recall: 0.865424430641822 - Weighted Recall: 0.865424430641822 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/KeithHorgan98/autotrain-TweetClimateAnalysis-678720226 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
ai4bharat/MultiIndicWikiBioUnified
ai4bharat
2022-03-29T09:25:58Z
5
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "wikibio", "multilingual", "nlp", "indicnlp", "as", "bn", "hi", "kn", "ml", "or", "pa", "ta", "te", "dataset:ai4bharat/IndicWikiBio", "arxiv:2203.05437", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-16T11:35:33Z
--- tags: - wikibio - multilingual - nlp - indicnlp datasets: - ai4bharat/IndicWikiBio language: - as - bn - hi - kn - ml - or - pa - ta - te licenses: - cc-by-nc-4.0 widget: - <TAG> name </TAG> नवतेज भारती <TAG> image </TAG> NavtejBharati . jpg <TAG> birth name </TAG> नवतेज <TAG> birth date </TAG> 1938 <TAG> birth place </TAG> रोडे , भारतीय पंजाब , भारत । पंजाब <TAG> occupation </TAG> लेखक , कवि <TAG> nationality </TAG> कैनेडा । कैनेडियन <TAG> ethnicity </TAG> पंजाबी लोक । पंजाबी </s> <2hi> --- # MultiIndicWikiBioUnified MultiIndicWikiBioUnified is a multilingual, sequence-to-sequence pre-trained model, a [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint fine-tuned on the 9 languages of [IndicWikiBio](https://huggingface.co/datasets/ai4bharat/IndicWikiBio) dataset. For fine-tuning details, see the [paper](https://arxiv.org/abs/2203.05437). You can use MultiIndicWikiBio to build biography generation applications for Indian languages by fine-tuning the model with supervised training data. Some salient features of the MultiIndicWikiBio are: <ul> <li >Supported languages: Assamese, Bengali, Hindi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li> <li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for fine-tuning and decoding. </li> <li> Fine-tuned on an Indic language corpora (34,653 examples). </li> <li> All languages have been represented in Devanagari script to encourage transfer learning among the related languages. </li> </ul> You can read more about MultiIndicWikiBioUnified in this <a href="https://arxiv.org/abs/2203.05437">paper</a>. ## Using this model in `transformers` ``` from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM from transformers import AlbertTokenizer, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True) # Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True) model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioUnified") # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioUnified") # Some initial mapping bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>") eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>") pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>") # To get lang_id use any of ['<2as>', '<2bn>', '<2hi>', '<2kn>', '<2ml>', '<2or>', '<2pa>', '<2ta>', '<2te>'] # First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>". inp = tokenizer("<TAG> name </TAG> भीखा लाल <TAG> office </TAG> विधायक - 318 - हसनगंज विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1957 से 1962 <TAG> nationality </TAG> भारतीय</s><2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids out = tokenizer("<2hi> भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:]) # For loss model_outputs.loss ## This is not label smoothed. # For logits model_outputs.logits # For generation. Pardon the messiness. 
Note the decoder_start_token_id. model.eval() # Set dropouts to zero model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>")) # Decode to get output strings decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) print(decoded_output) # भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। # Disclaimer Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the [Indic NLP Library](https://github.com/AI4Bharat/indic-bart/blob/main/indic_scriptmap.py). ``` # Note: If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script. ## Benchmarks Scores on the `IndicWikiBio` test sets are as follows: Language | RougeL ---------|---------------------------- as | 56.28 bn | 57.42 hi | 67.48 kn | 40.01 ml | 38.84 or | 67.13 pa | 52.88 ta | 51.82 te | 51.43 ## Citation If you use this model, please cite the following paper: ``` @inproceedings{Kumar2022IndicNLGSM, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, url = "https://arxiv.org/abs/2203.05437" } ``` # License The model is available under the MIT License.
Davlan/m2m100_418M-yor-eng-mt
Davlan
2022-03-29T09:21:03Z
5
0
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "arxiv:2103.08647", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-yor-eng-mt

## Model description
**m2m100_418M-yor-eng-mt** is a **machine translation** model from Yorùbá to English based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English. Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).

#### Limitations and bias
This model is limited by its training dataset and may not generalize well to all use cases in different domains.

## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.

## Training procedure
This model was trained on an NVIDIA V100 GPU.

## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **16.76 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 15.57.

### BibTeX entry and citation info
By David Adelani
```
```
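## Example usage (sketch)

The card does not show inference code. A minimal sketch with the M2M100 classes from `transformers` follows; the Yorùbá sentence is only a placeholder, and the `yo`/`en` language codes are taken from the base facebook/m2m100_418M checkpoint.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "Davlan/m2m100_418M-yor-eng-mt"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "yo"  # source language: Yorùbá
text = "Ẹ káàárọ̀, báwo ni o ṣe wà?"  # placeholder Yorùbá sentence
inputs = tokenizer(text, return_tensors="pt")

# Force the decoder to start with the English language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```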
PereLluis13/Wav2Vec2-Large-XLSR-53-catalan
PereLluis13
2022-03-29T08:51:28Z
6942
2
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ca", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: ca datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Catalan XLSR Wav2Vec Large 53 #TODO: replace {human_readable_name} with a name of your model as it should appear on the leaderboard. It could be something like `Elgeish XLSR Wav2Vec2 Large 53` results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ca type: common_voice args: ca #TODO: metrics: - name: Test WER type: wer value: 8.11 --- # Disclaimer This model was trained on Common Voice 6, if you need a catalan model for ASR, I recommend checking [wav2vec2-xls-r-1b-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-1b-ca-lm) which is a 1b model with a LM on top trained on CV8+ with much better performance or [wav2vec2-xls-r-300m-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) which has the same size (300m) as this model but trained on CV8+ and the same LM. # Wav2Vec2-Large-XLSR-53-ca Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ca", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the catalan test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ca", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) import jiwer # Chunk WER computation due to memory issues, taken from https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000))) ``` **Test Result**: 8.11 % ## Training The Common Voice `train`, `validation` datasets were used for training. At the second epoch training was halted due to a memory issue, and was continued with lower batch size, but acc. gradient steps were scaled to keep it at 32 batch size during all training. Then the model was trained for an additional 10 epochs where half the male samples were pitched up. The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). Slight modifications were done in order to speed up the ordering by length during training, which can be found [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6). Another version trained for catalan can be found [here](https://huggingface.co/ccoreilly/wav2vec2-large-xlsr-catala), which may be better than this one since it was trained with extra data and for longer time. Whoever, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset.
PereLluis13/wav2vec2-xls-r-1b-ca
PereLluis13
2022-03-29T08:44:49Z
17
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ca license: apache-2.0 tags: - automatic-speech-recognition - collectivat/tv3_parla - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - projecte-aina/parlament_parla - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 - collectivat/tv3_parla - projecte-aina/parlament_parla model-index: - name: wav2vec2-xls-r-1b-ca results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_8_0 ca type: mozilla-foundation/common_voice_8_0 args: ca metrics: - name: Test WER type: wer value: 11.030639657300516 - name: Test CER type: cer value: 2.8405630530040634 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: projecte-aina/parlament_parla ca type: projecte-aina/parlament_parla args: clean metrics: - name: Test WER type: wer value: 6.483115660665961 - name: Test CER type: cer value: 2.0212863746191828 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: collectivat/tv3_parla ca type: collectivat/tv3_parla args: ca metrics: - name: Test WER type: wer value: 17.917773414943988 - name: Test CER type: cer value: 8.872589572206396 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Catalan Dev Data type: speech-recognition-community-v2/dev_data args: ca metrics: - name: Test WER type: wer value: 27.126683954209097 - name: Test CER type: cer value: 14.213308815078726 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ca metrics: - name: Test WER type: wer value: 18.7 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-1b-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
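## Example usage (sketch)

The card does not include inference code. Below is a minimal transcription sketch; `audio.wav` is a placeholder path to any Catalan recording, and the waveform is resampled to the 16 kHz rate the model expects.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "PereLluis13/wav2vec2-xls-r-1b-ca"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a local recording (placeholder path), downmix to mono and resample to 16 kHz.
speech, sr = torchaudio.load("audio.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).mean(dim=0).numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```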
PereLluis13/wav2vec2-xls-r-300m-ca
PereLluis13
2022-03-29T08:43:53Z
52
2
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ca license: apache-2.0 tags: - automatic-speech-recognition - collectivat/tv3_parla - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - projecte-aina/parlament_parla - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 - collectivat/tv3_parla - projecte-aina/parlament_parla model-index: - name: wav2vec2-xls-r-300m-ca results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_8_0 ca type: mozilla-foundation/common_voice_8_0 args: ca metrics: - name: Test WER type: wer value: 13.170091241317552 - name: Test CER type: cer value: 3.356726205534543 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: projecte-aina/parlament_parla ca type: projecte-aina/parlament_parla args: clean metrics: - name: Test WER type: wer value: 8.048005647723261 - name: Test CER type: cer value: 2.240912911020065 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: collectivat/tv3_parla ca type: collectivat/tv3_parla args: ca metrics: - name: Test WER type: wer value: 23.320629787889285 - name: Test CER type: cer value: 10.439216202089989 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: speech-recognition-community-v2/dev_data ca type: speech-recognition-community-v2/dev_data args: ca metrics: - name: Test WER type: wer value: 31.99671115046487 - name: Test CER type: cer value: 15.820020687277325 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ca metrics: - name: Test WER type: wer value: 22.04 --- # wav2vec2-xls-r-300m-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. It achieves the following results on the evaluation set (for the three datasets): - Loss: 0.2472 - Wer: 0.1499 ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data More information needed ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 18.0 - mixed_precision_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.2099 | 0.09 | 500 | 3.4125 | 1.0 | | 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 | | 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 | | 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 | | 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 | | 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 | | 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 | | 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 | | 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 | | 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 | | 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 | | 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 | | 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 | | 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 | | 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 | | 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 | | 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 | | 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 | | 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 | | 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 | | 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 | | 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 | | 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 | | 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 | | 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 | | 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 | | 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 | | 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 | | 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 | | 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 | | 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 | | 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 | | 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 | | 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 | | 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 | | 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 | | 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 | | 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 | | 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 | | 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 | | 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 | | 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 | | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 | | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 | | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 | | 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 | | 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 | | 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 | | 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 | | 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 | | 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 | | 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 | | 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 | | 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 | | 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 | | 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 | | 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 | | 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 | | 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 | | 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 | | 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 | | 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 
0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
PereLluis13/wav2vec2-xls-r-300m-ca-lm
PereLluis13
2022-03-29T08:42:55Z
20
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-300m-ca-lm
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_8_0 ca
      type: mozilla-foundation/common_voice_8_0
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 6.771703090587865
    - name: Test CER
      type: cer
      value: 2.1007777843712293
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: projecte-aina/parlament_parla ca
      type: projecte-aina/parlament_parla
      args: clean
    metrics:
    - name: Test WER
      type: wer
      value: 5.565360630662431
    - name: Test CER
      type: cer
      value: 1.8594390167034354
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: collectivat/tv3_parla ca
      type: collectivat/tv3_parla
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 13.53312545713516
    - name: Test CER
      type: cer
      value: 8.684635913340556
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Catalan Dev Data
      type: speech-recognition-community-v2/dev_data
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 26.04515843400164
    - name: Test CER
      type: cer
      value: 15.056890012642224
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 17.68
---

# wav2vec2-xls-r-300m-ca-lm

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
It achieves the following results on the evaluation set (for the three datasets and without the LM):
- Loss: 0.2472
- Wer: 0.1499

## Model description

Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) model card. This is just a fine-tuned version of that model.

## Intended uses & limitations

As with any model trained on crowdsourced data, this model can show the biases and particularities of the data used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the Catalan language.

## Training and evaluation data

More information needed

## Training procedure

The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).
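As an illustration of this cleanup step, here is a minimal sketch assuming a simple regex-based filter; the character set and helper name are illustrative, and the number verbalization is handled by the linked `numbers_ca.py` script rather than reproduced here.

```python
import re
import unicodedata

# Illustrative character set: the Catalan alphabet with accented vowels, ç,
# the l·l middle dot, apostrophe, hyphen and space. Not the exact training script.
ALLOWED = r"[^a-zàáèéíïòóúüç·'\- ]"

def clean_transcript(text: str) -> str:
    text = unicodedata.normalize("NFC", text.lower())
    text = re.sub(ALLOWED, " ", text)          # drop characters outside the alphabet
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

print(clean_transcript("El cafè de l'avi, núm. 3"))  # -> "el cafè de l'avi núm"
```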
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 18.0 - mixed_precision_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.2099 | 0.09 | 500 | 3.4125 | 1.0 | | 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 | | 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 | | 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 | | 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 | | 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 | | 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 | | 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 | | 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 | | 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 | | 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 | | 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 | | 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 | | 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 | | 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 | | 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 | | 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 | | 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 | | 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 | | 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 | | 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 | | 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 | | 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 | | 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 | | 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 | | 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 | | 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 | | 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 | | 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 | | 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 | | 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 | | 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 | | 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 | | 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 | | 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 | | 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 | | 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 | | 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 | | 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 | | 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 | | 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 | | 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 | | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 | | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 | | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 | | 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 | | 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 | | 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 | | 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 | | 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 | | 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 | | 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 | | 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 | | 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 | | 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 | | 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 | | 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 | | 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 | | 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 | | 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 | | 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 | | 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 
0.11.0

# Thanks

We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who contributed their own resources and knowledge to making this model possible.
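For reference, the hyperparameters listed in the training section above correspond to a standard Hugging Face `TrainingArguments` configuration (the card carries the `generated_from_trainer` tag); the sketch below is an assumed mapping, with the output directory as a placeholder.

```python
from transformers import TrainingArguments

# Assumed mapping of the card's hyperparameters onto TrainingArguments;
# output_dir is a placeholder and the actual training script is not reproduced here.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-ca",      # placeholder
    learning_rate=7.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,            # effective batch size 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=18.0,
    fp16=True,                                # "Native AMP" mixed precision
)
```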
PereLluis13/wav2vec2-xls-r-1b-ca-lm
PereLluis13
2022-03-29T08:41:46Z
3,126
4
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-1b-ca-lm
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_8_0 ca
      type: mozilla-foundation/common_voice_8_0
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 6.0722669958130644
    - name: Test CER
      type: cer
      value: 1.9180697705166526
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: projecte-aina/parlament_parla ca
      type: projecte-aina/parlament_parla
      args: clean
    metrics:
    - name: Test WER
      type: wer
      value: 5.139820371024042
    - name: Test CER
      type: cer
      value: 2.0163620128164722
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: collectivat/tv3_parla ca
      type: collectivat/tv3_parla
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 11.207991684952073
    - name: Test CER
      type: cer
      value: 7.32119307305963
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Catalan Dev Data
      type: speech-recognition-community-v2/dev_data
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 22.870153690468661
    - name: Test CER
      type: cer
      value: 13.59039190897598
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 15.41
---

# wav2vec2-xls-r-1b-ca-lm

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.

## Model description

Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) model card. This is just a fine-tuned version of that model.

## Intended uses & limitations

As with any model trained on crowdsourced data, this model can show the biases and particularities of the data used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the Catalan language.

## Training and evaluation data

## Training procedure

The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).

### Training results

Check the Tensorboard tab for the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0

# Thanks

We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who contributed their own resources and knowledge to making this model possible.
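The card does not include an inference snippet; a hedged usage sketch with the high-level pipeline API is shown below. LM-boosted decoding assumes the `pyctcdecode` and `kenlm` packages are installed, and the audio path is a placeholder.

```python
from transformers import pipeline

# Hedged usage sketch: decoding with the bundled n-gram LM requires the
# pyctcdecode and kenlm packages; "audio_ca.wav" is a placeholder file.
asr = pipeline("automatic-speech-recognition", model="PereLluis13/wav2vec2-xls-r-1b-ca-lm")
print(asr("audio_ca.wav", chunk_length_s=10)["text"])
```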
jorge-henao/gpt2-small-spanish-disco-poetry-15
jorge-henao
2022-03-29T05:17:49Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-29T04:20:26Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: gpt2-small-spanish-disco-poetry-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-small-spanish-disco-poetry-15 This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.2465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
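A hedged usage sketch for the fine-tuned model is shown below; the prompt and sampling settings are illustrative and not taken from the card.

```python
from transformers import pipeline

# Illustrative prompt and sampling settings; adjust to taste.
generator = pipeline("text-generation", model="jorge-henao/gpt2-small-spanish-disco-poetry-15")
print(generator("La noche cae sobre el mar", max_new_tokens=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```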
rampasek/prot_bert_bfd_rosetta204060aa
rampasek
2022-03-29T04:35:10Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "protein language model", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T04:02:40Z
---
language: protein
tags:
- protein language model
datasets:
- BFD
- Custom Rosetta
---

# ProtBert-BFD fine-tuned on the Rosetta 20,40,60AA dataset

This model is fine-tuned to predict Rosetta fold energy using a dataset of 300k protein sequences: 100k of 20AA, 100k of 40AA, and 100k of 60AA.

Current model in this repo: `prot_bert_bfd-finetuned-032822_1323`

## Performance

- 20AA sequences (1k eval set):\
  Metrics: 'mae': 0.100418, 'r2': 0.989028, 'mse': 0.016266, 'rmse': 0.127537
- 40AA sequences (10k eval set):\
  Metrics: 'mae': 0.173888, 'r2': 0.963361, 'mse': 0.048218, 'rmse': 0.219587
- 60AA sequences (10k eval set):\
  Metrics: 'mae': 0.235238, 'r2': 0.930164, 'mse': 0.088131, 'rmse': 0.2968

## `prot_bert_bfd` from ProtTrans

The starting pretrained model is from ProtTrans and was trained on 2.1 billion protein sequences from BFD using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans).

> Created by [Ladislav Rampasek](https://rampasek.github.io)
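A hedged inference sketch is shown below: ProtBert-style tokenizers expect amino acids separated by spaces, and the fine-tuned head is assumed to be a single-output regression head; the example sequence is arbitrary.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "rampasek/prot_bert_bfd_rosetta204060aa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# ProtBert-style models take space-separated amino acids as input.
sequence = " ".join("MKTAYIAKQRQISFVKSHFSRQLEER")  # arbitrary example sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    energy = model(**inputs).logits.squeeze().item()  # assumed single regression output
print(f"Predicted Rosetta fold energy: {energy:.3f}")
```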
rampasek/prot_bert_bfd_rosetta20aa
rampasek
2022-03-29T04:33:02Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "protein language model", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-28T04:13:53Z
---
language: protein
tags:
- protein language model
datasets:
- BFD
- Custom Rosetta
---

# ProtBert-BFD fine-tuned on the Rosetta 20AA dataset

This model is fine-tuned to predict Rosetta fold energy using a dataset of 100k 20AA sequences.

Current model in this repo: `prot_bert_bfd-finetuned-032722_1752`

## Performance

- 20AA sequences (1k eval set):\
  Metrics: 'mae': 0.090115, 'r2': 0.991208, 'mse': 0.013034, 'rmse': 0.114165
- 40AA sequences (10k eval set):\
  Metrics: 'mae': 0.537456, 'r2': 0.659122, 'mse': 0.448607, 'rmse': 0.669781
- 60AA sequences (10k eval set):\
  Metrics: 'mae': 0.629267, 'r2': 0.506747, 'mse': 0.622476, 'rmse': 0.788972

## `prot_bert_bfd` from ProtTrans

The starting pretrained model is from ProtTrans and was trained on 2.1 billion protein sequences from BFD using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans).

> Created by [Ladislav Rampasek](https://rampasek.github.io)
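For reference, the quoted metrics are the standard regression metrics; below is a sketch of how they are typically computed with scikit-learn, assuming placeholder arrays of reference energies and model predictions.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Placeholder arrays standing in for Rosetta reference energies and model predictions.
y_true = np.array([-12.3, -8.7, -15.1, -9.9])
y_pred = np.array([-12.0, -9.1, -14.6, -10.2])

mse = mean_squared_error(y_true, y_pred)
print({
    "mae": mean_absolute_error(y_true, y_pred),
    "r2": r2_score(y_true, y_pred),
    "mse": mse,
    "rmse": float(np.sqrt(mse)),
})
```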
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9
DrishtiSharma
2022-03-29T00:52:52Z
5
2
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-29T00:13:34Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-sentiment-mesd-v9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-sentiment-mesd-v9 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3500 - Accuracy: 0.9154 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 40 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.86 | 3 | 1.7825 | 0.1846 | | 1.9553 | 1.86 | 6 | 1.7212 | 0.4308 | | 1.9553 | 2.86 | 9 | 1.6164 | 0.3769 | | 2.002 | 3.86 | 12 | 1.4904 | 0.3769 | | 1.6191 | 4.86 | 15 | 1.4426 | 0.4385 | | 1.6191 | 5.86 | 18 | 1.3516 | 0.5231 | | 1.6209 | 6.86 | 21 | 1.2176 | 0.5538 | | 1.6209 | 7.86 | 24 | 1.1683 | 0.5692 | | 1.371 | 8.86 | 27 | 1.0885 | 0.5923 | | 1.1568 | 9.86 | 30 | 1.0152 | 0.6385 | | 1.1568 | 10.86 | 33 | 0.9289 | 0.6385 | | 1.1023 | 11.86 | 36 | 0.9141 | 0.6308 | | 1.1023 | 12.86 | 39 | 0.8526 | 0.6462 | | 0.9448 | 13.86 | 42 | 0.8420 | 0.6769 | | 0.7972 | 14.86 | 45 | 0.7976 | 0.6692 | | 0.7972 | 15.86 | 48 | 0.8192 | 0.7308 | | 0.7793 | 16.86 | 51 | 0.7108 | 0.7615 | | 0.7793 | 17.86 | 54 | 0.6712 | 0.7769 | | 0.6468 | 18.86 | 57 | 0.6684 | 0.7923 | | 0.5083 | 19.86 | 60 | 0.6922 | 0.7385 | | 0.5083 | 20.86 | 63 | 0.6148 | 0.7923 | | 0.4988 | 21.86 | 66 | 0.5846 | 0.7923 | | 0.4988 | 22.86 | 69 | 0.6050 | 0.8154 | | 0.4123 | 23.86 | 72 | 0.5506 | 0.7846 | | 0.3511 | 24.86 | 75 | 0.6095 | 0.7846 | | 0.3511 | 25.86 | 78 | 0.5916 | 0.8154 | | 0.3268 | 26.86 | 81 | 0.5912 | 0.8077 | | 0.3268 | 27.86 | 84 | 0.5142 | 0.8538 | | 0.3036 | 28.86 | 87 | 0.5492 | 0.8077 | | 0.3066 | 29.86 | 90 | 0.6007 | 0.8231 | | 0.3066 | 30.86 | 93 | 0.5748 | 0.8231 | | 0.2538 | 31.86 | 96 | 0.6027 | 0.7692 | | 0.2538 | 32.86 | 99 | 0.6979 | 0.7462 | | 0.2281 | 33.86 | 102 | 0.7002 | 0.7615 | | 0.2183 | 34.86 | 105 | 0.6650 | 0.7769 | | 0.2183 | 35.86 | 108 | 0.5192 | 0.8462 | | 0.2202 | 36.86 | 111 | 0.5389 | 0.8308 | | 0.2202 | 37.86 | 114 | 0.5050 | 0.8385 | | 0.1906 | 38.86 | 117 | 0.5722 | 0.7769 | | 0.154 | 39.86 | 120 | 0.5239 | 0.8308 | | 0.154 | 40.86 | 123 | 0.4448 | 0.8615 | | 0.1474 | 41.86 | 126 | 0.4623 | 0.8615 | | 0.1474 | 42.86 | 129 | 0.4282 | 0.8615 | | 0.1345 | 43.86 | 132 | 0.5087 | 0.8615 | | 0.1567 | 44.86 | 135 | 0.4859 | 0.8385 | | 0.1567 | 45.86 | 138 | 0.6603 | 0.8077 | | 0.1731 | 46.86 | 141 | 0.5379 | 0.8385 | | 0.1731 | 47.86 | 144 | 0.8666 | 0.7538 | | 0.1606 | 48.86 | 147 | 0.7518 | 0.8 | | 0.1484 | 49.86 | 150 | 0.5986 | 0.8385 | | 0.1484 | 50.86 | 153 | 0.6368 | 0.8231 | | 0.2256 | 51.86 | 156 | 0.4639 | 0.8692 | | 0.2256 | 52.86 | 159 | 0.5533 | 0.8462 | | 0.1178 | 53.86 | 162 | 
0.5038 | 0.8615 | | 0.0815 | 54.86 | 165 | 0.5052 | 0.8692 | | 0.0815 | 55.86 | 168 | 0.4337 | 0.8846 | | 0.0998 | 56.86 | 171 | 0.4422 | 0.8769 | | 0.0998 | 57.86 | 174 | 0.4317 | 0.8692 | | 0.0855 | 58.86 | 177 | 0.4025 | 0.8923 | | 0.0962 | 59.86 | 180 | 0.4605 | 0.8769 | | 0.0962 | 60.86 | 183 | 0.4356 | 0.8769 | | 0.0763 | 61.86 | 186 | 0.4614 | 0.8769 | | 0.0763 | 62.86 | 189 | 0.4382 | 0.8846 | | 0.0902 | 63.86 | 192 | 0.4701 | 0.8692 | | 0.0654 | 64.86 | 195 | 0.4922 | 0.8692 | | 0.0654 | 65.86 | 198 | 0.5413 | 0.8538 | | 0.0651 | 66.86 | 201 | 0.5759 | 0.8615 | | 0.0651 | 67.86 | 204 | 0.4238 | 0.9 | | 0.0822 | 68.86 | 207 | 0.3500 | 0.9154 | | 0.0625 | 69.86 | 210 | 0.3878 | 0.8923 | | 0.0625 | 70.86 | 213 | 0.4952 | 0.8615 | | 0.0548 | 71.86 | 216 | 0.4544 | 0.8615 | | 0.0548 | 72.86 | 219 | 0.5497 | 0.8769 | | 0.054 | 73.86 | 222 | 0.4434 | 0.8846 | | 0.0543 | 74.86 | 225 | 0.4732 | 0.8769 | | 0.0543 | 75.86 | 228 | 0.4425 | 0.8923 | | 0.0881 | 76.86 | 231 | 0.4788 | 0.8769 | | 0.0881 | 77.86 | 234 | 0.5448 | 0.8769 | | 0.061 | 78.86 | 237 | 0.4221 | 0.9077 | | 0.0567 | 79.86 | 240 | 0.4404 | 0.8769 | | 0.0567 | 80.86 | 243 | 0.4099 | 0.9 | | 0.052 | 81.86 | 246 | 0.5259 | 0.8769 | | 0.052 | 82.86 | 249 | 0.5874 | 0.8692 | | 0.0444 | 83.86 | 252 | 0.5555 | 0.8846 | | 0.0332 | 84.86 | 255 | 0.5156 | 0.8615 | | 0.0332 | 85.86 | 258 | 0.4564 | 0.8615 | | 0.0449 | 86.86 | 261 | 0.4826 | 0.8692 | | 0.0449 | 87.86 | 264 | 0.4726 | 0.8615 | | 0.0385 | 88.86 | 267 | 0.4206 | 0.8846 | | 0.0356 | 89.86 | 270 | 0.4050 | 0.8769 | | 0.0356 | 90.86 | 273 | 0.4161 | 0.8923 | | 0.0391 | 91.86 | 276 | 0.4100 | 0.9077 | | 0.0391 | 92.86 | 279 | 0.4047 | 0.9 | | 0.0249 | 93.86 | 282 | 0.4044 | 0.9 | | 0.0399 | 94.86 | 285 | 0.3968 | 0.8846 | | 0.0399 | 95.86 | 288 | 0.3802 | 0.9 | | 0.031 | 96.86 | 291 | 0.3689 | 0.9 | | 0.031 | 97.86 | 294 | 0.3616 | 0.9077 | | 0.036 | 98.86 | 297 | 0.3584 | 0.9077 | | 0.0386 | 99.86 | 300 | 0.3574 | 0.9077 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
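Since the card lists only training details, here is a hedged usage sketch with the audio-classification pipeline; the audio file path is a placeholder for a short speech clip.

```python
from transformers import pipeline

# Hedged usage sketch; "speech_clip.wav" is a placeholder for a short speech recording.
classifier = pipeline("audio-classification", model="DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9")
print(classifier("speech_clip.wav", top_k=3))
```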
Chikashi/t5-small-finetuned-cnndm1
Chikashi
2022-03-28T22:00:26Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-28T14:55:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cnn_dailymail metrics: - rouge model-index: - name: t5-small-finetuned-cnndm1 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: cnn_dailymail type: cnn_dailymail args: 3.0.0 metrics: - name: Rouge1 type: rouge value: 24.4246 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-cnndm1 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.6853 - Rouge1: 24.4246 - Rouge2: 11.6944 - Rougel: 20.1717 - Rougelsum: 23.0424 - Gen Len: 18.9996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.912 | 0.14 | 5000 | 1.7167 | 24.4232 | 11.7049 | 20.1758 | 23.0345 | 18.9997 | | 1.8784 | 0.28 | 10000 | 1.7018 | 24.4009 | 11.6918 | 20.1561 | 23.0073 | 18.9997 | | 1.8628 | 0.42 | 15000 | 1.6934 | 24.385 | 11.683 | 20.1285 | 22.9823 | 18.9997 | | 1.8594 | 0.56 | 20000 | 1.6902 | 24.4407 | 11.6835 | 20.1734 | 23.0369 | 18.9996 | | 1.8537 | 0.7 | 25000 | 1.6864 | 24.3635 | 11.658 | 20.1318 | 22.9782 | 18.9993 | | 1.8505 | 0.84 | 30000 | 1.6856 | 24.4267 | 11.6991 | 20.1629 | 23.0361 | 18.9994 | | 1.8505 | 0.98 | 35000 | 1.6853 | 24.4246 | 11.6944 | 20.1717 | 23.0424 | 18.9996 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
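A hedged usage sketch with the summarization pipeline is shown below; the article text and length limits are placeholders, and the pipeline is assumed to apply t5-small's usual `summarize:` prefix.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm1")
article = (
    "The quick brown fox jumped over the lazy dog near the river bank, "
    "startling a flock of birds that had gathered by the water."
)  # placeholder article text
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```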
hf-test/xls-r-300m-sv
hf-test
2022-03-28T20:07:57Z
28
3
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hello", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "sv", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - sv-SE license: apache-2.0 tags: - automatic-speech-recognition - generated_from_trainer - hf-asr-leaderboard - hello - model_for_talk - mozilla-foundation/common_voice_7_0 - robust-speech-event - sv datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Swedish results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: sv-SE metrics: - name: Test WER type: wer value: 16.98 - name: Test CER type: cer value: 5.66 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: sv metrics: - name: Test WER type: wer value: 27.01 - name: Test CER type: cer value: 13.14 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-300m-SV This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset. It achieves the following results on the evaluation set: - Loss: 0.3171 - Wer: 0.2468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.3349 | 1.45 | 500 | 3.2858 | 1.0 | | 2.9298 | 2.91 | 1000 | 2.9225 | 1.0000 | | 2.0839 | 4.36 | 1500 | 1.1546 | 0.8295 | | 1.7093 | 5.81 | 2000 | 0.6827 | 0.5701 | | 1.5855 | 7.27 | 2500 | 0.5597 | 0.4947 | | 1.4831 | 8.72 | 3000 | 0.4923 | 0.4527 | | 1.4416 | 10.17 | 3500 | 0.4670 | 0.4270 | | 1.3848 | 11.63 | 4000 | 0.4341 | 0.3980 | | 1.3749 | 13.08 | 4500 | 0.4203 | 0.4011 | | 1.3311 | 14.53 | 5000 | 0.4310 | 0.3961 | | 1.317 | 15.99 | 5500 | 0.3898 | 0.4322 | | 1.2799 | 17.44 | 6000 | 0.3806 | 0.3572 | | 1.2771 | 18.89 | 6500 | 0.3828 | 0.3427 | | 1.2451 | 20.35 | 7000 | 0.3702 | 0.3359 | | 1.2182 | 21.8 | 7500 | 0.3685 | 0.3270 | | 1.2152 | 23.26 | 8000 | 0.3650 | 0.3308 | | 1.1837 | 24.71 | 8500 | 0.3568 | 0.3187 | | 1.1721 | 26.16 | 9000 | 0.3659 | 0.3249 | | 1.1764 | 27.61 | 9500 | 0.3547 | 0.3145 | | 1.1606 | 29.07 | 10000 | 0.3514 | 0.3104 | | 1.1431 | 30.52 | 10500 | 0.3469 | 0.3062 | | 1.1047 | 31.97 | 11000 | 0.3313 | 0.2979 | | 1.1315 | 33.43 | 11500 | 0.3298 | 0.2992 | | 1.1022 | 34.88 | 12000 | 0.3296 | 0.2973 | | 1.0935 | 36.34 | 12500 | 0.3278 | 0.2926 | | 1.0676 | 37.79 | 13000 | 0.3208 | 0.2868 | | 1.0571 | 39.24 | 13500 | 0.3322 | 0.2885 | | 1.0536 | 40.7 | 14000 | 0.3245 | 0.2831 | | 1.0525 | 42.15 | 14500 | 0.3285 | 0.2826 | | 1.0464 | 43.6 | 15000 | 0.3223 | 0.2796 | | 1.0415 | 45.06 | 15500 | 0.3166 | 0.2774 | | 1.0356 | 46.51 | 16000 | 0.3177 | 0.2746 | | 1.04 | 47.96 | 16500 | 0.3150 | 0.2735 | | 1.0209 | 49.42 | 17000 | 0.3175 | 0.2731 | ### Framework versions - 
Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3

#### Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`

```bash
python eval.py --model_id hf-test/xls-r-300m-sv --dataset mozilla-foundation/common_voice_7_0 --config sv-SE --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py --model_id hf-test/xls-r-300m-sv --dataset speech-recognition-community-v2/dev_data --config sv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```

### Inference With LM

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "hf-test/xls-r-300m-sv"

sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "sv-SE", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

transcription = processor.batch_decode(logits.numpy()).text
# => "jag lämnade grovjobbet åt honom"
```

### Eval results on Common Voice 7 "test" (WER):

| Without LM | With LM (run `./eval.py`) |
|---|---|
| 24.68 | 16.98 |
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v2
DrishtiSharma
2022-03-28T19:04:20Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-28T17:20:20Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-sentiment-mesd-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-sentiment-mesd-v2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7213 - Accuracy: 0.3923 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.25e-05 - train_batch_size: 64 - eval_batch_size: 40 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.86 | 3 | 1.7961 | 0.1462 | | 1.9685 | 1.86 | 6 | 1.7932 | 0.1692 | | 1.9685 | 2.86 | 9 | 1.7891 | 0.2 | | 2.1386 | 3.86 | 12 | 1.7820 | 0.2923 | | 1.9492 | 4.86 | 15 | 1.7750 | 0.2923 | | 1.9492 | 5.86 | 18 | 1.7684 | 0.2846 | | 2.1143 | 6.86 | 21 | 1.7624 | 0.3231 | | 2.1143 | 7.86 | 24 | 1.7561 | 0.3308 | | 2.0945 | 8.86 | 27 | 1.7500 | 0.3462 | | 1.9121 | 9.86 | 30 | 1.7443 | 0.3385 | | 1.9121 | 10.86 | 33 | 1.7386 | 0.3231 | | 2.0682 | 11.86 | 36 | 1.7328 | 0.3231 | | 2.0682 | 12.86 | 39 | 1.7272 | 0.3769 | | 2.0527 | 13.86 | 42 | 1.7213 | 0.3923 | | 1.8705 | 14.86 | 45 | 1.7154 | 0.3846 | | 1.8705 | 15.86 | 48 | 1.7112 | 0.3846 | | 2.0263 | 16.86 | 51 | 1.7082 | 0.3769 | | 2.0263 | 17.86 | 54 | 1.7044 | 0.3846 | | 2.0136 | 18.86 | 57 | 1.7021 | 0.3846 | | 1.8429 | 19.86 | 60 | 1.7013 | 0.3846 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6