| column | dtype | range |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-01 06:29:04 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 530 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-01 06:28:51 |
| card | string | lengths 11 to 1.01M |
facebook/wav2vec2-base-10k-voxpopuli-ft-lt
facebook
2021-05-05T16:24:29Z
0
0
null
[ "audio", "automatic-speech-recognition", "voxpopuli", "lt", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: lt tags: - audio - automatic-speech-recognition - voxpopuli license: cc-by-nc-4.0 --- # Wav2Vec2-Base-VoxPopuli-Finetuned [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in lt (refer to Table 1 of the paper for more information). **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI* See the [official website](https://github.com/facebookresearch/voxpopuli/) for more information. # Usage for inference The following shows how to run inference with the model on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets). ```python #!/usr/bin/env python3 from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torchaudio import torch # load model & processor model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-lt") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-lt") # load dataset ds = load_dataset("common_voice", "lt", split="validation[:1%]") # common voice audio is 48 kHz and does not match the model's 16 kHz sampling rate common_voice_sample_rate = 48000 target_sample_rate = 16000 resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate) # define mapping fn to read in sound file and resample def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) speech = resampler(speech) batch["speech"] = speech[0] return batch # load all audio files ds = ds.map(map_to_array) # run inference on the first 5 data samples inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True) # inference logits = model(**inputs).logits predicted_ids = torch.argmax(logits, dim=-1) print(processor.batch_decode(predicted_ids)) ```
facebook/wav2vec2-base-10k-voxpopuli-ft-et
facebook
2021-05-05T16:24:26Z
0
0
null
[ "audio", "automatic-speech-recognition", "voxpopuli", "et", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: et tags: - audio - automatic-speech-recognition - voxpopuli license: cc-by-nc-4.0 --- # Wav2Vec2-Base-VoxPopuli-Finetuned [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in et (refer to Table 1 of the paper for more information). **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI* See the [official website](https://github.com/facebookresearch/voxpopuli/) for more information. # Usage for inference The following shows how to run inference with the model on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets). ```python #!/usr/bin/env python3 from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torchaudio import torch # load model & processor model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-et") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-et") # load dataset ds = load_dataset("common_voice", "et", split="validation[:1%]") # common voice audio is 48 kHz and does not match the model's 16 kHz sampling rate common_voice_sample_rate = 48000 target_sample_rate = 16000 resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate) # define mapping fn to read in sound file and resample def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) speech = resampler(speech) batch["speech"] = speech[0] return batch # load all audio files ds = ds.map(map_to_array) # run inference on the first 5 data samples inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True) # inference logits = model(**inputs).logits predicted_ids = torch.argmax(logits, dim=-1) print(processor.batch_decode(predicted_ids)) ```
xcjthu/Lawformer
xcjthu
2021-05-05T11:57:20Z
47
7
transformers
[ "transformers", "pytorch", "longformer", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
## Lawformer ### Introduction This repository provides the source code and checkpoints of the paper "Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents". You can download the checkpoint from the [huggingface model hub](https://huggingface.co/xcjthu/Lawformer) or from [here](https://data.thunlp.org/legal/Lawformer.zip). ### Easy Start We have uploaded our model to the huggingface model hub. Make sure you have installed transformers. ```python >>> from transformers import AutoModel, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext") >>> model = AutoModel.from_pretrained("xcjthu/Lawformer") >>> inputs = tokenizer("任某提起诉讼,请求判令解除婚姻关系并对夫妻共同财产进行分割。", return_tensors="pt") >>> outputs = model(**inputs) ``` ### Cite If you use the pre-trained models, please cite this paper: ``` @article{xiao2021lawformer, title={Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents}, author={Xiao, Chaojun and Hu, Xueyu and Liu, Zhiyuan and Tu, Cunchao and Sun, Maosong}, year={2021} } ```
stas/tiny-wmt19-en-de
stas
2021-05-03T01:48:44Z
400
0
transformers
[ "transformers", "pytorch", "fsmt", "text2text-generation", "wmt19", "testing", "en", "de", "dataset:wmt19", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en - de thumbnail: tags: - wmt19 - testing license: apache-2.0 datasets: - wmt19 metrics: - bleu --- # Tiny FSMT en-de This is a tiny model that is used in the `transformers` test suite. It doesn't do anything useful, other than testing that `modeling_fsmt.py` is functional. Do not try to use it for anything that requires quality. The model is indeed 1MB in size. You can see how it was created [here](https://huggingface.co/stas/tiny-wmt19-en-de/blob/main/fsmt-make-tiny-model.py). If you're looking for the real model, please go to [https://huggingface.co/facebook/wmt19-en-de](https://huggingface.co/facebook/wmt19-en-de).
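A minimal smoke-test sketch for this checkpoint, assuming the standard `transformers` FSMT classes (the output is meaningless by design, as noted above):

```python
# Minimal smoke test: verify that the tiny FSMT checkpoint loads and generates.
# The generated text is meaningless by design; only the plumbing matters here.
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

model_id = "stas/tiny-wmt19-en-de"
tokenizer = FSMTTokenizer.from_pretrained(model_id)
model = FSMTForConditionalGeneration.from_pretrained(model_id)

batch = tokenizer(["Machine learning is great!"], return_tensors="pt")
generated = model.generate(**batch, max_new_tokens=10)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```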
stas/tiny-wmt19-en-ru
stas
2021-05-03T01:47:47Z
3,371
0
transformers
[ "transformers", "pytorch", "fsmt", "text2text-generation", "wmt19", "testing", "en", "ru", "dataset:wmt19", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en - ru thumbnail: tags: - wmt19 - testing license: apache-2.0 datasets: - wmt19 metrics: - bleu --- # Tiny FSMT en-ru This is a tiny model that is used in the `transformers` test suite. It doesn't do anything useful, other than testing that `modeling_fsmt.py` is functional. Do not try to use it for anything that requires quality. The model is indeed 30KB in size. You can see how it was created [here](https://huggingface.co/stas/tiny-wmt19-en-ru/blob/main/fsmt-make-super-tiny-model.py). If you're looking for the real model, please go to [https://huggingface.co/facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru).
MarshallHo/albertZero-squad2-base-v2
MarshallHo
2021-05-02T16:41:46Z
0
0
null
[ "arxiv:1909.11942", "arxiv:1810.04805", "arxiv:1806.03822", "arxiv:2001.09694", "region:us" ]
null
2022-03-02T23:29:04Z
# albertZero albertZero is a PyTorch model with a prediction head fine-tuned for SQuAD 2.0. Based on Hugging Face's albert-base-v2, albertZero employs a novel method to speed up fine-tuning. It re-initializes the weights of the final linear layer in the shared ALBERT transformer block, resulting in a 2-percentage-point improvement during the early epochs of fine-tuning. ## Usage albertZero can be loaded like this: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('MarshallHo/albertZero-squad2-base-v2') model = AutoModel.from_pretrained('MarshallHo/albertZero-squad2-base-v2') ``` or ```python import torch from transformers import AlbertModel, AlbertTokenizer, AlbertForQuestionAnswering, AlbertPreTrainedModel mytokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') # AlbertForQuestionAnsweringAVPool is the custom SQuAD 2.0 prediction-head class described in the References section model = AlbertForQuestionAnsweringAVPool.from_pretrained('albert-base-v2') model.load_state_dict(torch.load('albertZero-squad2-base-v2.bin')) ``` ## References The goal of [ALBERT](https://arxiv.org/abs/1909.11942) is to reduce the memory requirement of the groundbreaking language model [BERT](https://arxiv.org/abs/1810.04805), while providing a similar level of performance. ALBERT mainly uses two methods to reduce the number of parameters – parameter sharing and factorized embedding. The field of NLP has undergone major improvements in recent years. The replacement of recurrent architectures by attention-based models has allowed NLP tasks such as question-answering to approach human-level performance. In order to push the limits further, the [SQuAD2.0](https://arxiv.org/abs/1806.03822) dataset was created in 2018 with 50,000 additional unanswerable questions, addressing a major weakness of the original version of the dataset. At the time of writing, near the top of the [SQuAD2.0 leaderboard](https://rajpurkar.github.io/SQuAD-explorer/) is Shanghai Jiao Tong University's [Retro-Reader](http://arxiv.org/abs/2001.09694). We have re-implemented their non-ensemble ALBERT model with the SQuAD2.0 prediction head. ## Acknowledgments Thanks to the generosity of the team at Hugging Face and all the groups referenced above!
mlcorelib/debertav2-base-uncased
mlcorelib
2021-05-01T12:53:51Z
4
0
transformers
[ "transformers", "pytorch", "tf", "jax", "rust", "bert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - exbert license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. 
[SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. 
- In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=bert-base-uncased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
mlcorelib/deberta-base-uncased
mlcorelib
2021-05-01T12:33:45Z
8
0
transformers
[ "transformers", "pytorch", "tf", "jax", "rust", "bert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - exbert license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. 
[SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. 
- In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=bert-base-uncased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
julien-c/kan-bayashi-jsut_tts_train_tacotron2_ja
julien-c
2021-04-30T10:08:45Z
6
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 inference: false --- ## Example ESPnet2 TTS model ♻️ Imported from https://zenodo.org/record/3963886/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). Model id: `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.loss.best` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
vasudevgupta/bigbird-roberta-large
vasudevgupta
2021-04-30T07:36:35Z
5
0
transformers
[ "transformers", "pytorch", "big_bird", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
Moved here: https://huggingface.co/google/bigbird-roberta-large
vasudevgupta/dl-hack-pegasus-large
vasudevgupta
2021-04-30T07:33:27Z
3
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
Deep Learning research papers **Title -> abstract**
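The card above gives no usage snippet, so here is a hedged sketch of loading the checkpoint with the generic `transformers` seq2seq API; the assumption that the source side is a paper title (per the **Title -> abstract** note) and the example title are illustrative, not documented:

```python
# Hypothetical usage sketch: generate text from a paper title with the Pegasus fine-tune.
# Assumption: input is a title and output is an abstract, per the "Title -> abstract" note above.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vasudevgupta/dl-hack-pegasus-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Attention Is All You Need", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```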
nbouali/flaubert-base-uncased-finetuned-cooking
nbouali
2021-04-28T16:02:59Z
351
1
transformers
[ "transformers", "pytorch", "flaubert", "text-classification", "french", "flaubert-base-uncased", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fr tags: - text-classification - flaubert - french - flaubert-base-uncased widget: - text: "Lasagnes à la bolognaise" --- # FlauBERT finetuned on French cooking recipes This model is finetuned on a sequence classification task that associates each sequence with the appropriate recipe category. ### How to use it? ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification from transformers import TextClassificationPipeline loaded_tokenizer = AutoTokenizer.from_pretrained("nbouali/flaubert-base-uncased-finetuned-cooking") loaded_model = AutoModelForSequenceClassification.from_pretrained("nbouali/flaubert-base-uncased-finetuned-cooking") nlp = TextClassificationPipeline(model=loaded_model,tokenizer=loaded_tokenizer,task="Recipe classification") print(nlp("Lasagnes à la bolognaise")) ``` ``` [{'label': 'LABEL_6', 'score': 0.9921900033950806}] ``` ### Label encoding: | label | Recipe Category | |:------:|:--------------:| | 0 |'Accompagnement' | | 1 | 'Amuse-gueule' | | 2 | 'Boisson' | | 3 | 'Confiserie' | | 4 | 'Dessert'| | 5 | 'Entrée' | | 6 |'Plat principal' | | 7 | 'Sauce' | <br/> <br/> > If you would like to know more about this model you can refer to [our blog post](https://medium.com/unify-data-office/a-cooking-language-model-fine-tuned-on-dozens-of-thousands-of-french-recipes-bcdb8e560571)
mrm8488/electricidad-base-finetuned-pawsx-es
mrm8488
2021-04-28T15:52:25Z
5
1
transformers
[ "transformers", "pytorch", "electra", "text-classification", "nli", "es", "dataset:xtreme", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: es datasets: - xtreme tags: - nli widget: - text: "El río Tabaci es una vertiente del río Leurda en Rumania. El río Leurda es un afluente del río Tabaci en Rumania." --- # Electricidad-base fine-tuned on PAWS-X-es for Paraphrase Identification (NLI)
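A minimal sketch of scoring a sentence pair with this model through the standard `transformers` text-classification pipeline, using the widget example from the card; the label-to-class mapping is not documented, so the raw labels are printed as-is:

```python
# Sketch: paraphrase identification (NLI-style) on the card's widget example.
# Label names come from the checkpoint's config; the card does not document their meaning.
from transformers import pipeline

clf = pipeline("text-classification", model="mrm8488/electricidad-base-finetuned-pawsx-es")

text = (
    "El río Tabaci es una vertiente del río Leurda en Rumania. "
    "El río Leurda es un afluente del río Tabaci en Rumania."
)
print(clf(text))  # e.g. [{'label': 'LABEL_0', 'score': ...}]
```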
mrm8488/camembert-base-finetuned-pawsx-fr
mrm8488
2021-04-28T15:51:53Z
4
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "nli", "fr", "dataset:xtreme", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fr datasets: - xtreme tags: - nli widget: - text: "La première série a été mieux reçue par la critique que la seconde. La seconde série a été bien accueillie par la critique, mieux que la première." --- # Camembert-base fine-tuned on PAWS-X-fr for Paraphrase Identification (NLI)
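As with the Spanish PAWS-X model above, a minimal hedged sketch using the card's widget example (the label mapping is likewise undocumented):

```python
# Sketch: paraphrase identification on the card's French widget example.
from transformers import pipeline

clf = pipeline("text-classification", model="mrm8488/camembert-base-finetuned-pawsx-fr")

text = (
    "La première série a été mieux reçue par la critique que la seconde. "
    "La seconde série a été bien accueillie par la critique, mieux que la première."
)
print(clf(text))
```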
AimB/mT5-en-kr-natural
AimB
2021-04-28T12:47:22Z
16
2
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
You can use this model with simpletransformers. ``` !pip install simpletransformers from simpletransformers.t5 import T5Model model = T5Model("mt5", "AimB/mT5-en-kr-natural") print(model.predict(["I feel good today"])) print(model.predict(["우리집 고양이는 세상에서 제일 귀엽습니다"])) ```
anukaver/xlm-roberta-est-qa
anukaver
2021-04-27T10:47:18Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "dataset:squad", "dataset:anukaver/EstQA", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - question-answering datasets: - squad - anukaver/EstQA --- # Question answering model for Estonian This is a question answering model based on the XLM-RoBERTa base model. It is fine-tuned sequentially on: 1. English SQuAD v1.1 2. SQuAD v1.1 translated into Estonian 3. a small native Estonian dataset (800 samples) The model has retained good multilingual properties and can be used for extractive QA tasks in all languages included in XLM-RoBERTa. The performance is best in the fine-tuning languages of Estonian and English. | Tested on | F1 | EM | | ----------- | --- | --- | | EstQA test set | 82.4 | 75.3 | | SQuAD v1.1 dev set | 86.9 | 77.9 | The Estonian dataset used for fine-tuning and validating results is available at https://huggingface.co/datasets/anukaver/EstQA/ (version 1.0)
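A minimal sketch of extractive question answering with this model via the standard `transformers` pipeline; the question and context below are illustrative examples, not taken from EstQA:

```python
# Sketch: extractive QA with the XLM-RoBERTa-based Estonian/English model.
from transformers import pipeline

qa = pipeline("question-answering", model="anukaver/xlm-roberta-est-qa")

result = qa(
    question="When was the University of Tartu founded?",
    context="The University of Tartu is a university in Estonia. It was founded in 1632.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': '1632'}
```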
mitra-mir/ALBERT-Persian-Poetry
mitra-mir
2021-04-27T06:55:48Z
4
0
transformers
[ "transformers", "pytorch", "tf", "albert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
A Transformer-based Persian Language Model Further Pretrained on Persian Poetry ALBERT was first introduced by [Hooshvare](https://huggingface.co/HooshvareLab/albert-fa-zwnj-base-v2?text=%D8%B2+%D8%A2%D9%86+%D8%AF%D8%B1%D8%AF%D8%B4+%5BMASK%5D+%D9%85%DB%8C+%D8%B3%D9%88%D8%AE%D8%AA+%D8%AF%D8%B1+%D8%A8%D8%B1) with a 30,000-token vocabulary as a lite BERT for self-supervised learning of language representations for the Persian language. Here we wanted to utilize its capabilities by pretraining it on a large corpus of Persian poetry. This model has been post-trained on 80 percent of the poetry verses of the Persian poetry dataset (Ganjoor) and has been evaluated on the other 20 percent.
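A minimal fill-mask sketch, assuming the checkpoint works with the standard `transformers` fill-mask pipeline; the masked verse is taken from the widget query in the Hooshvare link above:

```python
# Sketch: masked-token prediction with the poetry-adapted Persian ALBERT.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mitra-mir/ALBERT-Persian-Poetry")

# Example verse from the widget query linked above, with the model's own mask token inserted.
text = f"ز آن دردش {fill_mask.tokenizer.mask_token} می سوخت در بر"
print(fill_mask(text))
```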
jacob-valdez/blenderbot-small-tflite
jacob-valdez
2021-04-25T00:47:29Z
0
1
null
[ "tflite", "Android", "blenderbot", "en", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: "en" #thumbnail: "url to a thumbnail used in social sharing" tags: - Android - tflite - blenderbot license: "apache-2.0" #datasets: #metrics: --- # Model Card `blenderbot-small-tflite` is a tflite version of `blenderbot-small-90M` I converted for my UTA CSE3310 class. See the repo at [https://github.com/kmosoti/DesparadosAEYE](https://github.com/kmosoti/DesparadosAEYE) and the conversion process [here](https://drive.google.com/file/d/1F93nMsDIm1TWhn70FcLtcaKQUynHq9wS/view?usp=sharing). You have to right pad your user and model input integers to make them [32,]-shaped. Then indicate te true length with the 3rd and 4th params. ```python display(interpreter.get_input_details()) display(interpreter.get_output_details()) ``` ```json [{'dtype': numpy.int32, 'index': 0, 'name': 'input_tokens', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([32], dtype=int32), 'shape_signature': array([32], dtype=int32), 'sparsity_parameters': {}}, {'dtype': numpy.int32, 'index': 1, 'name': 'decoder_input_tokens', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([32], dtype=int32), 'shape_signature': array([32], dtype=int32), 'sparsity_parameters': {}}, {'dtype': numpy.int32, 'index': 2, 'name': 'input_len', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([], dtype=int32), 'shape_signature': array([], dtype=int32), 'sparsity_parameters': {}}, {'dtype': numpy.int32, 'index': 3, 'name': 'decoder_input_len', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([], dtype=int32), 'shape_signature': array([], dtype=int32), 'sparsity_parameters': {}}] [{'dtype': numpy.int32, 'index': 3113, 'name': 'Identity', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'sparsity_parameters': {}}] ```
glasses/cse_resnet50
glasses
2021-04-24T10:50:58Z
2
0
transformers
[ "transformers", "pytorch", "arxiv:1512.03385", "arxiv:1812.01187", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# cse_resnet50 Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own one by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000 ) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change the shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features, this will activate the forward hooks and tell the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) # [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
spencerh/leftpartisan
spencerh
2021-04-23T19:27:15Z
5
0
transformers
[ "transformers", "pytorch", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Text classifier using DistilBERT to determine Partisanship ## This is one of many single-class partisanship models label_0 refers to "left" while label_1 refers to "other". This model was trained on 40,000 articles. ### Best Practices This model was optimized for text of up to 512 tokens; text shorter than about 150 tokens will produce inaccurate results.
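A minimal classification sketch, assuming the checkpoint works with the standard `transformers` text-classification pipeline; per the note above, inputs should be long article text rather than short snippets:

```python
# Sketch: single-label partisanship scoring. Per the card, label_0 = "left", label_1 = "other".
from transformers import pipeline

clf = pipeline("text-classification", model="spencerh/leftpartisan")

article_text = "..."  # paste a full news article here; texts under ~150 tokens are unreliable
print(clf(article_text))
```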
glasses/deit_base_patch16_224
glasses
2021-04-22T18:44:42Z
5
0
transformers
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# deit_base_patch16_224 Implementation of DeiT proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/pdf/2010.11929.pdf) An attention-based distillation is proposed in which a new token, the `dist` token, is added to the model. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DeiT.png?raw=true) ``` python DeiT.deit_tiny_patch16_224() DeiT.deit_small_patch16_224() DeiT.deit_base_patch16_224() DeiT.deit_base_patch16_384() ```
glasses/deit_small_patch16_224
glasses
2021-04-22T18:44:25Z
2
0
transformers
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# deit_small_patch16_224 Implementation of DeiT proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/pdf/2010.11929.pdf) An attention-based distillation is proposed in which a new token, the `dist` token, is added to the model. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DeiT.png?raw=true) ``` python DeiT.deit_tiny_patch16_224() DeiT.deit_small_patch16_224() DeiT.deit_base_patch16_224() DeiT.deit_base_patch16_384() ```
glasses/deit_tiny_patch16_224
glasses
2021-04-22T18:44:18Z
3
0
transformers
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# deit_tiny_patch16_224 Implementation of DeiT proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/pdf/2010.11929.pdf) An attention-based distillation is proposed in which a new token, the `dist` token, is added to the model. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DeiT.png?raw=true) ``` python DeiT.deit_tiny_patch16_224() DeiT.deit_small_patch16_224() DeiT.deit_base_patch16_224() DeiT.deit_base_patch16_384() ```
glasses/vit_large_patch16_384
glasses
2021-04-22T18:43:25Z
2
0
transformers
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# vit_large_patch16_384 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000 ) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features, this will activate the forward hooks and tell the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) # [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # change the tokens, you have to subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
glasses/vit_huge_patch32_384
glasses
2021-04-22T18:41:37Z
6
0
transformers
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# vit_huge_patch32_384 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000 ) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features, this will activate the forward hooks and tell the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) # [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # change the tokens, you have to subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
glasses/vit_huge_patch16_224
glasses
2021-04-22T18:39:36Z
3
0
transformers
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# vit_huge_patch16_224 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000 ) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features, this will activate the forward hooks and tell the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) # [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # change the tokens, you have to subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
k948181/ybdH-1
k948181
2021-04-22T13:34:20Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
>tr|Q8ZR27|Q8ZR27_SALTY Putative glycerol dehydrogenase OS=Salmonella typhimurium (strain LT2 / SGSC1412 / ATCC 700720) OX=99287 GN=ybdH PE=3 SV=1 MNHTEIRVVTGPANYFSHAGSLERLTDFFTPEQLSHAVWVYGERAIAAARPYLPEAFERA GAKHLPFTGHCSERHVAQLAHACNDDRQVVIGVGGGALLDTAKALARRLALPFVAIPTIA ATCAAWTPLSVWYNDAGQALQFEIFDDANFLVLVEPRIILQAPDDYLLAGIGDTLAKWYE AVVLAPQPETLPLTVRLGINSACAIRDLLLDSSEQALADKQQRRLTQAFCDVVDAIIAGG GMVGGLGERYTRVAAAHAVHNGLTVLPQTEKFLHGTKVAYGILVQSALLGQDDVLAQLIT AYRRFHLPARLSELDVDIHNTAEIDRVIAHTLRPVESIHYLPVTLTPDTLRAAFEKVEFF RI
glasses/dummy
glasses
2021-04-21T18:24:15Z
3
0
transformers
[ "transformers", "pytorch", "arxiv:1512.03385", "arxiv:1812.01187", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# ResNet Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own one by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000 ) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change the shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features, this will activate the forward hooks and tell the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) # [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
ahmedabdelali/bert-base-qarib_far_6500k
ahmedabdelali
2021-04-21T13:41:11Z
9
0
transformers
[ "transformers", "pytorch", "tf", "QARiB", "qarib", "ar", "dataset:arabic_billion_words", "dataset:open_subtitles", "dataset:twitter", "dataset:Farasa", "arxiv:2102.10684", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: ar tags: - pytorch - tf - QARiB - qarib datasets: - arabic_billion_words - open_subtitles - twitter - Farasa metrics: - f1 widget: - text: "و+قام ال+مدير [MASK]" --- # QARiB: QCRI Arabic and Dialectal BERT ## About QARiB Farasa QCRI Arabic and Dialectal BERT (QARiB) model, was trained on a collection of ~ 420 Million tweets and ~ 180 Million sentences of text. For the tweets, the data was collected using twitter API and using language filter. `lang:ar`. For the text data, it was a combination from [Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/). QARiB: Is the Arabic name for "Boat". ## Model and Parameters: - Data size: 14B tokens - Vocabulary: 64k - Iterations: 10M - Number of Layers: 12 ## Training QARiB See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md) ## Using QARiB You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md) This model expects the data to be segmented. You may use [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>>from transformers import pipeline >>>fill_mask = pipeline("fill-mask", model="./models/bert-base-qarib_far") >>> fill_mask("و+قام ال+مدير [MASK]") [ ] >>> fill_mask("و+قام+ت ال+مدير+ة [MASK]") [ ] >>> fill_mask("قللي وشفيييك يرحم [MASK]") [ ] ``` ## Evaluations: |**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**| |---------------|---------|--------------|--------------|--------------|---------| |Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** | |Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** | |Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% | |Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** | |Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% | ## Model Weights and Vocab Download From Huggingface site: https://huggingface.co/qarib/bert-base-qarib_far ## Contacts Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih ## Reference ``` @article{abdelali2021pretraining, title={Pre-Training BERT on Arabic Tweets: Practical Considerations}, author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih}, year={2021}, eprint={2102.10684}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ahmedabdelali/bert-base-qarib_far_8280k
ahmedabdelali
2021-04-21T13:40:36Z
20
0
transformers
[ "transformers", "pytorch", "tf", "QARiB", "qarib", "ar", "dataset:arabic_billion_words", "dataset:open_subtitles", "dataset:twitter", "dataset:Farasa", "arxiv:2102.10684", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: ar tags: - pytorch - tf - QARiB - qarib datasets: - arabic_billion_words - open_subtitles - twitter - Farasa metrics: - f1 widget: - text: "و+قام ال+مدير [MASK]" --- # QARiB: QCRI Arabic and Dialectal BERT ## About QARiB Farasa QCRI Arabic and Dialectal BERT (QARiB) model, was trained on a collection of ~ 420 Million tweets and ~ 180 Million sentences of text. For the tweets, the data was collected using twitter API and using language filter. `lang:ar`. For the text data, it was a combination from [Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/). QARiB: Is the Arabic name for "Boat". ## Model and Parameters: - Data size: 14B tokens - Vocabulary: 64k - Iterations: 10M - Number of Layers: 12 ## Training QARiB See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md) ## Using QARiB You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md) This model expects the data to be segmented. You may use [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>>from transformers import pipeline >>>fill_mask = pipeline("fill-mask", model="./models/bert-base-qarib_far") >>> fill_mask("و+قام ال+مدير [MASK]") [ ] >>> fill_mask("و+قام+ت ال+مدير+ة [MASK]") [ ] >>> fill_mask("قللي وشفيييك يرحم [MASK]") [ ] ``` ## Evaluations: |**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**| |---------------|---------|--------------|--------------|--------------|---------| |Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** | |Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** | |Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% | |Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** | |Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% | ## Model Weights and Vocab Download From Huggingface site: https://huggingface.co/qarib/bert-base-qarib_far ## Contacts Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih ## Reference ``` @article{abdelali2021pretraining, title={Pre-Training BERT on Arabic Tweets: Practical Considerations}, author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih}, year={2021}, eprint={2102.10684}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ahmedabdelali/bert-base-qarib_far_9920k
ahmedabdelali
2021-04-21T13:38:28Z
5
0
transformers
[ "transformers", "pytorch", "tf", "QARiB", "qarib", "ar", "dataset:arabic_billion_words", "dataset:open_subtitles", "dataset:twitter", "dataset:Farasa", "arxiv:2102.10684", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: ar tags: - pytorch - tf - QARiB - qarib datasets: - arabic_billion_words - open_subtitles - twitter - Farasa metrics: - f1 widget: - text: "و+قام ال+مدير [MASK]" --- # QARiB: QCRI Arabic and Dialectal BERT ## About QARiB Farasa QCRI Arabic and Dialectal BERT (QARiB) model, was trained on a collection of ~ 420 Million tweets and ~ 180 Million sentences of text. For the tweets, the data was collected using twitter API and using language filter. `lang:ar`. For the text data, it was a combination from [Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/). QARiB: Is the Arabic name for "Boat". ## Model and Parameters: - Data size: 14B tokens - Vocabulary: 64k - Iterations: 10M - Number of Layers: 12 ## Training QARiB See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md) ## Using QARiB You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md) This model expects the data to be segmented. You may use [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>>from transformers import pipeline >>>fill_mask = pipeline("fill-mask", model="./models/bert-base-qarib_far") >>> fill_mask("و+قام ال+مدير [MASK]") [ ] >>> fill_mask("و+قام+ت ال+مدير+ة [MASK]") [ ] >>> fill_mask("قللي وشفيييك يرحم [MASK]") [ ] ``` ## Evaluations: |**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**| |---------------|---------|--------------|--------------|--------------|---------| |Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** | |Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** | |Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% | |Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** | |Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% | ## Model Weights and Vocab Download From Huggingface site: https://huggingface.co/qarib/bert-base-qarib_far ## Contacts Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih ## Reference ``` @article{abdelali2021pretraining, title={Pre-Training BERT on Arabic Tweets: Practical Considerations}, author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih}, year={2021}, eprint={2102.10684}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
stas/t5-very-small-random
stas
2021-04-21T02:34:01Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
This is a tiny random T5 model used for testing. See `t5-make-very-small-model.py` for how it was created.
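A minimal smoke-test sketch of how such a tiny random checkpoint is typically used in tests; it assumes the repository also ships a tokenizer, and the generated text is meaningless by design since the weights are random.

```python
# Smoke test with the tiny random T5 checkpoint (outputs are meaningless;
# assumes a tokenizer is bundled with the repo).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stas/t5-very-small-random")
model = AutoModelForSeq2SeqLM.from_pretrained("stas/t5-very-small-random")

inputs = tokenizer("translate English to German: Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```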
castorini/ance-dpr-question-multi
castorini
2021-04-21T01:36:24Z
143
1
transformers
[ "transformers", "pytorch", "dpr", "feature-extraction", "arxiv:2007.00808", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini: > Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf) For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
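For use outside Pyserini, a hedged sketch of encoding a question with plain transformers follows; it assumes the converted checkpoint loads as a `DPRQuestionEncoder` (consistent with this repo's tags) and that tokenizer files are included.

```python
# Hedged sketch: encode a question with the converted ANCE checkpoint directly
# in transformers; for end-to-end retrieval, follow the Pyserini docs above.
import torch
from transformers import AutoTokenizer, DPRQuestionEncoder

tokenizer = AutoTokenizer.from_pretrained("castorini/ance-dpr-question-multi")
encoder = DPRQuestionEncoder.from_pretrained("castorini/ance-dpr-question-multi")

inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
with torch.no_grad():
    embedding = encoder(**inputs).pooler_output  # dense query vector
print(embedding.shape)
```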
Davlan/mT5_base_yoruba_adr
Davlan
2021-04-20T21:16:26Z
24
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "arxiv:2003.10564", "arxiv:2103.08647", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
---
language: yo
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---

# mT5_base_yoruba_adr

## Model description
**mT5_base_yoruba_adr** is an **automatic diacritics restoration** model for the Yorùbá language based on a fine-tuned mT5-base model. It achieves **state-of-the-art performance** for adding the correct diacritics or tonal marks to Yorùbá texts. Specifically, this model is a *mT5_base* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).

## Intended uses & limitations
#### How to use
You can use this model with the Transformers *pipeline* for ADR.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Davlan/mT5_base_yoruba_adr")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/mT5_base_yoruba_adr")
adr = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
example = "awon eniyan"  # undiacritized Yorùbá input (illustrative)
print(adr(example))
```

#### Limitations and bias
This model is limited by its training data, which covers a specific set of domains. It may not generalize well to all use cases in different domains.

## Training data
This model was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.

## Training procedure
This model was trained on a single NVIDIA V100 GPU.

## Eval results on Test set (BLEU score)
64.63 BLEU on [Global Voices test set](https://arxiv.org/abs/2003.10564)
70.27 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)

### BibTeX entry and citation info
By Jesujoba Alabi and David Adelani
```
```
moha/arabert_arabic_covid19
moha
2021-04-20T06:15:12Z
0
0
null
[ "ar", "arxiv:2004.04315", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: ar widget: - text: "للوقايه من عدم انتشار [MASK]" --- # arabert_c19: An Arabert model pretrained on 1.5 million COVID-19 multi-dialect Arabic tweets **ARABERT COVID-19** is a pretrained (fine-tuned) version of the AraBERT v2 model (https://huggingface.co/aubmindlab/bert-base-arabertv02). The pretraining was done using 1.5 million multi-dialect Arabic tweets regarding the COVID-19 pandemic from the “Large Arabic Twitter Dataset on COVID-19” (https://arxiv.org/abs/2004.04315). The model can achieve better results for the tasks that deal with multi-dialect Arabic tweets in relation to the COVID-19 pandemic. # Classification results for multiple tasks including fake-news and hate speech detection when using arabert_c19 and mbert_ar_c19: For more details refer to the paper (link) | | arabert | mbert | distilbert multi | arabert Covid-19 | mbert Covid-19 | |------------------------------------|----------|----------|------------------|------------------|----------------| | Contains hate (Binary) | 0.8346 | 0.6675 | 0.7145 | `0.8649` | 0.8492 | | Talk about a cure (Binary) | 0.8193 | 0.7406 | 0.7127 | 0.9055 | `0.9176` | | News or opinion (Binary) | 0.8987 | 0.8332 | 0.8099 | `0.9163` | 0.9116 | | Contains fake information (Binary) | 0.6415 | 0.5428 | 0.4743 | `0.7739` | 0.7228 | # Preprocessing ```python from arabert.preprocess import ArabertPreprocessor model_name="moha/arabert_c19" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "للوقايه من عدم انتشار كورونا عليك اولا غسل اليدين بالماء والصابون وتكون عملية الغسل دقيقه تشمل راحة اليد الأصابع التركيز على الإبهام" arabert_prep.preprocess(text) ``` # Contacts **Hadj Ameur**: [Github](https://github.com/MohamedHadjAmeur) | <mohamedhadjameur@gmail.com> | <mhadjameur@cerist.dz>
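The card shows preprocessing but not inference, so a hedged fill-mask sketch follows; the repo id below is taken from this model page (the card itself also calls the checkpoint `moha/arabert_c19`), and ideally the input is run through `ArabertPreprocessor` first, as shown above.

```python
# Hedged fill-mask sketch; ideally run the text through ArabertPreprocessor
# before passing it to the pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="moha/arabert_arabic_covid19")
for prediction in fill_mask("للوقايه من عدم انتشار [MASK]"):
    print(prediction["token_str"], prediction["score"])
```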
Pollawat/mt5-small-thai-qa-qg
Pollawat
2021-04-19T14:52:22Z
38
4
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "question-generation", "question-answering", "dataset:NSC2018", "dataset:iapp-wiki-qa-dataset", "dataset:XQuAD", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- tags: - question-generation - question-answering language: - thai - th datasets: - NSC2018 - iapp-wiki-qa-dataset - XQuAD license: mit --- [Google's mT5](https://github.com/google-research/multilingual-t5) This is a model for generating questions from Thai texts. It was fine-tuned on NSC2018 corpus ```python from transformers import MT5Tokenizer, MT5ForConditionalGeneration tokenizer = MT5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qa-qg") model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qa-qg") text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน" input_ids = tokenizer.encode(text, return_tensors='pt') beam_output = model.generate( input_ids, max_length=50, num_beams=5, early_stopping=True ) print(tokenizer.decode(beam_output[0])) >> <pad> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด <ANS> ฝั่งพระนครและฝั่งธนบุรี</s> print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) >> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด ฝั่งพระนครและฝั่งธนบุรี ```
shivam/mbart-large-50-finetuned-en-mr
shivam
2021-04-18T10:19:52Z
4
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- Language Pair Finetuned: - en-mr Metrics: - sacrebleu - WAT 2021: 16.11 # mbart-large-finetuned-en-mr ## Model Description This is the mbart-large-50 model finetuned on En-Mr corpus. ## Intended uses and limitations Mostly useful for English to Marathi translation but the mbart-large-50 model also supports other language pairs ### How to use ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast model = MBartForConditionalGeneration.from_pretrained("shivam/mbart-large-50-finetuned-en-mr") tokenizer = MBart50TokenizerFast.from_pretrained("shivam/mbart-large-50-finetuned-en-mr", src_lang="en_XX", tgt_lang="mr_IN") english_input_sentence = "The Prime Minister said that cleanliness, or Swachhta, is one of the most important aspects of preventive healthcare." model_inputs = tokenizer(english_input_sentence, return_tensors="pt") generated_tokens = model.generate( **model_inputs, forced_bos_token_id=tokenizer.lang_code_to_id["mr_IN"] ) marathi_output_sentence = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(marathi_output_sentence) #स्वच्छता हा प्रतिबंधात्मक आरोग्य सेवेतील सर्वात महत्त्वाचा पैलू आहे, असे पंतप्रधान म्हणाले. ``` #### Limitations The model was trained on Google Colab and as the training takes a lot of time the model was trained for small time and small number of epochs. ## Eval results WAT 2021: 16.11
molly-hayward/bioelectra-base-discriminator
molly-hayward
2021-04-17T16:59:46Z
2
0
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.

How to use the discriminator in transformers:

```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("molly-hayward/bioelectra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-discriminator")
```
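A short sketch of what the pretraining head returns may also help; this follows the generic ELECTRA replaced-token-detection pattern, and the example sentence is only an illustration, not something provided by the BioELECTRA authors.

```python
# Illustrative replaced-token detection with the discriminator head; the input
# sentence is an arbitrary biomedical-style example (not from the authors).
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("molly-hayward/bioelectra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-discriminator")

inputs = tokenizer("aspirin reduces the risk of myocardial infarction", return_tensors="pt")
with torch.no_grad():
    logits = discriminator(**inputs).logits
# a positive logit means the model thinks the token was replaced by a generator
flags = (logits > 0).long()[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
print(list(zip(tokens, flags)))
```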
molly-hayward/bioelectra-base-generator
molly-hayward
2021-04-17T16:59:28Z
2
1
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.

How to use the generator in transformers:

```python
from transformers import ElectraForMaskedLM, ElectraTokenizerFast
import torch

generator = ElectraForMaskedLM.from_pretrained("molly-hayward/bioelectra-base-generator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-generator")
```
nateraw/resnet50
nateraw
2021-04-15T23:19:34Z
71
0
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "dataset:imagenet", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
tags:
- image-classification
- pytorch
datasets:
- imagenet
---

# Resnet50 Model from Torchvision

## Using the model

```bash
pip install modelz
```

```python
import torch
from modelz import ResnetModel

model = ResnetModel.from_pretrained('nateraw/resnet50')
ex_input = torch.rand(4, 3, 224, 224)
out = model(ex_input)
```
mudes/multilingual-large
mudes
2021-04-15T22:36:53Z
6
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# MUDES - Multilingual Detection of Offensive Spans

We provide state-of-the-art models to detect toxic spans in text. We have evaluated our models on the Toxic Spans task at SemEval 2021 (Task 5).

## Usage
You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed:

```bash
pip install mudes
```

Then you can use the model like this:

```python
from mudes.app.mudes_app import MUDESApp

app = MUDESApp("multilingual-large", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```

## System Demonstration
An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be tried out [here](http://rgcl.wlv.ac.uk/mudes/).

## Citing & Authors
If you find this model helpful, feel free to cite our publications:

```bibtex
@inproceedings{ranasinghemudes,
 title={{MUDES: Multilingual Detection of Offensive Spans}},
 author={Tharindu Ranasinghe and Marcos Zampieri},
 booktitle={Proceedings of NAACL},
 year={2021}
}
```

```bibtex
@inproceedings{ranasinghe2021semeval,
 title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
 author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
 booktitle={Proceedings of SemEval},
 year={2021}
}
```
soheeyang/rdr-question_encoder-single-trivia-base
soheeyang
2021-04-15T15:59:29Z
5
0
transformers
[ "transformers", "pytorch", "tf", "dpr", "feature-extraction", "arxiv:2010.10999", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# rdr-question_encoder-single-trivia-base

Reader-Distilled Retriever (`RDR`)

Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020

The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a DPR retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k.

This model is the question encoder of RDR trained solely on TriviaQA (single-trivia). This model is trained by the authors and is the official checkpoint of RDR.

## Performance

The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0. For the values of DPR, those in parentheses are directly taken from the paper. The values without parentheses are reported using the reproduction of DPR that consists of [this question encoder](https://huggingface.co/soheeyang/dpr-question_encoder-single-trivia-base) and the corresponding context encoder.

|             | Top-K Passages   | 1         | 5         | 20        | 50        | 100       |
|-------------|------------------|-----------|-----------|-----------|-----------|-----------|
|**TriviaQA Dev**  | **DPR**              | 54.27     | 71.11     | 79.53     | 82.72     | 85.07     |
|             | **RDR (This Model)** | **61.84** | **75.93** | **82.56** | **85.35** | **87.00** |
|**TriviaQA Test** | **DPR**              | 54.41     | 70.99     | 79.31 (79.4) | 82.90  | 84.99 (85.0) |
|             | **RDR (This Model)** | **62.56** | **75.92** | **82.52** | **85.64** | **87.26** |

## How to Use

RDR shares the same architecture with DPR. Therefore, it uses `DPRQuestionEncoder` as the model class. `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`, so please specify the exact class to use the model.

```python
from transformers import DPRQuestionEncoder, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-question_encoder-single-trivia-base")
question_encoder = DPRQuestionEncoder.from_pretrained("soheeyang/rdr-question_encoder-single-trivia-base")

data = tokenizer("question comes here", return_tensors="pt")
question_embedding = question_encoder(**data).pooler_output  # embedding vector for question
```
soheeyang/rdr-question_encoder-single-nq-base
soheeyang
2021-04-15T15:58:07Z
1,028
1
transformers
[ "transformers", "pytorch", "tf", "dpr", "feature-extraction", "arxiv:2010.10999", "arxiv:2004.04906", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# rdr-question_encoder-single-nq-base

Reader-Distilled Retriever (`RDR`)

Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020

The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a [DPR](https://arxiv.org/abs/2004.04906) retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k.

This model is the question encoder of RDR trained solely on Natural Questions (NQ) (single-nq). This model is trained by the authors and is the official checkpoint of RDR.

## Performance

The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0. The values of DPR on the NQ dev set are taken from Table 1 of the [paper of RDR](https://arxiv.org/abs/2010.10999). The values of DPR on the NQ test set are taken from the [codebase of DPR](https://github.com/facebookresearch/DPR). DPR-adv-hn is a new DPR model released in March 2021. It is trained on the original DPR NQ train set and a version of it in which hard negatives are mined with the DPR index built from the previous NQ checkpoint. Please refer to the [codebase of DPR](https://github.com/facebookresearch/DPR) for more details about DPR-adv-hn.

|         | Top-K Passages   | 1     | 5     | 20    | 50    | 100   |
|---------|------------------|-------|-------|-------|-------|-------|
| **NQ Dev**  | **DPR**              | 44.2  | -     | 76.9  | 81.3  | 84.2  |
|         | **RDR (This Model)** | **54.43** | **72.17** | **81.33** | **84.8** | **86.61** |
| **NQ Test** | **DPR**              | 45.87 | 68.14 | 79.97 | -     | 85.87 |
|         | **DPR-adv-hn**       | 52.47 | **72.24** | 81.33 | -     | 87.29 |
|         | **RDR (This Model)** | **54.29** | 72.16 | **82.8** | **86.34** | **88.2** |

## How to Use

RDR shares the same architecture with DPR. Therefore, it uses `DPRQuestionEncoder` as the model class. `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`, so please specify the exact class to use the model.

```python
from transformers import DPRQuestionEncoder, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-question_encoder-single-nq-base")
question_encoder = DPRQuestionEncoder.from_pretrained("soheeyang/rdr-question_encoder-single-nq-base")

data = tokenizer("question comes here", return_tensors="pt")
question_embedding = question_encoder(**data).pooler_output  # embedding vector for question
```
sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco
sebastian-hofstaetter
2021-04-15T08:54:28Z
6,150
23
transformers
[ "transformers", "pytorch", "distilbert", "feature-extraction", "dpr", "dense-passage-retrieval", "knowledge-distillation", "en", "dataset:ms_marco", "arxiv:2104.06967", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: "en" tags: - dpr - dense-passage-retrieval - knowledge-distillation datasets: - ms_marco --- # DistilBert for Dense Passage Retrieval trained with Balanced Topic Aware Sampling (TAS-B) We provide a retrieval trained DistilBert-based model (we call the *dual-encoder then dot-product scoring* architecture BERT_Dot) trained with Balanced Topic Aware Sampling on MSMARCO-Passage. This instance was trained with a batch size of 256 and can be used to **re-rank a candidate set** or **directly for a vector index based dense retrieval**. The architecture is a 6-layer DistilBERT, without architecture additions or modifications (we only change the weights during training) - to receive a query/passage representation we pool the CLS vector. We use the same BERT layers for both query and passage encoding (yields better results, and lowers memory requirements). If you want to know more about our efficient (can be done on a single consumer GPU in 48 hours) batch composition procedure and dual supervision for dense retrieval training, check out our paper: https://arxiv.org/abs/2104.06967 🎉 For more information and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/tas-balanced-dense-retrieval ## Effectiveness on MSMARCO Passage & TREC-DL'19 We trained our model on the MSMARCO standard ("small"-400K query) training triples re-sampled with our TAS-B method. As teacher models we used the BERT_CAT pairwise scores as well as the ColBERT model for in-batch-negative signals published here: https://github.com/sebastian-hofstaetter/neural-ranking-kd ### MSMARCO-DEV (7K) | | MRR@10 | NDCG@10 | Recall@1K | |----------------------------------|--------|---------|-----------------------------| | BM25 | .194 | .241 | .857 | | **TAS-B BERT_Dot** (Retrieval) | .347 | .410 | .978 | ### TREC-DL'19 For MRR and Recall we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers. | | MRR@10 | NDCG@10 | Recall@1K | |----------------------------------|--------|---------|-----------------------------| | BM25 | .689 | .501 | .739 | | **TAS-B BERT_Dot** (Retrieval) | .883 | .717 | .843 | ### TREC-DL'20 For MRR and Recall we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers. | | MRR@10 | NDCG@10 | Recall@1K | |----------------------------------|--------|---------|-----------------------------| | BM25 | .649 | .475 | .806 | | **TAS-B BERT_Dot** (Retrieval) | .843 | .686 | .875 | For more baselines, info and analysis, please see the paper: https://arxiv.org/abs/2104.06967 ## Limitations & Bias - The model inherits social biases from both DistilBERT and MSMARCO. - The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text. ## Citation If you use our model checkpoint please cite our work as: ``` @inproceedings{Hofstaetter2021_tasb_dense_retrieval, author = {Sebastian Hofst{\"a}tter and Sheng-Chieh Lin and Jheng-Hong Yang and Jimmy Lin and Allan Hanbury}, title = {{Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling}}, booktitle = {Proc. of SIGIR}, year = {2021}, } ```
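Since the card only links to an external usage example, here is a minimal sketch of the dual-encoder dot-product scoring described above (shared encoder, CLS-vector pooling); treat it as an illustration under those stated assumptions rather than the reference implementation from the linked repository.

```python
# Sketch of the BERT_Dot scoring described above: shared DistilBERT encoder,
# CLS-vector pooling, dot product as the relevance score.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    return hidden[:, 0, :]  # CLS vector, as described in the card

query_vec = encode(["what causes rain"])
passage_vecs = encode(["Rain forms when water vapour condenses into droplets.",
                       "The stock market closed higher today."])
scores = query_vec @ passage_vecs.T  # dot-product relevance scores
print(scores)
```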
dbmdz/flair-clef-hipe-german-base
dbmdz
2021-04-09T13:00:18Z
15
1
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "arxiv:2011.06993", "arxiv:2010.10392", "license:mit", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - flair - token-classification - sequence-tagger-model language: de widget: - text: "Herr Oberst Brunner ist nämlich Hauptagent für den Kanton Zürich." license: mit --- # Triple E - Effective Ensembling of Embeddings and Language Models for NER of Historical German Based on [our paper](http://ceur-ws.org/Vol-2696/paper_173.pdf) we release a new baseline model for the German [CLEF-HIPE shared task](https://impresso.github.io/CLEF-HIPE-2020/). In contrast to the models used in the paper, we manually sentence-segmented and normalize hyphenations and trained a NER model using the German Europeana BERT model. Additionally, we perform experiments with different context sizes. This approach is described in more detail in [this paper](https://arxiv.org/abs/2011.06993). # Results The results with different context sizes can be seen in the following table: | Model | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | -------------------------- | --------------- | --------------- | --------------- | ------------------- | --------------- | --------------- | German Europeana BERT | (81.45) / 76.92 | (**81.53**) / 77.03 | (80.49) / 77.83 | (80.88) / 77.19 | (81.39) / 77.00 | (81.15 ± 0.45) / 77.19 ± 0.34 | German Europeana BERT (16) | (**82.56**) / 77.38 | (81.19) / 77.76 | (80.99) / 76.34 | (81.27) / 77.70 | (81.28) / 77.22 | (81.46 ± 0.63) / 77.28 ± 0.57 | German Europeana BERT (32) | (**82.04**) / 78.50 | (81.14) / 76.56 | (81.81) / 78.28 | (81.50) / 76.90 | (81.64) / 77.94 | (81.63 ± 0.34) / 77.64 ± 0.86 | German Europeana BERT (64) | (81.21) / 78.39 | (81.27) / 75.98 | (**81.88**) / 78.40 | (81.66) / 77.35 | (81.29) / 76.70 | (81.46 ± 0.29) / 77.36 ± 1.06 | German Europeana BERT (80) | (82.13) / 77.77 | (81.31) / 76.81 | (82.09) / 78.69 | (**82.30**) / 76.79 | (80.65) / 77.10 | (81.70 ± 0.70) / 77.43 ± 0.81 For model upload, we choose the best model on development score: 82.56 with a context length of 16. ## Comparisons The following figure shows the results with different context sized (on development dataset): ![German CLEF-HIPE Development Results](figures/clef_hipe_f1_score_development.png) We perform "Almost Stochastic Order" tests as proposed in the ["Deep Dominance - How to Properly Compare Deep Neural Models"](https://www.aclweb.org/anthology/P19-1266/) paper. The heatmap figure is heavily inspired by the ["CharacterBERT"](https://arxiv.org/abs/2010.10392) paper. ![Almost Stochastic Order Tests on Development set](figures/clef_hipe_asd_development.png)
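Since this is a Flair sequence tagger, a minimal usage sketch may be helpful; it assumes the checkpoint loads directly via `SequenceTagger.load` and uses the example sentence from the card's widget.

```python
# Hedged usage sketch with the flair library; the example sentence is the one
# given in the model card's widget, and the tag type is assumed to be "ner".
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("dbmdz/flair-clef-hipe-german-base")
sentence = Sentence("Herr Oberst Brunner ist nämlich Hauptagent für den Kanton Zürich.")
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```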
vasilis/wav2vec2-large-xlsr-53-swedish
vasilis
2021-04-09T12:23:23Z
4
1
transformers
[ "transformers", "pytorch", "wav2vec2", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: sv-SE datasets: - common_voice - NST Swedish ASR Database metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: V XLSR Wav2Vec2 Large 53 - Swedish results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice sv-SE type: common_voice args: sv-SE metrics: - name: Test WER type: wer value: 14.695793 - name: Test CER type: cer value: 5.264666 --- # Wav2Vec2-Large-XLSR-53-Swedish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) and parts for the [NST Swedish ASR Database](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-16/). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site. processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic` model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic` resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Swedish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "sv-SE", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish") model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish") model.to("cuda") chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']" # TODO: adapt this list to include all special characters you removed from the data resampler = { 48_000: torchaudio.transforms.Resample(48_000, 16_000), 44100: torchaudio.transforms.Resample(44100, 16_000), 32000: torchaudio.transforms.Resample(32000, 16_000) } # Preprocessing the datasets. 
# We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]]))) ``` **Test Result**: 14.695793 % ## Training As first step used Common Voice train dataset and parts from NST as can be found [here](https://github.com/se-asr/nst/tree/master). Part of NST where removed using this mask ```python mask = [(5 < len(x.split()) < 20) and np.average([len(entry) for entry in x.split()]) > 5 for x in dataset['transcript'].tolist()] ``` After training like this for 20000 steps the model was finetuned on all of nst data using the mask ```python mask = [(1 < len(x.split()) < 25) and np.average([len(entry) for entry in x.split()]) > 3 for x in dataset['transcript'].tolist()] ``` and all of common voice for 100000 more steps approximately 16 epochs.
Aurora/community.afpglobal
Aurora
2021-04-08T08:34:53Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
https://community.afpglobal.org/network/members/profile?UserKey=b0b38adc-86c7-4d30-85c6-ac7d15c5eeb0 https://community.afpglobal.org/network/members/profile?UserKey=f4ddef89-b508-4695-9d1e-3d4d1a583279 https://community.afpglobal.org/network/members/profile?UserKey=36081479-5e7b-41ba-8370-ecf72989107a https://community.afpglobal.org/network/members/profile?UserKey=e1a88332-be7f-4997-af4e-9fcb7bb366da https://community.afpglobal.org/network/members/profile?UserKey=4738b405-2017-4025-9e5f-eadbf7674840 https://community.afpglobal.org/network/members/profile?UserKey=eb96d91c-31ae-46e1-8297-a3c8551f2e6a https://u.mpi.org/network/members/profile?UserKey=9867e2d9-d22a-4dab-8bcf-3da5c2f30745 https://u.mpi.org/network/members/profile?UserKey=5af232f2-a66e-438f-a5ab-9768321f791d https://community.afpglobal.org/network/members/profile?UserKey=481305df-48ea-4c50-bca4-a82008efb427 https://u.mpi.org/network/members/profile?UserKey=039fbb91-52c6-40aa-b58d-432fb4081e32 https://www.geogebra.org/m/jkfkayj3 https://www.geogebra.org/m/hptnn7ar https://www.geogebra.org/m/de9cwmrf https://www.geogebra.org/m/yjc5hdep https://www.geogebra.org/m/nm8r56w5 https://www.geogebra.org/m/j7wfcpxj https://www.geogebra.org/m/bbuczchu https://www.geogebra.org/m/xwyasqje https://www.geogebra.org/m/mx2cqkwr https://www.geogebra.org/m/tkqqqthm https://www.geogebra.org/m/asdaf9mj https://www.geogebra.org/m/ywuaj7p5
vaishnavi/indic-bert-512
vaishnavi
2021-04-08T06:38:32Z
5
0
transformers
[ "transformers", "pytorch", "albert", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: mit datasets: - AI4Bharat IndicNLP Corpora --- # IndicBERT IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages. It is pre-trained on our novel monolingual corpus of around 9 billion tokens and subsequently evaluated on a set of diverse tasks. IndicBERT has much fewer parameters than other multilingual models (mBERT, XLM-R etc.) while it also achieves a performance on-par or better than these models. The 12 languages covered by IndicBERT are: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu. The code can be found [here](https://github.com/divkakwani/indic-bert). For more information, checkout our [project page](https://indicnlp.ai4bharat.org/) or our [paper](https://indicnlp.ai4bharat.org/papers/arxiv2020_indicnlp_corpus.pdf). ## Pretraining Corpus We pre-trained indic-bert on AI4Bharat's monolingual corpus. The corpus has the following distribution of languages: | Language | as | bn | en | gu | hi | kn | | | ----------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------- | | **No. of Tokens** | 36.9M | 815M | 1.34B | 724M | 1.84B | 712M | | | **Language** | **ml** | **mr** | **or** | **pa** | **ta** | **te** | **all** | | **No. of Tokens** | 767M | 560M | 104M | 814M | 549M | 671M | 8.9B | ## Evaluation Results IndicBERT is evaluated on IndicGLUE and some additional tasks. The results are summarized below. For more details about the tasks, refer our [official repo](https://github.com/divkakwani/indic-bert) #### IndicGLUE Task | mBERT | XLM-R | IndicBERT -----| ----- | ----- | ------ News Article Headline Prediction | 89.58 | 95.52 | **95.87** Wikipedia Section Title Prediction| **73.66** | 66.33 | 73.31 Cloze-style multiple-choice QA | 39.16 | 27.98 | **41.87** Article Genre Classification | 90.63 | 97.03 | **97.34** Named Entity Recognition (F1-score) | **73.24** | 65.93 | 64.47 Cross-Lingual Sentence Retrieval Task | 21.46 | 13.74 | **27.12** Average | 64.62 | 61.09 | **66.66** #### Additional Tasks Task | Task Type | mBERT | XLM-R | IndicBERT -----| ----- | ----- | ------ | ----- BBC News Classification | Genre Classification | 60.55 | **75.52** | 74.60 IIT Product Reviews | Sentiment Analysis | 74.57 | **78.97** | 71.32 IITP Movie Reviews | Sentiment Analaysis | 56.77 | **61.61** | 59.03 Soham News Article | Genre Classification | 80.23 | **87.6** | 78.45 Midas Discourse | Discourse Analysis | 71.20 | **79.94** | 78.44 iNLTK Headlines Classification | Genre Classification | 87.95 | 93.38 | **94.52** ACTSA Sentiment Analysis | Sentiment Analysis | 48.53 | 59.33 | **61.18** Winograd NLI | Natural Language Inference | 56.34 | 55.87 | **56.34** Choice of Plausible Alternative (COPA) | Natural Language Inference | 54.92 | 51.13 | **58.33** Amrita Exact Paraphrase | Paraphrase Detection | **93.81** | 93.02 | 93.75 Amrita Rough Paraphrase | Paraphrase Detection | 83.38 | 82.20 | **84.33** Average | | 69.84 | **74.42** | 73.66 \* Note: all models have been restricted to a max_seq_length of 128. ## Downloads The model can be downloaded [here](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/models/indic-bert-v1.tar.gz). Both tf checkpoints and pytorch binaries are included in the archive. Alternatively, you can also download it from [Huggingface](https://huggingface.co/ai4bharat/indic-bert). 
## Citing If you are using any of the resources, please cite the following article: ``` @inproceedings{kakwani2020indicnlpsuite, title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}}, author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar}, year={2020}, booktitle={Findings of EMNLP}, } ``` We would like to hear from you if: - You are using our resources. Please let us know how you are putting these resources to use. - You have any feedback on these resources. ## License The IndicBERT code (and models) are released under the MIT License. ## Contributors - Divyanshu Kakwani - Anoop Kunchukuttan - Gokul NC - Satish Golla - Avik Bhattacharyya - Mitesh Khapra - Pratyush Kumar This work is the outcome of a volunteer effort as part of [AI4Bharat initiative](https://ai4bharat.org). ## Contact - Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](mailto:anoop.kunchukuttan@gmail.com)) - Mitesh Khapra ([miteshk@cse.iitm.ac.in](mailto:miteshk@cse.iitm.ac.in)) - Pratyush Kumar ([pratyush@cse.iitm.ac.in](mailto:pratyush@cse.iitm.ac.in))
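A minimal feature-extraction sketch for this checkpoint, assuming it loads like the original ALBERT-based IndicBERT release (the `sentencepiece` package is needed for the tokenizer); the Hindi sentence is only an illustration.

```python
# Hedged feature-extraction sketch for this ALBERT-based checkpoint
# (requires the sentencepiece package for the tokenizer).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("vaishnavi/indic-bert-512")
model = AutoModel.from_pretrained("vaishnavi/indic-bert-512")

inputs = tokenizer("भारत एक विशाल देश है", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```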
valhalla/gpt-neo-random-tiny
valhalla
2021-04-07T16:38:40Z
7,210
0
transformers
[ "transformers", "pytorch", "gpt_neo", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
**This model is uploaded for testing purposes. It is a random model, not trained on anything.**
MalawiUniST/ISO6392.nya.ny
MalawiUniST
2021-04-07T14:30:00Z
6
0
transformers
[ "transformers", "pytorch", "longformer", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
This Longformer model was trained on a Nyanja dataset.
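Given the fill-mask pipeline tag, a hedged usage sketch follows; the Nyanja example text and the assumption that the repository ships a compatible tokenizer are illustrative additions, not from the model author.

```python
# Hedged fill-mask sketch; the Nyanja example text is only an illustration.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="MalawiUniST/ISO6392.nya.ny")
mask = fill_mask.tokenizer.mask_token  # Longformer models typically use "<mask>"
for prediction in fill_mask(f"Moni dziko {mask}"):
    print(prediction["token_str"], prediction["score"])
```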
vasudevgupta/offnote-mbart-adapters-bhasha
vasudevgupta
2021-04-07T13:53:17Z
4
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
**Project GitHub:** https://github.com/vasudevgupta7/transformers-adapters

**Notes**

* The base model can be downloaded from `facebook/mbart-large-cc25`
* `adapters-hin-eng.pt`: adapters for Hindi→English
* `adapters-guj-eng.pt`: adapters for Gujarati→English
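Because applying these adapters requires the project's adapter-enabled fork of transformers (linked above), the sketch below only shows how one might download and inspect an adapter checkpoint; the repository and file names are taken from this page, and the checkpoint is assumed to be a plain PyTorch state dict.

```python
# Hedged sketch: download one adapter checkpoint and inspect it; actually
# plugging the weights into mBART requires the project's adapter-enabled fork.
import torch
from huggingface_hub import hf_hub_download

adapter_path = hf_hub_download("vasudevgupta/offnote-mbart-adapters-bhasha", "adapters-hin-eng.pt")
state_dict = torch.load(adapter_path, map_location="cpu")
print(list(state_dict.keys())[:10])  # peek at the adapter parameter names
```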
tyoc213/wav2vec2-large-xlsr-nahuatl
tyoc213
2021-04-07T02:59:04Z
7
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: nah specifically ncj datasets: - created a new dataset based on https://www.openslr.org/92/ metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Nahuatl XLSR Wav2Vec 53 results: - task: name: Speech Recognition type: automatic-speech-recognition metrics: - name: Test WER type: wer value: 69.11 --- # Wav2Vec2-Large-XLSR-53-ncj/nah Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Nahuatl specifically of the Nort of Puebla (ncj) using a derivate of [SLR92](https://www.openslr.org/92/), and some samples of `es` and `de` datasets from [Common Voice](https://huggingface.co/datasets/common_voice). ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") # TODO: publish nahuatl_slr92_by_sentence processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl") model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Nahuatl specifically of the Nort of Puebla (ncj) test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "{lang_id}", split="test") # TODO: publish nahuatl_slr92_by_sentence wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl") model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\"\“\%\‘\”\�\(\)\-]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 50.95 % ## Training A derivate of [SLR92](https://www.openslr.org/92/) to be published soon.And some samples of `es` and `de` datasets from [Common Voice](https://huggingface.co/datasets/common_voice) The script used for training can be found [less60wer.ipynb](./less60wer.ipynb)
navteca/roberta-large-squad2
navteca
2021-04-06T16:31:09Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "en", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
datasets:
- squad_v2
language: en
license: mit
pipeline_tag: question-answering
tags:
- roberta
- question-answering
---

# Roberta large model for QA (SQuAD 2.0)

This model uses [roberta-large](https://huggingface.co/roberta-large).

## Training Data
The model has been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. It can be used for the question answering task.

## Usage and Performance
The trained model can be used like this:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Load model & tokenizer
roberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-large-squad2')
roberta_tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-large-squad2')

# Get predictions
nlp = pipeline('question-answering', model=roberta_model, tokenizer=roberta_tokenizer)

result = nlp({
    'question': 'How many people live in Berlin?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})

print(result)
#{
#  "answer": "3,520,031"
#  "end": 36,
#  "score": 0.96186668,
#  "start": 27,
#}
```
seduerr/t5-small-pytorch
seduerr
2021-04-06T04:48:50Z
273
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr - ro - de datasets: - c4 tags: - summarization - translation license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Other Community Checkpoints: [here](https://huggingface.co/models?search=t5) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Abstract Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
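As a concrete illustration of the text-to-text format described above, here is a minimal sketch using this checkpoint with a standard T5 task prefix; it assumes the checkpoint behaves like the original t5-small export it is based on.

```python
# Hedged example of the text-to-text format with this checkpoint, assuming it
# behaves like the original t5-small.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("seduerr/t5-small-pytorch")
model = AutoModelForSeq2SeqLM.from_pretrained("seduerr/t5-small-pytorch")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```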
mkrigba/FreeTextSIG
mkrigba
2021-04-02T21:32:16Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
Frequency Distribution of Free Text SIGs from medication orders in Allscripts
yluisfern/FDR
yluisfern
2021-04-02T16:40:25Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://www.geogebra.org/m/cwcveget https://www.geogebra.org/m/b8dzxk6z https://www.geogebra.org/m/nqanttum https://www.geogebra.org/m/pd3g8a4u https://www.geogebra.org/m/jw8324jz https://www.geogebra.org/m/wjbpvz5q https://www.geogebra.org/m/qm3g3ma6 https://www.geogebra.org/m/sdajgph8 https://www.geogebra.org/m/e3ghhcbf https://www.geogebra.org/m/msne4bfm https://www.geogebra.org/m/nmcv2te5 https://www.geogebra.org/m/hguqx6cn https://www.geogebra.org/m/jnyvpgqu https://www.geogebra.org/m/syctd97g https://www.geogebra.org/m/nq9erdby https://www.geogebra.org/m/au4har8c https://network.aza.org/network/members/profile?UserKey=811de229-7f08-4360-863c-ac04181ba9c0 https://network.aza.org/network/members/profile?UserKey=31b495a0-36f7-4a50-ba3e-d76e3487278c https://network.aza.org/network/members/profile?UserKey=753c0ddd-bded-4b03-8c68-11dacdd1f676 https://network.aza.org/network/members/profile?UserKey=db9d0a25-1615-4e39-b61f-ad68766095b3 https://network.aza.org/network/members/profile?UserKey=59279f52-50cf-4686-9fb0-9ab613211ead https://network.aza.org/network/members/profile?UserKey=67b3ce20-cc3a-420f-8933-10796f301060 https://network.aza.org/network/members/profile?UserKey=f5e610c3-6400-4429-b42b-97eeeeb284a9 https://network.aza.org/network/members/profile?UserKey=ccda0739-f5f5-4ecc-a729-77c9a6825897 https://network.aza.org/network/members/profile?UserKey=3983471f-cf43-4a4a-90d3-148040f92dd9 https://network.aza.org/network/members/profile?UserKey=9f16d7a8-3502-4904-a99a-38362de78973 https://network.aza.org/network/members/profile?UserKey=961981d5-9743-44ac-8525-d4c8b708eb5a https://network.aza.org/network/members/profile?UserKey=178276d7-c64d-408e-af52-96d1ebd549fc
ozcangundes/wav2vec2-large-xlsr-53-turkish
ozcangundes
2021-04-02T14:54:49Z
25
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tr", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - tr datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Ozcan Gundes XLSR Wav2Vec2 Large Turkish results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice tr type: common_voice args: tr metrics: - name: Test WER type: wer value: 29.62 --- # Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): \\tspeech_array, sampling_rate = torchaudio.load(batch["path"]) \\tbatch["speech"] = resampler(speech_array).squeeze().numpy() \\treturn batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): \\tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\’\\']' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 29.62 % ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/1hesw9z_kFFINT93jBvGuFspOLrHx10AE?usp=sharing)
mami/malingkundonagn
mami
2021-04-02T13:24:01Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
ERROR: type should be string, got "\thttps://zambiainc.com/advert/full-watchnow-nobody-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-raya-and-the-last-dragon-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-chaos-walking-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-the-courier-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-the-croods-a-new-age-watch-2020-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-the-marksman-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-boogie-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-minari-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-promising-young-woman-watch-2020-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-monster-hunter-watch-2020-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-nomadland-watch-2020-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-the-war-with-grandpa-watch-2020-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-news-of-the-world-watch-2020-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-six-minutes-to-midnight-watch-2020-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-dutch-watch-2020-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-lamb-of-god-the-concert-film-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-long-weekend-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-mystery-of-the-kingdom-of-god-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-the-mauritanian-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-dark-state-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-zack-snyders-justice-league-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-godzilla-vs-kong-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-bad-trip-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-tom-jerry-watch-2021-movie-online-stream-free/\n\thttps://zambiainc.com/advert/full-watchnow-skylines-watch-2020-movie-online-stream-free/\nhttps://zambiainc.com/advert/full-watchnow-the-little-things-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-space-sweepers-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-sentinelle-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-the-unholy-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-mortal-kombat-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-assault-on-va-33-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-vanquish-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-voyagers-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-stowaway-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-thunder-force-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-in-search-of-tomorrow-watch-2021-movie-online-strea
m-free/\t\nhttps://zambiainc.com/advert/full-watchnow-arlo-the-alligator-boy-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-the-nameless-days-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-the-banishing-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-fatherhood-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-bananza-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-bonhoeffer-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-held-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-dawn-of-the-beast-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-00k9-no-time-to-shed-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-between-us-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-the-believer-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-limbo-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-things-heard-seen-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-free-byrd-watch-2021-movie-online-stream-free/\t\nhttps://zambiainc.com/advert/full-watchnow-the-workplace-watch-2021-movie-online-stream-free/\t\n"
not-tanh/wav2vec2-large-xlsr-53-vietnamese
not-tanh
2021-04-02T10:59:16Z
8
3
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "vi", "dataset:common_voice", "dataset:vivos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: vi datasets: - common_voice - vivos metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Ted Vietnamese XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice vi type: common_voice args: vi metrics: - name: Test WER type: wer value: 39.571823 --- # Wav2Vec2-Large-XLSR-53-Vietnamese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice), [Vivos dataset](https://ailab.hcmus.edu.vn/vivos) and [FOSD dataset](https://data.mendeley.com/datasets/k9sxg2twv4/4). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "vi", split="test") processor = Wav2Vec2Processor.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese") model = Wav2Vec2ForCTC.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Vietnamese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "vi", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese") model = Wav2Vec2ForCTC.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese") model.to("cuda") chars_to_ignore_regex = r'[,?.!\-;:"“%\'�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 39.571823% ## Training The Common Voice `train` and `validation` splits, together with the VIVOS and FOSD datasets, were used for training. The script used for training can be found ... (link to be added). A rough sketch of how such corpora can be combined is shown below.
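The snippet below is only an illustrative sketch of merging several speech corpora into one training set with `datasets.concatenate_datasets`; the local VIVOS/FOSD manifest files and their column names are placeholders, not the author's actual preprocessing.
```python
# Hedged sketch: merging Common Voice with locally prepared corpora.
# "vivos_train.tsv" / "fosd_train.tsv" are placeholder manifests with "path" and "sentence" columns.
import pandas as pd
from datasets import Dataset, concatenate_datasets, load_dataset

common_voice = load_dataset("common_voice", "vi", split="train+validation")
common_voice = common_voice.remove_columns(
    [c for c in common_voice.column_names if c not in ("path", "sentence")]
)

def load_local_corpus(tsv_path):
    # Each manifest row: audio file path + reference transcription.
    df = pd.read_csv(tsv_path, sep="\t")
    return Dataset.from_pandas(df[["path", "sentence"]])

vivos = load_local_corpus("vivos_train.tsv")  # placeholder path
fosd = load_local_corpus("fosd_train.tsv")    # placeholder path

train_dataset = concatenate_datasets([common_voice, vivos, fosd])
print(train_dataset)
```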
qqpann/w2v_hf_jsut_xlsr53
qqpann
2021-04-01T14:49:39Z
20
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ja", "dataset:common_voice", "dataset:jsut", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ja datasets: - common_voice - jsut metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Japanese XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ja type: common_voice args: ja metrics: - name: Test WER type: wer value: 51.72 - name: Test CER type: cer value: 24.89 --- # Wav2Vec2-Large-XLSR-53-Japanese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), and JSUT dataset{s}. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ja", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("qqhann/w2v_hf_jsut_xlsr53") model = Wav2Vec2ForCTC.from_pretrained("qqhann/w2v_hf_jsut_xlsr53") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Japanese test data of Common Voice. ```python !pip install torchaudio !pip install datasets transformers !pip install jiwer !pip install mecab-python3 !pip install unidic-lite !python -m unidic download !pip install jaconv import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import MeCab from jaconv import kata2hira from typing import List # Japanese preprocessing tagger = MeCab.Tagger("-Owakati") chars_to_ignore_regex = '[\。\、\「\」\,\?\.\!\-\;\:\"\“\%\‘\”\�]' def text2kata(text): node = tagger.parseToNode(text) word_class = [] while node: word = node.surface wclass = node.feature.split(',') if wclass[0] != u'BOS/EOS': if len(wclass) <= 6: word_class.append((word)) elif wclass[6] == None: word_class.append((word)) else: word_class.append((wclass[6])) node = node.next return ' '.join(word_class) def hiragana(text): return kata2hira(text2kata(text)) test_dataset = load_dataset("common_voice", "ja", split="test") wer = load_metric("wer") resampler = torchaudio.transforms.Resample(48_000, 16_000) # JSUT is already 16kHz # resampler = torchaudio.transforms.Resample(16_000, 16_000) # JSUT is already 16kHz processor = Wav2Vec2Processor.from_pretrained("qqhann/w2v_hf_jsut_xlsr53") model = Wav2Vec2ForCTC.from_pretrained("qqhann/w2v_hf_jsut_xlsr53") model.to("cuda") # Preprocessing the datasets. 
# We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = hiragana(batch["sentence"]).strip() batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) def cer_compute(predictions: List[str], references: List[str]): p = [" ".join(list(" " + pred.replace(" ", ""))).strip() for pred in predictions] r = [" ".join(list(" " + ref.replace(" ", ""))).strip() for ref in references] return wer.compute(predictions=p, references=r) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) print("CER: {:2f}".format(100 * cer_compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 51.72 % ## Training <!-- The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training. --> The privately collected JSUT Japanese dataset was used for training. <!-- The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here. -->
ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt
ydshieh
2021-04-01T14:09:29Z
109
31
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "zh", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: zh datasets: - common_voice metrics: - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Large 53 - Chinese (zh-CN), by Yih-Dar SHIEH results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice zh-CN type: common_voice args: zh-CN metrics: - name: Test CER type: cer value: 20.90 --- # Wav2Vec2-Large-XLSR-53-Chinese-zh-cn-gpt Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chinese (zh-CN) using the [Common Voice](https://huggingface.co/datasets/common_voice), included [Common Voice](https://huggingface.co/datasets/common_voice) Chinese (zh-TW) dataset (converting the label text to simplified Chinese). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "zh-CN", split="test") processor = Wav2Vec2Processor.from_pretrained("ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt") model = Wav2Vec2ForCTC.from_pretrained("ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated as follows on the zh-CN test data of Common Voice. 
The original CER calculation refers to https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese ```python #!pip install datasets==1.4.1 #!pip install transformers==4.4.0 #!pip install torchaudio #!pip install jiwer import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import jiwer def chunked_cer(targets, predictions, chunk_size=None): _predictions = [char for seq in predictions for char in list(seq)] _targets = [char for seq in targets for char in list(seq)] if chunk_size is None: return jiwer.wer(_targets, _predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): _predictions = [char for seq in predictions[start:end] for char in list(seq)] _targets = [char for seq in targets[start:end] for char in list(seq)] chunk_metrics = jiwer.compute_measures(_targets, _predictions) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) test_dataset = load_dataset("common_voice", "zh-CN", split="test") processor = Wav2Vec2Processor.from_pretrained("ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt") model = Wav2Vec2ForCTC.from_pretrained("ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:"\“\%\‘\”\�\.\⋯\!\-\:\–\。\》\,\)\,\?\;\~\~\…\︰\,\(\」\‧\《\﹔\、\—\/\,\「\﹖\·\×\̃\̌\ε\λ\μ\и\т\─\□\〈\〉\『\』\ア\オ\カ\チ\ド\ベ\ャ\ヤ\ン\・\丶\a\b\f\g\i\n\p\t' + "\']" resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") + " " speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("CER: {:2f}".format(100 * chunked_cer(predictions=result["pred_strings"], targets=result["sentence"], chunk_size=1000))) ``` **Test Result**: 20.902244 % ## Training The Common Voice zh-CN `train` and `validation` splits were used for training, as well as the Common Voice zh-TW `train`, `validation` and `test` splits. The script used for training can be found [to be uploaded later](...)
lighteternal/SSE-TUC-mt-el-en-lowercase
lighteternal
2021-03-31T17:26:44Z
10
0
transformers
[ "transformers", "pytorch", "fsmt", "text2text-generation", "translation", "en", "el", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - el tags: - translation widget: - text: "Η τύχη βοηθάει τους τολμηρούς." license: apache-2.0 metrics: - bleu --- ## Greek to English NMT (lower-case output) ## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) * source languages: el * target languages: en * licence: apache-2.0 * dataset: Opus, CCmatrix * model: transformer(fairseq) * pre-processing: tokenization + BPE segmentation * metrics: bleu, chrf * output: lowercase only, for mixed-cased model use this: https://huggingface.co/lighteternal/SSE-TUC-mt-el-en-cased ### Model description Trained using the Fairseq framework, transformer_iwslt_de_en architecture.\\ BPE segmentation (10k codes).\\ Lower-case model. ### How to use ``` from transformers import FSMTTokenizer, FSMTForConditionalGeneration mname = " <your_downloaded_model_folderpath_here> " tokenizer = FSMTTokenizer.from_pretrained(mname) model = FSMTForConditionalGeneration.from_pretrained(mname) text = "Η τύχη βοηθάει τους τολμηρούς." encoded = tokenizer.encode(text, return_tensors='pt') outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True) for i, output in enumerate(outputs): i += 1 print(f"{i}: {output.tolist()}") decoded = tokenizer.decode(output, skip_special_tokens=True) print(f"{i}: {decoded}") ``` ## Training data Consolidated corpus from Opus and CC-Matrix (~6.6GB in total) ## Eval results Results on Tatoeba testset (EL-EN): | BLEU | chrF | | ------ | ------ | | 79.3 | 0.795 | Results on XNLI parallel (EL-EN): | BLEU | chrF | | ------ | ------ | | 66.2 | 0.623 | ### BibTeX entry and citation info Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW ### Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
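The BLEU/chrF scores reported above can be reproduced with standard tooling once translations have been generated; the snippet below is only a hedged sketch using `sacrebleu`, with placeholder file names rather than the authors' evaluation pipeline.
```python
# Hedged sketch: corpus-level BLEU / chrF scoring with sacrebleu.
# "hypotheses.txt" and "references.txt" are placeholder files (one sentence per line).
import sacrebleu

with open("hypotheses.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU: {bleu.score:.1f}")
print(f"chrF: {chrf.score:.3f}")
```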
lighteternal/SSE-TUC-mt-el-en-cased
lighteternal
2021-03-31T17:26:16Z
43
2
transformers
[ "transformers", "pytorch", "fsmt", "text2text-generation", "translation", "en", "el", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - el tags: - translation widget: - text: "Ο όρος τεχνητή νοημοσύνη αναφέρεται στον κλάδο της πληροφορικής ο οποίος ασχολείται με τη σχεδίαση και την υλοποίηση υπολογιστικών συστημάτων που μιμούνται στοιχεία της ανθρώπινης συμπεριφοράς. " license: apache-2.0 metrics: - bleu --- ## Greek to English NMT ## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) * source languages: el * target languages: en * licence: apache-2.0 * dataset: Opus, CCmatrix * model: transformer(fairseq) * pre-processing: tokenization + BPE segmentation * metrics: bleu, chrf ### Model description Trained using the Fairseq framework, transformer_iwslt_de_en architecture.\\ BPE segmentation (20k codes).\\ Mixed-case model. ### How to use ``` from transformers import FSMTTokenizer, FSMTForConditionalGeneration mname = "lighteternal/SSE-TUC-mt-el-en-cased" tokenizer = FSMTTokenizer.from_pretrained(mname) model = FSMTForConditionalGeneration.from_pretrained(mname) text = "Ο όρος τεχνητή νοημοσύνη αναφέρεται στον κλάδο της πληροφορικής ο οποίος ασχολείται με τη σχεδίαση και την υλοποίηση υπολογιστικών συστημάτων που μιμούνται στοιχεία της ανθρώπινης συμπεριφοράς ." encoded = tokenizer.encode(text, return_tensors='pt') outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True) for i, output in enumerate(outputs): i += 1 print(f"{i}: {output.tolist()}") decoded = tokenizer.decode(output, skip_special_tokens=True) print(f"{i}: {decoded}") ``` ## Training data Consolidated corpus from Opus and CC-Matrix (~6.6GB in total) ## Eval results Results on Tatoeba testset (EL-EN): | BLEU | chrF | | ------ | ------ | | 79.3 | 0.795 | Results on XNLI parallel (EL-EN): | BLEU | chrF | | ------ | ------ | | 66.2 | 0.623 | ### BibTeX entry and citation info Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW ### Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
Wikidepia/indobert-lite-squad
Wikidepia
2021-03-31T13:26:55Z
132
6
transformers
[ "transformers", "pytorch", "albert", "question-answering", "id", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: id widget: - text: "Kapan Einstein melepas kewarganegaraan Jerman?" context: "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900." --- # IndoBERT-Lite base fine-tuned on Translated SQuAD v2 [IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/SQuAD) for **Q&A** downstream task. ## Model in action Fast usage with **pipelines**: ```python from transformers import BertTokenizerFast, pipeline tokenizer = BertTokenizerFast.from_pretrained( 'Wikidepia/indobert-lite-squad' ) qa_pipeline = pipeline( "question-answering", model="Wikidepia/indobert-lite-squad", tokenizer=tokenizer ) qa_pipeline({ 'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.", 'question': "Kapan Einstein melepas kewarganegaraan Jerman?" }) ``` # Output: ```json { "score":0.9799205660820007, "start":147, "end":151, "answer":"1896" } ``` README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
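To make the pipeline's behaviour more transparent, here is a hedged sketch of running the same extractive-QA inference manually and decoding the answer span from the start/end logits; it is illustrative only and not part of the original card.
```python
# Hedged sketch: manual extractive-QA inference without the pipeline helper.
import torch
from transformers import BertTokenizerFast, AutoModelForQuestionAnswering

model_id = "Wikidepia/indobert-lite-squad"
tokenizer = BertTokenizerFast.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Kapan Einstein melepas kewarganegaraan Jerman?"
context = ("Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss "
           "antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, "
           "dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische "
           "Technische Hochschule, ETH) di Zürich pada tahun 1900.")

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span between them.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)  # should correspond to the span reported above, e.g. "1896"
```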
qqpann/wav2vec2-large-xlsr-japanese-0325-1200
qqpann
2021-03-29T10:26:40Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ja", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ja datasets: - common_voice metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Japanese XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ja type: common_voice args: ja metrics: - name: Test WER type: wer value: { wer_result_on_test } #TODO (IMPORTANT): replace {wer_result_on_test} with the WER error rate you achieved on the common_voice test set. It should be in the format XX.XX (don't add the % sign here). **Please** remember to fill out this value after you evaluated your model, so that your model appears on the leaderboard. If you fill out this model card before evaluating your model, please remember to edit the model card afterward to fill in your value --- # Wav2Vec2-Large-XLSR-53-{language} #TODO: replace language with your {language}, _e.g._ French Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on {language} using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... dataset{s}. #TODO: replace {language} with your language, _e.g._ French and eventually add more datasets that were used and eventually remove common voice if model was not trained on common voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ja", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200") model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the {language} test data of Common Voice. # TODO: replace #TODO: replace language with your {language}, _e.g._ French ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ja", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200") model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # TODO: adapt this list to include all special characters you removed from the data resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: XX.XX % <!-- # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags. --> ## Training The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... <!-- # TODO: adapt to state all the datasets that were used for training. --> The script used for training can be found [here](...) <!-- # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here. -->
othrif/wav2vec_test
othrif
2021-03-29T02:48:07Z
22
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "ar", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ar datasets: - https://arabicspeech.org/ tags: - audio - automatic-speech-recognition - speech license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Egyptian by Zaid Alyafeai and Othmane Rifki results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: arabicspeech.org MGB-3 type: arabicspeech.org MGB-3 args: ar metrics: - name: Test WER type: wer value: 55.2 --- # Test Wav2Vec2 with Egyptian Arabic Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("arabic_speech_corpus", split="test") processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec_test") model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec_test") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` A hedged evaluation sketch, written in the same style as the other XLSR cards in this collection, follows below.
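The card reports a test WER of 55.2 on MGB-3 but does not include an evaluation snippet. The sketch below assumes a locally prepared MGB-3 manifest with `path` and `sentence` columns; the manifest name and columns are assumptions, not the author's script.
```python
# Hedged sketch: WER evaluation against a locally prepared MGB-3 test manifest.
# "mgb3_test.tsv" and its "path"/"sentence" columns are assumptions, not the author's script.
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec_test")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec_test").to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

test_dataset = load_dataset("csv", data_files="mgb3_test.tsv", delimiter="\t")["train"]

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"),
                       attention_mask=inputs.attention_mask.to("cuda")).logits
    batch["pred_strings"] = processor.batch_decode(torch.argmax(logits, dim=-1))
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"],
                                             references=result["sentence"])))
```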
vasilis/wav2vec2-large-xlsr-53-finnish
vasilis
2021-03-29T02:30:18Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "fi", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: fi datasets: - common_voice - CSS10 finnish: Single Speaker Speech Dataset metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: V XLSR Wav2Vec2 Large 53 - finnish results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice fi type: common_voice args: fi metrics: - name: Test WER type: wer value: 38.335242 - name: Test CER type: cer value: 6.552408 --- # Wav2Vec2-Large-XLSR-53-finnish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10 Finnish: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) datasets. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "fi", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish") model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Finnish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish") model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish") model.to("cuda") chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']" # TODO: adapt this list to include all special characters you removed from the data replacements = {"…": "", "–": ''} resampler = { 48_000: torchaudio.transforms.Resample(48_000, 16_000), 44100: torchaudio.transforms.Resample(44100, 16_000), 32000: torchaudio.transforms.Resample(32000, 16_000) } # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() for key, value in replacements.items(): batch["sentence"] = batch["sentence"].replace(key, value) speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]]))) ``` **Test Result**: 38.335242 % ## Training The Common Voice train dataset was used for training. Also all of `CSS10 Finnish` was used using the normalized transcripts. After 20000 steps the models was finetuned using the common voice train and validation sets for 2000 steps more.
wietsedv/wav2vec2-large-xlsr-53-frisian
wietsedv
2021-03-28T20:09:35Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: fy-NL datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Frisian XLSR Wav2Vec2 Large 53 by Wietse de Vries results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice fy-NL type: common_voice args: fy-NL metrics: - name: Test WER type: wer value: 16.25 --- # Wav2Vec2-Large-XLSR-53-Frisian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian") model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Frisian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "fy-NL", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian") model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\'\“\%\‘\”]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 16.25 % ## Training The Common Voice `train` and `validation` datasets were used for training.
pcuenq/wav2vec2-large-xlsr-53-es
pcuenq
2021-03-28T19:06:18Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "es", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: es datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Large 53 Spanish by pcuenq results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice es type: common_voice args: es metrics: - name: Test WER type: wer value: 10.50 --- # Wav2Vec2-Large-XLSR-53-Spanish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset{s}. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "es", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es") model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Spanish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "es", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es") model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es") model.to("cuda") ## Text pre-processing chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]' chars_to_ignore_pattern = re.compile(chars_to_ignore_regex) def remove_special_characters(batch): batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " " return batch def replace_diacritics(batch): sentence = batch["sentence"] sentence = re.sub('ì', 'í', sentence) sentence = re.sub('ù', 'ú', sentence) sentence = re.sub('ò', 'ó', sentence) sentence = re.sub('à', 'á', sentence) batch["sentence"] = sentence return batch def replace_additional(batch): sentence = batch["sentence"] sentence = re.sub('ã', 'a', sentence) # Portuguese, as in São Paulo sentence = re.sub('ō', 'o', sentence) # Japanese sentence = re.sub('ê', 'e', sentence) # Português batch["sentence"] = sentence return batch ## Audio pre-processing # I tried to perform the resampling using a `torchaudio` `Resampler` transform, # but found that the process deadlocked when using multiple processes. # Perhaps my torchaudio is using the wrong sox library under the hood, I'm not sure. # Fortunately, `librosa` seems to work fine, so that's what I'll use for now. 
import librosa def speech_file_to_array_fn(batch): speech_array, sample_rate = torchaudio.load(batch["path"]) batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000) return batch # One-pass mapping function # Text transformation and audio resampling def cv_prepare(batch): batch = remove_special_characters(batch) batch = replace_diacritics(batch) batch = replace_additional(batch) batch = speech_file_to_array_fn(batch) return batch # Number of CPUs or None num_proc = 16 test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc) def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) # WER Metric computation # `wer.compute` crashes in my computer with more than ~10000 samples. # Until I confirm in a different one, I created a "chunked" version of the computation. # It gives the same results as `wer.compute` for smaller datasets. import jiwer def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000))) #print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 10.50 % ## Text processing The Common Voice `es` dataset has a lot of characters that don't belong to the Spanish language, even after discarding separators and punctuators. I made some translations and discarded most of the extraneous characters. I decided to keep all the Spanish language diacritics. This is a difficult decision. Some times the diacritics are added just because of ortography rules, but they don't alter the meaning of the word. In other cases, however, the diacritics carry meaning, as they disambiguate among different senses. A better WER score would surely have been achieved using just the non-accented characters, and the resulting text would be understood by Spanish speakers. Nevertheless, I think keeping them is "more correct". All the rules I applied are shown in the evaluation script. ## Training The Common Voice `train` and `validation` datasets were used for training. For dataset handling reasons, I initially split `train`+`validation` in 10% splits so I could see progress earlier and react if needed. * I trained for 30 epochs on the first split only, using similar values as the ones proposed by Patrick in his demo notebook. I used a batch_size of 24 with 2 gradient accumulation steps. This gave a WER of about 16.3%on the full test set. * I then trained the resulting model on the 9 remaining splits, for 3 epochs each, but with a faster warmup of 75 steps. * Next, I trained 3 epochs on each of the 10 splits using a smaller learning rate of `1e-4`. A warmup of 75 steps was used in this case too. 
The final model had a WER of about 11.7%.

* By this time we had already figured out the reason for the initial delay in training time, and I decided to use the full dataset for training. However, in my tests I had seen that varying the learning rate seemed to work well, so I wanted to replicate that. I selected a cosine schedule with hard restarts, a reference learning rate of `3e-5` and 10 epochs. I configured the cosine schedule to have 10 cycles too, and used no warmup (a minimal sketch of this scheduler setup is included at the end of this card). This produced a WER of ~10.5%.

## Other things I tried

* Starting from the same fine-tuned model, I compared a constant lr of 1e-4 against a linear schedule with warmup. The linear schedule worked better (11.85 vs 12.72 WER%).
* I tried to use a Spanish model to improve a Basque one. I transformed the text to make the orthography more similar to the target language, but the Basque model did not improve.
* Label smoothing did not work.

## Issues and other technical challenges

I had previously used the `transformers` library as an end user, just to try Bert on some tasks, but this is the first time I have needed to look into the code.

* The `Datasets` abstraction is great because, being based on memory-mapped files, it allows arbitrarily-sized datasets to be processed. However, it is important to understand its limitations and trade-offs. I found caching convenient, but disk usage explodes fast. I keep the datasets for my current projects on a 1 TB, fast SSD disk, and a couple of times I ran out of space. I had to understand how cache files are stored and learn when it's best to disable caching and manually save when you need to. I found that data exploration is better suited to smaller or sampled datasets, but actual processing is most efficient when you have identified the transformations you need and apply them in a single `map` operation.
* There was a noticeable delay before training started. Fortunately, we found the reason why, discussed it in Slack and the forums, and created a workaround.
* The WER metric crashed on large datasets. I evaluated on a small sample (also, it's faster) and wrote an accumulative version of WER that runs in fixed memory. I'd like to verify whether this change makes sense to be used inside the training loop.
* `torchaudio` deadlocks when using multiple processes. `librosa` works fine. To be investigated.
* When using `num_proc` inside a notebook, I could not see progress bars. This is surely some permissions issue on my computer. I still need to figure it out.
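As a reference for the cosine schedule with hard restarts mentioned in the Training section, here is a minimal sketch of how such a scheduler can be built with `transformers`. The checkpoint name, `steps_per_epoch` and epoch count below are placeholders mirroring the description above, not the exact values or script used in this run.

```python
import torch
from transformers import Wav2Vec2ForCTC, get_cosine_with_hard_restarts_schedule_with_warmup

# Placeholder setup: reference lr of 3e-5, 10 epochs, 10 cosine cycles, no warmup.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53")
steps_per_epoch = 1000  # depends on dataset size, batch size and gradient accumulation
num_epochs = 10

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,
    num_training_steps=num_epochs * steps_per_epoch,
    num_cycles=10,
)
```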
vasudevgupta/mbart-summarizer-interiit
vasudevgupta
2021-03-28T17:49:15Z
10
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
This model was trained as part of the **InterIIT'21 competition**, on the dataset provided by Bridgei2i. It performs multilingual (Hindi, English, Hinglish) summarization (many -> one) and generates summaries in English regardless of the input language.

| Rouge-L | Sacrebleu | Headline Similarity (using sentence-transformers) |
|-----------------------|-----------|---------------------------------------------------|
| p=0.46 r=0.49 f1=0.52 | 23.46 | 0.75 |

mBART is initialized from **facebook/mbart-large-cc25** and is trained as per the strategy mentioned in our [GitHub](https://github.com/vasudevgupta7/Bridgei2i-Winning-Solutions). A minimal inference sketch is shown below.
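The snippet assumes the standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` loading path, and the article text and generation settings are illustrative; the exact preprocessing used in the competition pipeline is documented in the GitHub repository linked above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "vasudevgupta/mbart-summarizer-interiit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Any Hindi / English / Hinglish article text can be used here (placeholder example).
article = "Mobile payments in India grew rapidly this quarter, according to a new industry report."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```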
dispenst/hgfytgfg
dispenst
2021-03-28T15:32:14Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
<a href="https://www.geogebra.org/m/w8uzjttg">.</a> <a href="https://www.geogebra.org/m/gvn7m78g">.</a> <a href="https://www.geogebra.org/m/arxecanq">.</a> <a href="https://www.geogebra.org/m/xb69bvww">.</a> <a href="https://www.geogebra.org/m/apvepfnd">.</a> <a href="https://www.geogebra.org/m/evmj8ckk">.</a> <a href="https://www.geogebra.org/m/qxcxwmhp">.</a> <a href="https://www.geogebra.org/m/p3cxqh6c">.</a> <a href="https://www.geogebra.org/m/ggrahbgd">.</a> <a href="https://www.geogebra.org/m/pnhymrbc">.</a> <a href="https://www.geogebra.org/m/zjukbtk9">.</a> <a href="https://www.geogebra.org/m/bbezun8r">.</a> <a href="https://www.geogebra.org/m/sgwamtru">.</a> <a href="https://www.geogebra.org/m/fpunkxxp">.</a> <a href="https://www.geogebra.org/m/acxebrr7">.</a> <a href="https://jobs.acm.org/jobs/watch-godzilla-vs-kong-2021-full-1818658-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-godzilla-vs-kong-online-2021-full-f-r-e-e-1818655-cd">.</a> <a href="https://jobs.acm.org/jobs/watch-demon-slayer-kimetsu-no-yaiba-mugen-train-2020-f-u-l-l-f-r-e-e-1818661-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-zack-snyder-s-justice-league-online-2021-full-f-r-e-e-1818662-cd">.</a> <a href="https://jobs.acm.org/jobs/hd-watch-godzilla-vs-kong-2021-version-full-hbomax-1818659-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-girl-in-the-basement-online-2021-full-f-r-e-e-1818663-cd">.</a> <a href="https://jobs.acm.org/jobs/watch-godzilla-vs-kong-2021-f-u-l-l-h-d-1818660-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-billie-eilish-the-world-s-a-little-blurry-2021-f-u-l-l-f-r-e-e-1818666-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-monster-hunter-2020-f-u-l-l-f-r-e-e-1818667-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-raya-and-the-last-dragon-2021-f-u-l-l-f-r-e-e-1818669-cd">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-365-days-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-billie-eilish-the-worlds-a-little-blurry-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-cherry-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-coming-2-america-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-demon-slayer-kimetsu-no-yaiba-mugen-train-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-godzilla-vs-kong-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-judas-and-the-black-messiah-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-monster-hunter-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-mortal-kombat-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-raya-and-the-last-dragon-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-tenet-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-the-world-to-come-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-tom-and-jerry-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-willys-wonderland-2021-version-full-online-free/">.</a> <a 
href="https://pactforanimals.org/advert/123movies-watch-wonder-woman-1984-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-wrong-turn-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-zack-snyders-justice-league-2021-hd-online-full-free-stream-2/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-a-writers-odyssey-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-the-marksman-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-after-we-collided-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-watch-full/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online-full-version-123movies/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-2/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-3/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-4/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full/">.</a> <a href="https://pactforanimals.org/advert/full-watch-123movies-godzilla-vs-kong-2021/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-free-hd/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free-online/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-5/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online-full-version-hd/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-full-2021-free/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-2/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-6/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-7/">.</a> <a href="https://pactforanimals.org/advert/free-download-godzilla-vs-kong-2021-watch-full/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-online/">.</a> <a href="https://pactforanimals.org/advert/godzilla-vs-kong-2021-google-drive-mp4/">.</a> <a href="https://pactforanimals.org/advert/google-docs-godzilla-vs-kong-2021-google-drive-full-hd-mp4/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-8/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-9/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-3/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-online/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-4/">.</a> <a href="https://pactforanimals.org/advert/free-godzilla-vs-kong-2021-watch-full/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-10/">.</a> <a href="https://pactforanimals.org/advert/online-watch-godzilla-vs-kong-2021-full/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-godzilla-vs-kong-2021-full-online/">.</a> <a 
href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-11/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free-hd/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-free-online/">.</a> <a href="https://pactforanimals.org/advert/full-godzilla-vs-kong-2021-watch-online/">.</a> <a href="https://sites.google.com/view/mortalkombat1/">.</a> <a href="https://sites.google.com/view/free-watch-mortal-kombat-2021-/">.</a> <a href="https://sites.google.com/view/watch-mortal-kombat-2021-f-u-l/">.</a> <a href="https://sites.google.com/view/mortalkombat2/">.</a> <a href="https://sites.google.com/view/mortalkombat3/">.</a> <a href="https://sites.google.com/view/mortalkombat5/">.</a> <a href="https://sites.google.com/view/fullwatchmortalkombat2021-movi/">.</a> <a href="https://sites.google.com/view/mortalkombat7/">.</a> <a href="https://sites.google.com/view/mortalkombat8/">.</a> <a href="https://sites.google.com/view/mortalkombat9/">.</a> <a href="https://sites.google.com/view/mortalkombat10/">.</a> <a href="https://sites.google.com/view/watch-mort-tal-kombat/">.</a> <a href="https://sites.google.com/view/free-watch-mort-tal-kombat/">.</a> <a href="https://sites.google.com/view/watch-mort-tal-kombatfree-/">.</a> <a href="https://sites.google.com/view/full-watch-mortal-kombat/">.</a> <a href="https://sites.google.com/view/watch-mortal-kombat-2021-/">.</a> <a href="https://sites.google.com/view/watch-free-mortal-kombat-2021/">.</a> <a href="https://sites.google.com/view/full-watch-mortal-kombat-/">.</a> <a href="https://sites.google.com/view/watch-mortal-kombat-g-drive/">.</a> <a href="https://sites.google.com/view/g-docs-mortalkombat-g-drive/">.</a> <a href="https://sites.google.com/view/mortal-kombat-2021-full-free/">.</a> <a href="https://sites.google.com/view/mortal-kombat-2021-full-free-o/">.</a> <a href="https://sites.google.com/view/mortal-kombat-2021-full-free-o/">.</a> <a href="https://paiza.io/projects/56xFAEq61pSSn8VnKnHO6Q">.</a> <a href="https://www.posts123.com/post/1450667/mariners-announce-spring-training">.</a> <a href="https://sites.google.com/view/sfdjgkdfghdkfgjherghkkdfjg/home">.</a> <a href="https://dskfjshdkjfewhgf.blogspot.com/2021/03/sdkjfhwekjhfjdherjgfdjg.html">.</a> <a href="https://grahmaulidia.wordpress.com/2021/03/28/mariners-announce-spring-training-roster-moves/">.</a> <a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner-f83a9ea92f89">.</a> <a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner1-b2847091ff9f">.</a> <a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner2-df35041eec3a">.</a> <a href="https://4z5v6wq7a.medium.com">.</a> <a href="https://onlinegdb.com/BJaH8WR4O">.</a>
shahukareem/wav2vec2-large-xlsr-53-dhivehi
shahukareem
2021-03-28T08:47:31Z
78
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "dv", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: dv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Shahu Kareem XLSR Wav2Vec2 Large 53 Dhivehi
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice dv
      type: common_voice
      args: dv
    metrics:
      - name: Test WER
        type: wer
        value: 32.85
---

# Wav2Vec2-Large-XLSR-53-Dhivehi

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dhivehi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "dv", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")

# Common Voice audio is 48 kHz, so resample to the 16 kHz the model expects
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Dhivehi test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "dv", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\،\.\؟\!\'\"\–\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test data and collect predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 32.85%

## Training

The Common Voice `train` and `validation` datasets were used for training.

## Example predictions

```
--
reference: ކަރަންޓް ވައިރުކޮށް ބޮކި ހަރުކުރުން
predicted: ކަރަންޓް ވައިރުކޮށް ބޮކި ހަރުކުރުން
--
reference: ދެން އެކުދިންނާ ދިމާއަށް އަތް ދިށްކޮށްލެވެ
predicted: ދެން އެކުދިންނާ ދިމާއަށް އަތް ދިއްކޮށްލެވެ ް
--
reference: ރަކި ހިނިތުންވުމަކާއެކު އޭނާ އަމިއްލައަށް ތައާރަފްވި
predicted: ރަކި ހިނިތުންވުމަކާއެކު އޭނާ އަމިއްލައަށް ތައަރަފްވި
--
reference: ކޮޓަރީގެ ކުޑަދޮރުން ބޭރު ބަލަހައްޓައިގެން އިން ރޫނާގެ މޫނުމަތިން ފާޅުވަމުން ދިޔައީ ކަންބޮޑުވުމުގެ އަސަރުތައް
predicted: ކޮޓަރީގެ ކުޑަދޮރުން ބޭރު ބަލަހައްޓައިގެން އިން ރނާގެ މޫނުމަތިން ފާޅުވަމުން ދިޔައީ ކަންބޮޑުވުމުގެ އަސަރުތައް
--
```
Marc/pegasus_xsum_gigaword
Marc
2021-03-26T22:49:11Z
5
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "dataset:XSUM", "dataset:Gigaword", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
---
language: en
datasets:
- XSUM
- Gigaword
metrics:
- rouge
---

# Pegasus XSUM Gigaword

## Model description

Pegasus XSUM model fine-tuned on the Gigaword summarization task. It performs significantly better than pegasus-gigaword, but still does not match the performance reported in the model paper.

## Intended uses & limitations

Produces short summaries with the coherence of the XSUM model.

#### How to use

```python
# See the usage sketch at the end of this card.
```

#### Limitations and bias

Still has all the biases of any of the abstractive models, but seems a little less prone to hallucination.

## Training data

Initialized with pegasus-XSUM.

## Training procedure

Trained for 11500 iterations on the Gigaword corpus using the out-of-the-box Hugging Face seq2seq example script with the default parameters.

## Eval results

Evaluated on the Gigaword test set (Hugging Face example script, default parameters):

run_summarization.py --model_name_or_path pegasus-xsum/checkpoint-11500/ --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate

| Metric | Score |
| ----------- | ----------- |
| eval_rouge1 | 34.1958 |
| eval_rouge2 | 15.4033 |
| eval_rougeL | 31.4488 |

For comparison, google/pegasus-gigaword evaluated with the same command:

run_summarization.py --model_name_or_path google/pegasus-gigaword --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate

| Metric | Score |
| ----------- | ----------- |
| eval_rouge1 | 20.8111 |
| eval_rouge2 | 8.766 |
| eval_rougeL | 18.4431 |

### BibTeX entry and citation info

```bibtex
@inproceedings{...,
year={2020}
}
```
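Usage sketch (referenced in the "How to use" section above): a minimal, hedged example of generating a headline-style summary. The `summarize: ` prefix mirrors the `--source_prefix` flag used in the evaluation commands, and the input text and generation settings are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Marc/pegasus_xsum_gigaword"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Gigaword-style input: a sentence or short paragraph to compress into a headline.
text = "summarize: The central bank raised interest rates by a quarter point on Tuesday, citing persistent inflation."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```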
formu/DR-Site
formu
2021-03-26T15:34:21Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://www.geogebra.org/m/w8uzjttg https://www.geogebra.org/m/gvn7m78g https://www.geogebra.org/m/arxecanq https://www.geogebra.org/m/xb69bvww https://www.geogebra.org/m/apvepfnd https://www.geogebra.org/m/evmj8ckk https://www.geogebra.org/m/qxcxwmhp https://www.geogebra.org/m/p3cxqh6c https://www.geogebra.org/m/ggrahbgd https://www.geogebra.org/m/pnhymrbc https://www.geogebra.org/m/zjukbtk9 https://www.geogebra.org/m/bbezun8r https://www.geogebra.org/m/sgwamtru https://www.geogebra.org/m/fpunkxxp https://www.geogebra.org/m/acxebrr7
trueto/medalbert-base-wwm-chinese
trueto
2021-03-26T05:33:51Z
6
0
transformers
[ "transformers", "pytorch", "albert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# [medbert](https://github.com/trueto/medbert)

This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".

## Evaluation benchmarks

Four benchmarks were built: a Chinese electronic medical record NER dataset (CEMRNER), a Chinese medical text NER dataset (CMTNER), a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).

| **Dataset** | **Train** | **Validation** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud (医渡云) |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare (平安医疗) |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |

## Released models

MedBERT and MedAlbert were pretrained on a 650-million-character Chinese clinical natural language text corpus, starting from the BERT and ALBERT models respectively.

## Performance

Performance of each model under the same experimental environment, with the same training parameters and scripts:

| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
| MedBERT-wwm | **82.60%** | 67.11% | 88.02% | 81.72% |
| MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
| - | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
| MedAlbert-wwm | **81.28%** | **64.12%** | **87.71%** | **80.46%** |

## Citation

```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
```
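For completeness, a minimal loading sketch that is not part of the original card. The `AutoTokenizer`/`AlbertModel` combination and the example sentence are assumptions about how the checkpoint is packaged; some Chinese ALBERT repositories require `BertTokenizer` instead.

```python
import torch
from transformers import AutoTokenizer, AlbertModel

model_name = "trueto/medalbert-base-wwm-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_name)  # may need BertTokenizer for some Chinese ALBERT checkpoints
model = AlbertModel.from_pretrained(model_name)

text = "患者自述头痛三天,伴有轻度发热。"  # placeholder clinical sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual embeddings for downstream fine-tuning
```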
navteca/quora-roberta-base
navteca
2021-03-25T16:10:08Z
4,293
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "en", "dataset:quora", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---

# Cross-Encoder for Quora Duplicate Questions Detection

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It uses [roberta-base](https://huggingface.co/roberta-base).

## Training Data

This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are to be duplicates.

Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.

## Usage and Performance

The trained model can be used like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('navteca/quora-roberta-base')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
tuner007/pegasus_paraphrase
tuner007
2021-03-22T21:11:33Z
74,495
182
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "paraphrasing", "seq2seq", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 tags: - pegasus - paraphrasing - seq2seq --- ## Model description [PEGASUS](https://github.com/google-research/pegasus) fine-tuned for paraphrasing ## Model in Action 🚀 ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer model_name = 'tuner007/pegasus_paraphrase' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) def get_response(input_text,num_return_sequences,num_beams): batch = tokenizer([input_text],truncation=True,padding='longest',max_length=60, return_tensors="pt").to(torch_device) translated = model.generate(**batch,max_length=60,num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) return tgt_text ``` #### Example: ``` num_beams = 10 num_return_sequences = 10 context = "The ultimate test of your knowledge is your capacity to convey it to another." get_response(context,num_return_sequences,num_beams) # output: ['The test of your knowledge is your ability to convey it.', 'The ability to convey your knowledge is the ultimate test of your knowledge.', 'The ability to convey your knowledge is the most important test of your knowledge.', 'Your capacity to convey your knowledge is the ultimate test of it.', 'The test of your knowledge is your ability to communicate it.', 'Your capacity to convey your knowledge is the ultimate test of your knowledge.', 'Your capacity to convey your knowledge to another is the ultimate test of your knowledge.', 'Your capacity to convey your knowledge is the most important test of your knowledge.', 'The test of your knowledge is how well you can convey it.', 'Your capacity to convey your knowledge is the ultimate test.'] ``` > Created by [Arpit Rajauria](https://twitter.com/arpit_rajauria) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/arpit_rajauria)
tugstugi/wav2vec2-large-xlsr-53-mongolian
tugstugi
2021-03-22T07:19:25Z
26
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "mn", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Tugstugi
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice mn
      type: common_voice
      args: mn
    metrics:
      - name: Test WER
        type: wer
        value: 42.80
---

# Wav2Vec2-Large-XLSR-53-Mongolian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Mongolian test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test data and collect predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 42.80 %

## Training

The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found ???
HooshvareLab/distilbert-fa-zwnj-base-ner
HooshvareLab
2021-03-21T14:32:29Z
130
4
transformers
[ "transformers", "pytorch", "tf", "distilbert", "token-classification", "fa", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: fa --- # DistilbertNER This model fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covered ten types of entities: - Date (DAT) - Event (EVE) - Facility (FAC) - Location (LOC) - Money (MON) - Organization (ORG) - Percent (PCT) - Person (PER) - Product (PRO) - Time (TIM) ## Dataset Information | | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM | |:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:| | Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 | | Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 | | Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 | ## Evaluation The following tables summarize the scores obtained by model overall and per each class. **Overall** | Model | accuracy | precision | recall | f1 | |:----------:|:--------:|:---------:|:--------:|:--------:| | Distilbert | 0.994534 | 0.946326 | 0.95504 | 0.950663 | **Per entities** | | number | precision | recall | f1 | |:---: |:------: |:---------: |:--------: |:--------: | | DAT | 407 | 0.812048 | 0.828010 | 0.819951 | | EVE | 256 | 0.955056 | 0.996094 | 0.975143 | | FAC | 248 | 0.972549 | 1.000000 | 0.986083 | | LOC | 2884 | 0.968403 | 0.967060 | 0.967731 | | MON | 98 | 0.925532 | 0.887755 | 0.906250 | | ORG | 3216 | 0.932095 | 0.951803 | 0.941846 | | PCT | 94 | 0.936842 | 0.946809 | 0.941799 | | PER | 2645 | 0.959818 | 0.957278 | 0.958546 | | PRO | 318 | 0.963526 | 0.996855 | 0.979907 | | TIM | 43 | 0.760870 | 0.813953 | 0.786517 | ## How To Use You use this model with Transformers pipeline for NER. ### Installing requirements ```bash pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "HooshvareLab/distilbert-fa-zwnj-base-ner" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo.
HooshvareLab/albert-fa-zwnj-base-v2-ner
HooshvareLab
2021-03-21T14:25:09Z
64
0
transformers
[ "transformers", "pytorch", "tf", "albert", "token-classification", "fa", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: fa --- # AlbertNER This model fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covered ten types of entities: - Date (DAT) - Event (EVE) - Facility (FAC) - Location (LOC) - Money (MON) - Organization (ORG) - Percent (PCT) - Person (PER) - Product (PRO) - Time (TIM) ## Dataset Information | | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM | |:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:| | Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 | | Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 | | Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 | ## Evaluation The following tables summarize the scores obtained by model overall and per each class. **Overall** | Model | accuracy | precision | recall | f1 | |:----------:|:--------:|:---------:|:--------:|:--------:| | Albert | 0.993405 | 0.938907 | 0.943966 | 0.941429 | **Per entities** | | number | precision | recall | f1 | |:---: |:------: |:---------: |:--------: |:--------: | | DAT | 407 | 0.820639 | 0.820639 | 0.820639 | | EVE | 256 | 0.936803 | 0.984375 | 0.960000 | | FAC | 248 | 0.925373 | 1.000000 | 0.961240 | | LOC | 2884 | 0.960818 | 0.960818 | 0.960818 | | MON | 98 | 0.913978 | 0.867347 | 0.890052 | | ORG | 3216 | 0.920892 | 0.937500 | 0.929122 | | PCT | 94 | 0.946809 | 0.946809 | 0.946809 | | PER | 2644 | 0.960000 | 0.944024 | 0.951945 | | PRO | 318 | 0.942943 | 0.987421 | 0.964670 | | TIM | 43 | 0.780488 | 0.744186 | 0.761905 | ## How To Use You use this model with Transformers pipeline for NER. ### Installing requirements ```bash pip install sentencepiece pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "HooshvareLab/albert-fa-zwnj-base-v2-ner" # Albert tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo.
sarnikowski/convbert-medium-small-da-cased
sarnikowski
2021-03-18T22:27:12Z
46
0
transformers
[ "transformers", "pytorch", "tf", "convbert", "da", "arxiv:2008.02496", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: da license: cc-by-4.0 --- # Danish ConvBERT medium small (cased) [ConvBERT](https://arxiv.org/abs/2008.02496) model pretrained on a custom Danish corpus (~17.5gb). For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers ## Usage ```python from transformers import ConvBertTokenizer, ConvBertModel tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-medium-small-da-cased") model = ConvBertModel.from_pretrained("sarnikowski/convbert-medium-small-da-cased") ``` ## Questions? If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to p.sarnikowski@gmail.com
acul3/xlsr_indonesia
acul3
2021-03-18T09:53:35Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "xlsr-fine-tuning-week", "id", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: id datasets: - common_voice tags: - speech - audio - automatic-speech-recognition - xlsr-fine-tuning-week license: apache-2.0 --- ## Evaluation on Common Voice ID Test ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys model_name = "munggok/xlsr_indonesia" device = "cuda" chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605 model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ds = load_dataset("common_voice", "id", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys())) wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` **Result**: 25.7 %
adzcodez/TokenClassificationTest
adzcodez
2021-03-16T14:18:09Z
4
1
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
distilbert-base-uncased finetuned on the conll2003 dataset for NER.
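Since the card is minimal, here is a hedged usage sketch. It assumes the checkpoint works with the standard token-classification (`"ner"`) pipeline in a recent transformers version; the model id is taken from this repository, and the example sentence is illustrative.

```python
from transformers import pipeline

ner = pipeline("ner", model="adzcodez/TokenClassificationTest", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```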
airesearch/xlm-roberta-base-finetuned
airesearch
2021-03-16T09:23:27Z
12
0
transformers
[ "transformers", "xlm-roberta", "fill-mask", "arxiv:1911.02116", "arxiv:2101.09635", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Finetuned `xlm-roberta-base` model on Thai sequence and token classification datasets

<br>

Finetuned XLM-RoBERTa BASE model on Thai sequence and token classification datasets. The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).

<br>

## Model description

<br>

We use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](https://arxiv.org/abs/1911.02116). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/xlm-roberta-base).

<br>

## Intended uses & limitations

<br>

You can use the finetuned models for multiclass/multilabel text classification and token classification tasks.

<br>

**Multiclass text classification**

- `wisesight_sentiment`: 4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`: users' review rating classification task (scale ranging from 1 to 5).
- `generated_reviews_enth` (`review_star` as label): generated users' review rating classification task (scale ranging from 1 to 5).

**Multilabel text classification**

- `prachathai67k`: Thai topic classification with 12 labels based on a news article corpus from prachathai.com. The details are described on this [page](https://huggingface.co/datasets/prachathai67k).

**Token classification**

- `thainer`: named-entity recognition tagging with 13 named-entities, as described on this [page](https://huggingface.co/datasets/thainer).
- `lst20`: NER and POS tagging, i.e. named-entity recognition tagging with 10 named-entities and part-of-speech tagging with 16 tags, as described on this [page](https://huggingface.co/datasets/lst20).

<br>

## How to use

<br>

The example notebook demonstrating how to use the finetuned model for inference can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)

<br>

**BibTeX entry and citation info**

```
@misc{lowphansirikul2021wangchanberta,
    title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
    author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
    year={2021},
    eprint={2101.09635},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
cemigo/cemigo-test-model
cemigo
2021-03-15T18:09:36Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
tags: - array - of - tags license: "any valid license identifier"
facebook/rag-sequence-nq
facebook
2021-03-12T11:04:28Z
24,970
41
transformers
[ "transformers", "pytorch", "tf", "rag", "en", "dataset:wiki_dpr", "arxiv:2005.11401", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
license: apache-2.0
datasets:
- wiki_dpr
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
---

## RAG

This is the RAG-Sequence Model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf) by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.

The model is an *uncased* model, which means that capital letters are simply converted to lower-case letters.

The model consists of a *question_encoder*, *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* `train` dataset, which is linked above.
The question_encoder and generator are based on `facebook/dpr-question_encoder-single-nq-base` and `facebook/bart-large`, which were jointly finetuned on the *wiki_dpr* QA dataset in an end-to-end fashion.

## Usage:

**Note**: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *legacy* index requires over 75 GB of RAM.
The model can generate answers to any factoid question as follows:

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch("how many countries are in europe", return_tensors="pt")

generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])

# should give 54 => google says either 44 or 51
```
gagan3012/keytotext-small
gagan3012
2021-03-11T23:33:47Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# keytotext Idea is to build a model which will take keywords as inputs and generate sentences as outputs. ### Model: Two Models have been built: - Using T5-base size = 850 MB can be found here: https://huggingface.co/gagan3012/keytotext - Using T5-small size = 230 MB can be found here: https://huggingface.co/gagan3012/keytotext-small #### Usage: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small") model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small") ``` ### Demo: [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/app.py) https://share.streamlit.io/gagan3012/keytotext/app.py ![image](https://user-images.githubusercontent.com/49101362/110660053-3b20fe80-81d4-11eb-9275-ba402134e8d9.png) ### Example: ['India', 'Wedding'] -> We are celebrating today in New Delhi with three wedding anniversary parties.
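To go from the loading snippet above to an actual sentence, here is a minimal generation sketch. The keyword separator and the generation settings are assumptions for illustration, not documented in this card; see the project repository for the exact input format.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small")
model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small")

# NOTE: the separator used to join keywords is an assumption, not documented in this card.
keywords = ["India", "Wedding"]
input_text = " | ".join(keywords)

inputs = tokenizer(input_text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```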
gagan3012/keytotext
gagan3012
2021-03-11T20:23:32Z
4
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# keytotext Idea is to build a model which will take keywords as inputs and generate sentences as outputs. ### Model: Two Models have been built: - Using T5-base size = 850 MB can be found here: https://huggingface.co/gagan3012/keytotext - Using T5-small size = 230 MB can be found here: https://huggingface.co/gagan3012/keytotext-small #### Usage: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small") model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small") ``` ### Demo: [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/app.py) https://share.streamlit.io/gagan3012/keytotext/app.py ![image](https://user-images.githubusercontent.com/49101362/110660053-3b20fe80-81d4-11eb-9275-ba402134e8d9.png) ### Example: ['India', 'Wedding'] -> We are celebrating today in New Delhi with three wedding anniversary parties.
navteca/electra-base-squad2
navteca
2021-03-10T15:30:09Z
5
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "en", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- datasets: - squad_v2 language: en license: mit pipeline_tag: question-answering tags: - electra - question-answering --- # Electra base model for QA (SQuAD 2.0) This model uses [electra-base](https://huggingface.co/google/electra-base-discriminator). ## Training Data The models have been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. It can be used for question answering task. ## Usage and Performance The trained model can be used like this: ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline # Load model & tokenizer electra_model = AutoModelForQuestionAnswering.from_pretrained('navteca/electra-base-squad2') electra_tokenizer = AutoTokenizer.from_pretrained('navteca/electra-base-squad2') # Get predictions nlp = pipeline('question-answering', model=electra_model, tokenizer=electra_tokenizer) result = nlp({ 'question': 'How many people live in Berlin?', 'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.' }) print(result) #{ # "answer": "3,520,031" # "end": 36, # "score": 0.99983448, # "start": 27, #} ```
navteca/quora-roberta-large
navteca
2021-03-10T14:57:04Z
6
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "en", "dataset:quora", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---

# Cross-Encoder for Quora Duplicate Questions Detection

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It uses [roberta-large](https://huggingface.co/roberta-large).

## Training Data

This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are to be duplicates.

Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.

## Usage and Performance

The trained model can be used like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('navteca/quora-roberta-large')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
yjernite/bart_eli5
yjernite
2021-03-09T22:31:11Z
359
11
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "en", "dataset:eli5", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - eli5 --- ## BART ELI5 Read the article at https://yjernite.github.io/lfqa.html and try the demo at https://huggingface.co/qa/
hd10/semeval2020_task11_tc
hd10
2021-03-09T18:01:57Z
4
0
transformers
[ "transformers", "pytorch", "deberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Technique Classification for https://propaganda.qcri.org/ptc/index.html
wptoux/albert-chinese-large-qa
wptoux
2021-03-09T07:48:40Z
65
12
transformers
[ "transformers", "pytorch", "albert", "question-answering", "Question Answering", "zh", "dataset:webqa", "dataset:dureader", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language:
- zh
tags:
- Question Answering
license: apache-2.0
datasets:
- webqa
- dureader
---

# albert-chinese-large-qa

ALBERT large QA model pretrained on the baidu webqa and baidu dureader datasets.

## Data source

+ baidu webqa 1.0
+ baidu dureader

## Training Method

We combined the two datasets together and created a new dataset in SQuAD format, including 705139 samples for training and 69638 samples for validation. We fine-tuned the model based on the ALBERT Chinese large model.

## Hyperparams

+ learning_rate 1e-5
+ max_seq_length 512
+ max_query_length 50
+ max_answer_length 300
+ doc_stride 256
+ num_train_epochs 2
+ warmup_steps 1000
+ per_gpu_train_batch_size 8
+ gradient_accumulation_steps 3
+ n_gpu 2 (Nvidia Tesla P100)

## Usage

```
from transformers import AutoModelForQuestionAnswering, BertTokenizer

model = AutoModelForQuestionAnswering.from_pretrained('wptoux/albert-chinese-large-qa')
tokenizer = BertTokenizer.from_pretrained('wptoux/albert-chinese-large-qa')
```

***Important: use BertTokenizer***

## More Info

Please visit https://github.com/wptoux/albert-chinese-large-webqa for details.
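Building on the loading snippet in the Usage section above, here is a hedged end-to-end sketch using the question-answering pipeline; the example question and context are illustrative only.

```python
from transformers import AutoModelForQuestionAnswering, BertTokenizer, pipeline

model = AutoModelForQuestionAnswering.from_pretrained('wptoux/albert-chinese-large-qa')
tokenizer = BertTokenizer.from_pretrained('wptoux/albert-chinese-large-qa')

qa = pipeline('question-answering', model=model, tokenizer=tokenizer)
result = qa({
    'question': '北京是哪个国家的首都?',
    'context': '北京是中华人民共和国的首都,也是全国的政治和文化中心。'
})
print(result)  # {'answer': ..., 'score': ..., 'start': ..., 'end': ...}
```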
tennessejoyce/titlewave-t5-small
tennessejoyce
2021-03-09T04:03:11Z
9
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Titlewave: t5-small This is one of two models used in the Titlewave project. See https://github.com/tennessejoyce/TitleWave for more information. This model was fine-tuned on a dataset of Stack Overflow posts, with a ConditionalGeneration head that summarizes the body of a question in order to suggest a title.
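A minimal, hedged generation sketch for suggesting a title from a question body. Whether the checkpoint expects a T5-style task prefix is not stated in this card, so the plain-body input and generation settings below are assumptions; see the linked repository for the exact pipeline.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "tennessejoyce/titlewave-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Placeholder Stack Overflow question body.
body = ("I keep getting a KeyError when selecting a column from a pandas DataFrame, "
        "even though df.columns shows the column name. What could cause this?")
inputs = tokenizer(body, return_tensors="pt", truncation=True, max_length=512)
title_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```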
Jade/bert_base_law
Jade
2021-03-08T06:59:50Z
0
0
null
[ "NLP", "LAW", "dataset:WIP", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: "zh_CN" thumbnail: "url to a thumbnail used in social sharing" tags: - NLP - LAW license: "MIT" datasets: - WIP metrics: - WIP ---
uasoyasser/eefdfgdg
uasoyasser
2021-03-05T15:37:12Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://teacher.desmos.com/activitybuilder/teacherguide/604249659240440d25a27d0c https://teacher.desmos.com/activitybuilder/teacherguide/604249a365ecd40d30b4ad18 https://teacher.desmos.com/activitybuilder/teacherguide/604249e2cfb0a20d51e13768 https://teacher.desmos.com/activitybuilder/teacherguide/60424a1c9240440d25a27e22 https://teacher.desmos.com/activitybuilder/teacherguide/60424a58cefbd00d5da96390 https://teacher.desmos.com/activitybuilder/teacherguide/60424a90229a7d0cfb807295 https://teacher.desmos.com/activitybuilder/teacherguide/60424ad532e0730c4bdcbbab https://teacher.desmos.com/activitybuilder/teacherguide/60424b0f1d780b0b7395f36d https://teacher.desmos.com/activitybuilder/teacherguide/60424c01534b110d262d4d46 https://teacher.desmos.com/activitybuilder/teacherguide/60424c47969a440d13c62ffb https://teacher.desmos.com/activitybuilder/teacherguide/60424cd7f17f6b0d4550c269 https://teacher.desmos.com/activitybuilder/teacherguide/60424d0dcfb0a20d51e13c97 https://teacher.desmos.com/activitybuilder/teacherguide/60424d5796540a0cf95ff215 https://teacher.desmos.com/activitybuilder/teacherguide/60424d9163a2220bc4c8f2be https://teacher.desmos.com/activitybuilder/teacherguide/60424e030d98a80d53856ab2 https://teacher.desmos.com/activitybuilder/teacherguide/60424e37ed488c0cfbbaab2f
yhavinga/mt5-base-cnn-nl
yhavinga
2021-03-05T07:48:08Z
8
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "dataset:cnn_dm_nl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
---
tags:
- summarization
language:
- dutch
datasets:
- cnn_dm_nl
widget:
- text: "(CNN) Skywatchers in West-Noord-Amerika zijn in voor een traktatie: een bijna vijf minuten totale maansverduistering vanmorgen. Hier is hoe het zich ontvouwt:. Het begon om 3:16 a.m. Pacific Daylight Tijd, toen de maan begon te bewegen in de schaduw van de Aarde. Voor het volgende uur en 45 minuten, die schaduw zal bewegen over de maan en verzwolgen het om 4:58 a.m. Pacific Time. De totale verduistering zal slechts vier minuten en 43 seconden duren, en NASA zegt dat maakt het de kortste van de eeuw. Kijken live op NASA TV. Terwijl mensen ten westen van de Mississippi River zal het beste uitzicht hebben, ten minste een gedeeltelijke verduistering zal zichtbaar zijn over de hele natie. Maar zonsopgang zal de show te onderbreken op de Oostkust. Delen van Zuid-Amerika, India, China en China Een maansverduistering gebeurt wanneer de zon, de aarde en de maan een rechte lijn vormen in de ruimte, met de aarde in het midden. De zon schijnt op de Aarde en creëert een schaduw. Als de maan dieper in die schaduw beweegt, lijkt het donker te worden en lijkt zelfs een roodachtige kleur te zijn. Waarom rood? Omdat de atmosfeer van de Aarde het grootste deel van het blauwe licht filtert. Sommige mensen hebben het effect van de \"bloedmaan\" bijgenaamd. NASA zegt dat maansverduisteringen meestal ten minste twee keer per jaar plaatsvinden, maar deze verduistering is de derde in een reeks van vier op een rij, bekend als een \"tetrad.\" De eerste was op 15 april 2014. De tweede was in september 2014, de volgende is zaterdag en er zal er een meer zijn, op 28 september. Als je meer wilt weten over de verduistering, NASA astronoom Mitzi Adam. Deel uw foto's met CNN iReport."
- text: "(CNN) Filipino's worden gewaarschuwd om op wacht te staan voor flash overstromingen en aardverschuivingen als tropische storm Maysak benaderde de Aziatische eiland natie zaterdag. Slechts een paar dagen geleden, Maysak kreeg super tyfoon status dankzij zijn aanhoudende 150 km/h winden. Het heeft sindsdien verloren veel stoom als het naar het westen in de Stille Oceaan heeft gedraaid. Het is nu geclassificeerd als een tropische storm, volgens de Filipijnse nationale weerdienst, die noemt het een andere naam, Chedeng. Het heeft stabiele winden van meer dan 70 km/h (115 km/h) en gusts tot 90 km/h vanaf 17.00 uur (5 uur ET) Zaterdag. Toch, dat betekent niet dat Maysak zal geen pak een wallop. Autoriteiten nam preventieve stappen om mensen veilig te houden zoals barring outdoor activiteiten zoals zwemmen, surfen, di. Gabriel Llave, een ramp ambtenaar, vertelde PNA dat toeristen die aankomen zaterdag in en rond de kustplaats van Aurora \"zal niet worden geaccepteerd door de eigenaren van hotels, resorts, herbergen en dergelijke... en zal worden geadviseerd om terug te keren naar hun respectievelijke plaatsen.\" Aldczar Aurelio, een meteoroloog met de Filippijnse Atmosferische, Geofysische en Astronomische Diensten Administratie (PAGASA), zei dat de storm was gecentreerd 200 mijl ten zuidwesten van de provincie Aurora vanaf 5 uur (5 uur ET) en richting het westen op een 12.5 mph clip. Het is verwacht dat landval zondagochtend maken op de zuidoostelijke kust van de provincie Isabela en zijn uit de Filippijnen tegen maandag. Ahead van de storm. Isabela Gov. Faustino Dry III waarschuwde zaterdag dat bewoners moet handelen als deze zal maken landfall zondagochtend op de zuidoostelijke kust van de provincie Isabela en zijn uit de Filippijnen voor maandag."
---

# mt5-base-cnn-nl

mt5-base fine-tuned on CNN/DailyMail translated to Dutch (nl).

* Learning rate 1e-3
* Trained for 1 epoch
* Max source length 1024
* Max target length 142
* rouge1 31.1766
* rouge2 8.4538
* rougeL 17.8674
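A minimal inference sketch for such a checkpoint, assuming it loads with the standard MT5 classes in `transformers`; the repository id below is a placeholder (the real hub path is not shown here), and the length limits mirror the training settings above.

```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

# Placeholder repository id -- substitute the actual hub path of this checkpoint.
model_name = "your-namespace/mt5-base-cnn-nl"

tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

article = "(CNN) Skywatchers in West-Noord-Amerika zijn in voor een traktatie ..."

# Truncate to the max source length used during fine-tuning (1024).
inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")

# Generate up to the max target length used during fine-tuning (142).
summary_ids = model.generate(**inputs, max_length=142, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```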
tiedeman/opus-mt-en-he
tiedeman
2021-03-04T17:50:20Z
15
0
transformers
[ "transformers", "pytorch", "rust", "marian", "text2text-generation", "translation", "en", "he", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
language:
- en
- he
tags:
- translation
license: apache-2.0
---

### en-he

* source group: English
* target group: Hebrew
* OPUS readme: [eng-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md)
* model: transformer
* source language(s): eng
* target language(s): heb
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.eval.txt)

## Benchmarks

| testset              | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.eng.heb | 37.9 | 0.602 |

### System Info:
- hf_name: en-he
- source_languages: eng
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'he']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.test.txt
- src_alpha3: eng
- tgt_alpha3: heb
- chrF2_score: 0.602
- bleu: 37.9
- brevity_penalty: 1.0
- ref_len: 60359.0
- src_name: English
- tgt_name: Hebrew
- train_date: 2020-10-04 00:00:00
- src_alpha2: en
- tgt_alpha2: he
- prefer_old: False
- short_pair: en-he
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: d99ed7ad618037ae878f0758157ed0764bd7f935
- port_machine: LM0-400-22516.local
- port_time: 2020-10-15-16:31
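A minimal inference sketch, assuming the checkpoint loads with the standard Marian classes in `transformers` (the repository tags list `marian` and `translation`); the example sentence and generation settings are illustrative only.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "tiedeman/opus-mt-en-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Illustrative English input; the model translates English -> Hebrew.
src_texts = ["How are you today?"]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)

translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```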
hfl/chinese-electra-large-discriminator
hfl
2021-03-03T01:42:48Z
10
1
transformers
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
---

**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
  title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
  author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
  pages = "657--668",
}
```
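Following the note above, a minimal sketch of loading the discriminator with `ElectraForPreTraining`; the example sentence and the thresholding of the logits are illustrative assumptions, not part of the original release.

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

model_name = "hfl/chinese-electra-large-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)
discriminator = ElectraForPreTraining.from_pretrained(model_name)

# Illustrative sentence; the discriminator scores each token as original vs. replaced.
sentence = "我喜欢自然语言处理"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits

# Positive logits indicate tokens the discriminator flags as "replaced".
predictions = (torch.sign(logits) + 1) / 2
print(list(zip(tokenizer.tokenize(sentence), predictions[0][1:-1].tolist())))
```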
hfl/chinese-electra-base-discriminator
hfl
2021-03-03T01:40:07Z
245
9
transformers
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
---

**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
  title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
  author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
  pages = "657--668",
}
```
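Beyond re-training, the discriminator is also commonly used as a plain encoder for downstream fine-tuning; below is a sketch of extracting contextual features with `ElectraModel`. The example sentence and the mean-pooling step are illustrative assumptions, not part of the original recipe.

```python
import torch
from transformers import ElectraModel, ElectraTokenizerFast

model_name = "hfl/chinese-electra-base-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)
encoder = ElectraModel.from_pretrained(model_name)

inputs = tokenizer("哈工大讯飞联合实验室", return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

# last_hidden_state: (batch, seq_len, hidden_size); mean-pool as a simple sentence feature.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)
```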