| column | type | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-11 12:33:28 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (555 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-11 12:33:10 |
| card | string (length) | 11 | 1.01M |
tftransformers/gpt2
tftransformers
2021-10-24T08:41:46Z
1
0
transformers
[ "transformers", "exbert", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: - exbert license: mit --- # GPT-2 Pretrained model on English text using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content in this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no humans labelling it in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from that text. More precisely, it was trained to guess the next word in sentences: the inputs are sequences of continuous text of a certain length and the targets are the same sequences, shifted one token (word or piece of a word) to the right. Internally, the model uses a masking mechanism to make sure the prediction for token `i` only uses the inputs from `1` to `i` and not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is nonetheless best at what it was pretrained for, which is generating text from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use The snippet below loads the model in tf_transformers and runs it on a piece of text to extract features (a text-generation example with the standard transformers pipeline is sketched right after this card): ```python from tf_transformers.models import GPT2Model from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained("gpt2") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] outputs_tf = model(inputs_tf) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes.
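The card above mentions a text-generation pipeline with a fixed seed, while its snippet only extracts features. A minimal sketch of that usage with the upstream `gpt2` checkpoint in the standard `transformers` library (not tf_transformers) could look like this:

```python
from transformers import pipeline, set_seed

# Text generation with the upstream gpt2 checkpoint; the fixed seed makes
# the sampled continuations reproducible across runs.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)
print(generator("Hello, I'm a language model,", max_length=30, num_return_sequences=3))
```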
## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. ## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
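As a quick illustration of the byte-level BPE preprocessing described above, here is a minimal sketch using the standard `transformers` tokenizer for the upstream `gpt2` checkpoint (the example text is arbitrary):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(tokenizer.vocab_size)  # 50257, as stated in the card

encoded = tokenizer("Hello world, this is byte-level BPE.")
print(encoded["input_ids"])                                   # token ids
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))  # subword pieces
```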
tftransformers/albert-xxlarge-v1
tftransformers
2021-10-24T08:38:28Z
1
0
transformers
[ "transformers", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT XXLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the first version of the xxlarge model. Version 2 differs from version 1 due to different dropout rates, additional training data, and longer training, and has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 4096 hidden dimension - 64 attention heads - 223M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2.
### How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1') model = AlbertModel.from_pretrained("albert-xxlarge-v1") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` ## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
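The card above says the model can be used directly with a masked-language-modeling pipeline but only shows feature extraction in tf_transformers. A minimal sketch of the fill-mask usage with the upstream `albert-xxlarge-v1` checkpoint in the standard `transformers` library might look like this:

```python
from transformers import pipeline

# ALBERT uses "[MASK]" as its mask token.
unmasker = pipeline("fill-mask", model="albert-xxlarge-v1")
print(unmasker("Hello I'm a [MASK] model."))
```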
tftransformers/albert-xlarge-v1
tftransformers
2021-10-24T08:37:26Z
3
0
transformers
[ "transformers", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT XLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the first version of the xlarge model. Version 2 differs from version 1 due to different dropout rates, additional training data, and longer training, and has better results in nearly all downstream tasks. This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 2048 hidden dimension - 16 attention heads - 58M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2.
### How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1') model = AlbertModel.from_pretrained("albert-xlarge-v1") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` ## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
tftransformers/albert-base-v2
tftransformers
2021-10-24T08:36:40Z
3
0
transformers
[ "transformers", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT Base v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2.
### How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') model = AlbertModel.from_pretrained("albert-base-v2") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` ## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
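As a rough illustration of the 80/10/10 masking procedure described in the training section above, here is a toy sketch over token ids (an illustrative reimplementation, not the actual ALBERT training code):

```python
import random

def mask_tokens(token_ids, vocab_size, mask_id, mask_prob=0.15):
    """Toy version of the BERT/ALBERT masking scheme: ~15% of tokens are
    selected; of those, 80% become [MASK], 10% a random token, 10% unchanged."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 = position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok          # the model must predict the original token
            r = random.random()
            if r < 0.8:
                masked[i] = mask_id
            elif r < 0.9:
                masked[i] = random.randrange(vocab_size)
            # else: keep the original token unchanged
    return masked, labels
```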
tftransformers/bart-large
tftransformers
2021-10-24T08:24:25Z
2
0
transformers
[ "transformers", "en", "arxiv:1910.13461", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 language: en --- # BART (large-sized model) BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in tf_transformers (the decoder prompt below is just an example): ```python from tf_transformers.models import BartModel from transformers import BartTokenizer tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') model = BartModel.from_pretrained('facebook/bart-large') inputs_tf = {} inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") decoder_inputs = tokenizer("Hello, my dog is", return_tensors="tf") inputs_tf["encoder_input_ids"] = inputs["input_ids"] inputs_tf["encoder_input_mask"] = inputs["attention_mask"] inputs_tf["decoder_input_ids"] = decoder_inputs["input_ids"] outputs_tf = model(inputs_tf) ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
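Since the card notes that BART is particularly effective when fine-tuned for summarization, a minimal sketch using a publicly available fine-tuned checkpoint (`facebook/bart-large-cnn`) with the standard `transformers` pipeline might look like this:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "BART is pre-trained by corrupting text with an arbitrary noising function "
    "and learning a model to reconstruct the original text. It is particularly "
    "effective when fine-tuned for text generation such as summarization."
)
print(summarizer(article, max_length=40, min_length=10, do_sample=False))
```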
tftransformers/mt5-small
tftransformers
2021-10-24T08:18:10Z
4
0
transformers
[ "transformers", "multilingual", "dataset:mc4", "arxiv:2010.11934", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: multilingual datasets: - mc4 license: apache-2.0 --- [Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available. ## Usage ``` from tf_transformers.models import MT5Model # Any MT5 model (mt5-small, mt5-base etc) model_name = 'mt5-small' model = MT5Model.from_pretrained(model_name) ```
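The usage snippet above loads only the model; in practice you also need the matching SentencePiece tokenizer. A minimal sketch pairing the tf_transformers model with the Hugging Face tokenizer for the upstream `google/mt5-small` checkpoint (assumed to share the same vocabulary):

```python
from tf_transformers.models import MT5Model
from transformers import AutoTokenizer

# Any MT5 model (mt5-small, mt5-base etc), as in the card's own snippet.
model_name = "mt5-small"
model = MT5Model.from_pretrained(model_name)

# The SentencePiece vocabulary comes from the upstream checkpoint on the Hub.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
print(tokenizer("Ceci est une phrase multilingue.", return_tensors="tf")["input_ids"])
```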
tftransformers/t5-base
tftransformers
2021-10-24T08:16:17Z
3
0
transformers
[ "transformers", "summarization", "translation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1910.10683", "license:apache-2.0", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr - ro - de datasets: - c4 tags: - summarization - translation license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Other Community Checkpoints: [here](https://huggingface.co/models?search=t5) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Abstract Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) ## Usage ``` from tf_transformers.models import T5Model # Any T5 model (t5-small, t5-base, t5-large etc) model_name = 't5-small' model = T5Model.from_pretrained(model_name) ```
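Because T5 casts every task into a text-to-text format, the task is selected with a textual prefix. A minimal sketch of translation with the upstream `t5-base` checkpoint in the standard `transformers` library (not tf_transformers) might look like this:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# The "translate English to German:" prefix tells T5 which task to perform.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```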
tftransformers/t5-large
tftransformers
2021-10-24T08:15:07Z
2
0
transformers
[ "transformers", "summarization", "translation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1910.10683", "license:apache-2.0", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr - ro - de datasets: - c4 tags: - summarization - translation license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Other Community Checkpoints: [here](https://huggingface.co/models?search=t5) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Abstract Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) ## Usage ``` from tf_transformers.models import T5Model # Any T5 model (t5-small, t5-base, t5-large etc) model_name = 't5-small' model = T5Model.from_pretrained(model_name) ```
mathew/layoutlmv2-finetuned-funsd-1024
mathew
2021-10-24T06:13:48Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-finetuned-funsd-1024 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-funsd-1024 This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.0+cu101 - Datasets 1.14.0 - Tokenizers 0.10.3
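The hyperparameters listed above map roughly onto the `transformers` Trainer configuration. A hedged sketch of that mapping (argument names taken from the standard `TrainingArguments` API, not from the original training script):

```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="layoutlmv2-finetuned-funsd-1024",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=1000,
    fp16=True,  # Native AMP mixed-precision training
)
```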
aditeyabaral/sentencetransformer-xlm-roberta-base
aditeyabaral
2021-10-24T04:56:00Z
49
1
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-xlm-roberta-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-xlm-roberta-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-xlm-roberta-base') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-xlm-roberta-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-xlm-roberta-base) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 9234 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
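A hedged sketch of how the listed parameters translate into a `sentence-transformers` `fit()` call; the training examples below are placeholders, since the actual dataset behind the 9234-batch DataLoader is not described in the card:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("xlm-roberta-base")

# Placeholder pairs with similarity labels; replace with the real training data.
train_examples = [
    InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.8),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```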
huggingartists/sqwore
huggingartists
2021-10-24T04:23:45Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/sqwore", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/sqwore tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/3557a234d4c5912569afbea078a23eff.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sqwore</div> <a href="https://genius.com/artists/sqwore"> <div style="text-align: center; font-size: 14px;">@sqwore</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Sqwore. The dataset is available [here](https://huggingface.co/datasets/huggingartists/sqwore) and can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/sqwore") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3gzd5crq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Sqwore's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/vzeft23g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/vzeft23g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/sqwore') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/sqwore") model = AutoModelWithLMHead.from_pretrained("huggingartists/sqwore") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingtweets/praisegodbarbon
huggingtweets
2021-10-24T03:47:17Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/praisegodbarbon/1635047234116/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1381764452098437120/74IgKP07_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Boston Psychology PhD</div> <div style="text-align: center; font-size: 14px;">@praisegodbarbon</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Boston Psychology PhD. | Data | Boston Psychology PhD | | --- | --- | | Tweets downloaded | 3212 | | Retweets | 810 | | Short tweets | 265 | | Tweets kept | 2137 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h4r5tyq8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @praisegodbarbon's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o2225sd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o2225sd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/praisegodbarbon') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
espnet/kan-bayashi_ljspeech_joint_finetune_conformer_fastspeech2_hifigan
espnet
2021-10-23T20:55:12Z
17
16
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - ljspeech license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/ljspeech_joint_finetune_conformer_fastspeech2_hifigan` ♻️ Imported from https://zenodo.org/record/5498896/ This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
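The demo section above is still a placeholder ("coming soon"). A hedged sketch of how ESPnet2 TTS checkpoints on the Hub are typically loaded, assuming this repository follows the standard ESPnet2 packaging and that `espnet` plus `espnet_model_zoo` are installed:

```python
# pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Downloads and unpacks the packed model from the Hub (assumed standard layout).
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_ljspeech_joint_finetune_conformer_fastspeech2_hifigan"
)
output = text2speech("Hello, this is a test of the fine-tuned FastSpeech2 and HiFi-GAN model.")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```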
espnet/kan-bayashi_ljspeech_joint_train_conformer_fastspeech2_hifigan
espnet
2021-10-23T20:54:48Z
3
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - ljspeech license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/ljspeech_joint_train_conformer_fastspeech2_hifigan` ♻️ Imported from https://zenodo.org/record/5498487/ This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_-truncated-737899
espnet
2021-10-23T20:54:27Z
2
1
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - ljspeech license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5498896/ This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_libritts_xvector_vits
espnet
2021-10-23T20:52:03Z
3
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:libritts", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - libritts license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/libritts_xvector_vits` ♻️ Imported from https://zenodo.org/record/5521416/ This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_jsut_full_band_vits_prosody
espnet
2021-10-23T20:47:17Z
11
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_full_band_vits_prosody` ♻️ Imported from https://zenodo.org/record/5521340/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
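Pending the official demo, a minimal untested sketch of Japanese synthesis with this full-band JSUT VITS model could look like the following; `pyopenjtalk` is assumed to be installed so the prosody-aware g2p bundled with the recipe can run.

```python
# Untested sketch: Japanese full-band VITS inference (assumes espnet, espnet_model_zoo and pyopenjtalk).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_full_band_vits_prosody")
out = tts("こんにちは、音声合成のテストです。")  # raw Japanese text; g2p is handled by the bundled preprocessor
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```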
espnet/kan-bayashi_jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave
espnet
2021-10-23T20:44:44Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5521354/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
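The same pattern as the previous JSUT entry applies to this checkpoint; only the model tag changes in the untested sketch below.

```python
# Untested sketch (assumes espnet, espnet_model_zoo and pyopenjtalk are installed).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave"
)
out = tts("本日は晴天なり。")
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```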
espnet/kan-bayashi_vctk_full_band_multi_spk_vits
espnet
2021-10-23T20:44:14Z
0
1
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:vctk", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - vctk license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/vctk_full_band_multi_spk_vits` ♻️ Imported from https://zenodo.org/record/5521431/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
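VCTK is a multi-speaker corpus, so this model expects a speaker index in addition to the text. The sketch below is untested; the speaker id value is arbitrary and simply indexes the training speaker list.

```python
# Untested sketch: multi-speaker VITS selecting the voice by integer speaker id (assumes espnet + espnet_model_zoo).
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_vctk_full_band_multi_spk_vits")
out = tts("The quick brown fox jumps over the lazy dog.", sids=np.array([5]))  # arbitrary speaker index
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```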
espnet/kan-bayashi_vctk_multi_spk_vits
espnet
2021-10-23T20:42:58Z
2
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:vctk", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - vctk license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/vctk_multi_spk_vits` ♻️ Imported from https://zenodo.org/record/5500759/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
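As with the full-band variant above, synthesis with this checkpoint needs a speaker id; a minimal untested sketch:

```python
# Untested sketch (assumes espnet + espnet_model_zoo are installed).
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_vctk_multi_spk_vits")
out = tts("Please call Stella and ask her to bring these things from the store.", sids=np.array([10]))
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```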
espnet/kan-bayashi_vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave
espnet
2021-10-23T20:32:45Z
1
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:vctk", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - vctk license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5500759/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_jsut_conformer_fastspeech2_transformer_prosody
espnet
2021-10-23T20:32:15Z
4
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_conformer_fastspeech2_transformer_prosody` ♻️ Imported from https://zenodo.org/record/5499066/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
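Unlike the VITS entries, Conformer-FastSpeech2 only predicts mel-spectrogram features, so a neural vocoder has to be attached at load time. The sketch below is untested, and the vocoder tag is a placeholder assumption rather than something taken from this card.

```python
# Untested sketch: acoustic model plus external vocoder (assumes espnet, espnet_model_zoo, parallel_wavegan, pyopenjtalk).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_conformer_fastspeech2_transformer_prosody",
    vocoder_tag="parallel_wavegan/jsut_parallel_wavegan.v1",  # placeholder vocoder name, adjust to an available one
)
out = tts("音声合成のデモ文です。")
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```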
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-f43d8f
espnet
2021-10-23T20:31:48Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave` ♻️ Imported from https://zenodo.org/record/5499066/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_jsut_conformer_fastspeech2_tacotron2_prosody
espnet
2021-10-23T20:31:24Z
3
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_conformer_fastspeech2_tacotron2_prosody` ♻️ Imported from https://zenodo.org/record/5499050/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
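This is the same situation as the Transformer-teacher variant above: the model stops at mel features, so a vocoder is paired with it in the untested sketch below (the vocoder tag is again a placeholder).

```python
# Untested sketch (assumes espnet, espnet_model_zoo, parallel_wavegan and pyopenjtalk are installed).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_conformer_fastspeech2_tacotron2_prosody",
    vocoder_tag="parallel_wavegan/jsut_parallel_wavegan.v1",  # placeholder vocoder name
)
out = tts("これは音声合成のテストです。")
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```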
espnet/kan-bayashi_jsut_transformer_prosody
espnet
2021-10-23T20:30:42Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_transformer_prosody` ♻️ Imported from https://zenodo.org/record/5499040/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
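The autoregressive Transformer-TTS checkpoint also produces mel features only; an untested sketch with a placeholder vocoder tag:

```python
# Untested sketch (assumes espnet, espnet_model_zoo, parallel_wavegan and pyopenjtalk are installed).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_transformer_prosody",
    vocoder_tag="parallel_wavegan/jsut_parallel_wavegan.v1",  # placeholder vocoder name
)
out = tts("今日はいい天気ですね。")
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```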
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave
espnet
2021-10-23T20:30:29Z
1
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave` ♻️ Imported from https://zenodo.org/record/5499040/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave
espnet
2021-10-23T20:27:27Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - ljspeech license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5443814/ This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
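VITS is end-to-end (text to waveform), so no external vocoder is needed; a minimal untested sketch for this LJSpeech checkpoint:

```python
# Untested sketch (assumes espnet + espnet_model_zoo are installed).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave"
)
out = tts("End to end speech synthesis with VITS trained on LJSpeech.")
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```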
espnet/kan-bayashi_jvs_jvs010_vits_accent_with_pause
espnet
2021-10-23T20:26:30Z
1
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jvs license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jvs_jvs010_vits_accent_with_pause` ♻️ Imported from https://zenodo.org/record/5432566/ This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
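A minimal untested sketch for this single-speaker JVS checkpoint (adapted to speaker jvs010); pyopenjtalk is assumed for the accent-with-pause g2p.

```python
# Untested sketch (assumes espnet, espnet_model_zoo and pyopenjtalk are installed).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_jvs_jvs010_vits_accent_with_pause")
out = tts("これはJVSコーパスの話者に適応したモデルです。")
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```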
espnet/kan-bayashi_jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjta-truncated-d57a28
espnet
2021-10-23T20:25:39Z
1
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jvs license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest` ♻️ Imported from https://zenodo.org/record/5432566/ This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_jsut_vits_accent_with_pause
espnet
2021-10-23T20:23:56Z
0
3
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_vits_accent_with_pause` ♻️ Imported from https://zenodo.org/record/5414980/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
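An untested sketch for this JSUT VITS (accent-with-pause) checkpoint, following the same pattern as the other single-speaker Japanese entries:

```python
# Untested sketch (assumes espnet, espnet_model_zoo and pyopenjtalk are installed).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_vits_accent_with_pause")
out = tts("アクセントとポーズを考慮した音声合成のテストです。")
sf.write("out.wav", out["wav"].numpy(), tts.fs, "PCM_16")
```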
espnet/kan-bayashi_jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_a-truncated-d7d5d0
espnet
2021-10-23T20:23:41Z
3
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5431984/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
huggingtweets/islamocommunism
huggingtweets
2021-10-23T18:38:04Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/islamocommunism/1635014280450/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1448436144388009985/zWh5cSQ3_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">نورهان</div> <div style="text-align: center; font-size: 14px;">@islamocommunism</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from نورهان. | Data | نورهان | | --- | --- | | Tweets downloaded | 3196 | | Retweets | 1205 | | Short tweets | 227 | | Tweets kept | 1764 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2l8ikj22/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @islamocommunism's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kngkxcq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kngkxcq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/islamocommunism') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
tiennvcs/bert-large-uncased-finetuned-docvqa
tiennvcs
2021-10-23T17:43:43Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-uncased-finetuned-docvqa results: - task: name: Question Answering type: question-answering --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-docvqa This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 250500 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.5228 | 0.05 | 1000 | 2.6645 | | 2.4909 | 0.1 | 2000 | 2.8985 | | 2.1679 | 0.16 | 3000 | 2.3551 | | 1.9451 | 0.21 | 4000 | 2.2226 | | 1.6814 | 0.26 | 5000 | 2.1590 | | 1.8868 | 0.31 | 6000 | 2.6197 | | 1.6618 | 0.36 | 7000 | 2.3632 | | 1.8313 | 0.41 | 8000 | 2.4519 | | 1.7017 | 0.47 | 9000 | 2.2682 | | 1.8169 | 0.52 | 10000 | 2.4486 | | 1.7074 | 0.57 | 11000 | 2.3862 | | 1.7674 | 0.62 | 12000 | 2.1801 | | 1.8134 | 0.67 | 13000 | 2.3032 | | 1.8334 | 0.73 | 14000 | 2.4205 | | 1.6819 | 0.78 | 15000 | 2.2398 | | 1.5846 | 0.83 | 16000 | 2.3834 | | 1.6758 | 0.88 | 17000 | 1.9683 | | 1.6303 | 0.93 | 18000 | 2.3297 | | 1.5652 | 0.98 | 19000 | 2.0581 | | 1.3045 | 1.04 | 20000 | 2.4950 | | 1.2393 | 1.09 | 21000 | 2.6622 | | 1.1526 | 1.14 | 22000 | 2.3749 | | 1.2631 | 1.19 | 23000 | 2.3915 | | 1.1846 | 1.24 | 24000 | 2.2592 | | 1.2731 | 1.3 | 25000 | 2.4239 | | 1.3057 | 1.35 | 26000 | 2.2920 | | 1.134 | 1.4 | 27000 | 2.3107 | | 1.2017 | 1.45 | 28000 | 2.4271 | | 1.2202 | 1.5 | 29000 | 2.1814 | | 1.2179 | 1.56 | 30000 | 2.3365 | | 1.2359 | 1.61 | 31000 | 2.1256 | | 1.1964 | 1.66 | 32000 | 2.1720 | | 1.269 | 1.71 | 33000 | 2.4363 | | 1.1812 | 1.76 | 34000 | 2.2372 | | 1.2187 | 1.81 | 35000 | 2.2318 | | 1.1805 | 1.87 | 36000 | 2.3693 | | 1.1458 | 1.92 | 37000 | 2.5128 | | 1.1958 | 1.97 | 38000 | 2.1311 | | 0.8924 | 2.02 | 39000 | 2.4635 | | 0.869 | 2.07 | 40000 | 2.8231 | | 0.8333 | 2.13 | 41000 | 2.6762 | | 0.9194 | 2.18 | 42000 | 2.4588 | | 0.8089 | 2.23 | 43000 | 2.6443 | | 0.8612 | 2.28 | 44000 | 2.4300 | | 0.7981 | 2.33 | 45000 | 2.7418 | | 0.9765 | 2.38 | 46000 | 2.6543 | | 0.8646 | 2.44 | 47000 | 2.5990 | | 1.0316 | 2.49 | 48000 | 2.4625 | | 0.9862 | 2.54 | 49000 | 2.4691 | | 1.027 | 2.59 | 50000 | 2.4156 | | 0.9412 | 2.64 | 51000 | 2.4204 | | 0.9353 | 2.7 | 52000 | 2.4933 | | 0.9509 | 2.75 | 53000 | 2.4708 | | 0.9351 | 2.8 | 54000 | 2.5351 | | 0.9968 | 2.85 | 55000 | 2.2506 | | 1.025 | 2.9 | 56000 | 2.6317 | | 1.627 | 2.95 | 57000 | 2.7843 | | 0.9294 | 3.01 | 58000 | 2.9396 | | 0.6043 | 3.06 | 59000 | 3.1560 | | 0.7903 | 3.11 | 60000 | 2.8330 | | 0.7373 | 3.16 | 61000 | 2.9422 | | 0.6499 | 3.21 | 62000 | 3.0948 | | 0.6411 | 3.27 | 63000 | 2.7900 | | 0.625 | 3.32 | 64000 | 2.5268 | | 0.6264 | 3.37 | 65000 | 2.8701 | | 0.6143 | 3.42 | 66000 | 3.2544 | | 0.6286 | 3.47 | 67000 | 2.6208 | | 0.739 | 3.53 | 68000 | 2.8107 | | 0.5981 | 3.58 | 
69000 | 2.8073 | | 0.6502 | 3.63 | 70000 | 2.6293 | | 0.6548 | 3.68 | 71000 | 2.9501 | | 0.7243 | 3.73 | 72000 | 2.7917 | | 0.598 | 3.78 | 73000 | 2.9341 | | 0.6159 | 3.84 | 74000 | 2.7629 | | 0.5905 | 3.89 | 75000 | 2.6441 | | 0.6393 | 3.94 | 76000 | 2.6660 | | 0.677 | 3.99 | 77000 | 2.7616 | | 0.3281 | 4.04 | 78000 | 3.6873 | | 0.4524 | 4.1 | 79000 | 3.3441 | | 0.3994 | 4.15 | 80000 | 3.3129 | | 0.4686 | 4.2 | 81000 | 3.1813 | | 0.5293 | 4.25 | 82000 | 2.9088 | | 0.3961 | 4.3 | 83000 | 3.0765 | | 0.4406 | 4.35 | 84000 | 3.1254 | | 0.401 | 4.41 | 85000 | 3.2415 | | 0.4594 | 4.46 | 86000 | 3.0691 | | 0.4523 | 4.51 | 87000 | 3.0493 | | 0.4719 | 4.56 | 88000 | 3.1352 | | 0.4895 | 4.61 | 89000 | 2.8991 | | 0.423 | 4.67 | 90000 | 3.1738 | | 0.3984 | 4.72 | 91000 | 3.1862 | | 0.4206 | 4.77 | 92000 | 3.1213 | | 0.4587 | 4.82 | 93000 | 3.0030 | | 0.381 | 4.87 | 94000 | 3.3218 | | 0.4138 | 4.92 | 95000 | 3.1529 | | 0.4003 | 4.98 | 96000 | 3.1375 | | 0.2098 | 5.03 | 97000 | 3.7443 | | 0.2334 | 5.08 | 98000 | 3.7359 | | 0.2534 | 5.13 | 99000 | 3.7814 | | 0.3067 | 5.18 | 100000 | 3.7128 | | 0.2363 | 5.24 | 101000 | 3.6091 | | 0.2652 | 5.29 | 102000 | 3.4015 | | 0.3311 | 5.34 | 103000 | 3.4793 | | 0.2344 | 5.39 | 104000 | 3.6792 | | 0.2741 | 5.44 | 105000 | 3.5385 | | 0.2896 | 5.5 | 106000 | 3.8118 | | 0.2071 | 5.55 | 107000 | 3.8690 | | 0.3023 | 5.6 | 108000 | 3.7087 | | 0.3299 | 5.65 | 109000 | 3.4925 | | 0.1943 | 5.7 | 110000 | 3.6739 | | 0.2488 | 5.75 | 111000 | 3.7614 | | 0.3138 | 5.81 | 112000 | 3.5156 | | 0.2555 | 5.86 | 113000 | 3.6056 | | 0.2918 | 5.91 | 114000 | 3.6533 | | 0.2751 | 5.96 | 115000 | 3.6367 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.8.0+cu101 - Datasets 1.11.0 - Tokenizers 0.10.3
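Since the card leaves the usage sections empty, here is a hedged, untested sketch of extractive question answering with this checkpoint. Note that this is a plain BERT QA head, so it operates on OCR'd document text only; the question, context, and expected answer below are invented for illustration.

```python
# Untested sketch: extractive QA over OCR'd document text with the fine-tuned BERT checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="tiennvcs/bert-large-uncased-finetuned-docvqa")
context = "Invoice number: 4721. Total amount due: 1,250 USD. Payment is due by 30 November 2021."  # made-up OCR text
print(qa(question="What is the total amount due?", context=context))
```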
2umm3r/distilbert-base-uncased-finetuned-cola
2umm3r
2021-10-23T11:46:51Z
21
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5155709926752544 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7816 - Matthews Correlation: 0.5156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5291 | 1.0 | 535 | 0.5027 | 0.4092 | | 0.3492 | 2.0 | 1070 | 0.5136 | 0.4939 | | 0.2416 | 3.0 | 1605 | 0.6390 | 0.5056 | | 0.1794 | 4.0 | 2140 | 0.7816 | 0.5156 | | 0.1302 | 5.0 | 2675 | 0.8836 | 0.5156 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
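The card's usage sections are placeholders, so the following is only an untested sketch of how the checkpoint is typically called for acceptability classification; the label names depend on the exported config (often LABEL_0/LABEL_1 rather than human-readable names).

```python
# Untested sketch: CoLA-style acceptability classification with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="2umm3r/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was written by the author."))   # expected: the 'acceptable' class
print(classifier("The book the author the wrote."))        # expected: the 'unacceptable' class
```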
nepalprabin/xlm-roberta-base-finetuned-marc-en
nepalprabin
2021-10-23T09:53:48Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.0442 - Mae: 0.5385 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0371 | 1.0 | 1105 | 1.0522 | 0.5256 | | 0.8925 | 2.0 | 2210 | 1.0442 | 0.5385 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
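As the usage sections are empty, here is an untested sketch of star-rating prediction with this checkpoint; the model was fine-tuned on the English portion of amazon_reviews_multi, and the exact label strings come from the exported config.

```python
# Untested sketch: predicting a review star rating with the fine-tuned XLM-RoBERTa checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="nepalprabin/xlm-roberta-base-finetuned-marc-en")
print(classifier("Great battery life, but the screen scratches far too easily."))
```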
stamas01/vgg19_skin_auto_encoder
stamas01
2021-10-23T06:04:31Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
A simple autoencoder built from VGG19, trained to reconstruct skin lesion images.
jx88/xlm-roberta-base-finetuned-marc-en-j-run
jx88
2021-10-23T03:13:16Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en-j-run results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en-j-run This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9189 - Mae: 0.4634 ## Model description Trained following the MLT Tokyo Transformers workshop run by huggingface. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.2327 | 1.0 | 235 | 1.0526 | 0.6341 | | 0.9943 | 2.0 | 470 | 0.9189 | 0.4634 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
tiennvcs/bert-base-uncased-finetuned-infovqa
tiennvcs
2021-10-23T00:21:16Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-infovqa results: - task: name: Question Answering type: question-answering --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-infovqa This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 250500 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2765 | 0.23 | 1000 | 3.0678 | | 2.9987 | 0.46 | 2000 | 2.9525 | | 2.826 | 0.69 | 3000 | 2.7870 | | 2.7084 | 0.93 | 4000 | 2.7051 | | 2.1286 | 1.16 | 5000 | 2.9286 | | 2.0009 | 1.39 | 6000 | 3.1037 | | 2.0323 | 1.62 | 7000 | 2.8567 | | 1.9905 | 1.85 | 8000 | 2.8276 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.8.0+cu101 - Datasets 1.11.0 - Tokenizers 0.10.3
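For completeness, an untested sketch of running this checkpoint directly with `AutoModelForQuestionAnswering`; the question and context are invented, and in practice the context would be OCR text extracted from an infographic.

```python
# Untested sketch: span extraction with the fine-tuned BERT QA head.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "tiennvcs/bert-base-uncased-finetuned-infovqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What is the reported growth rate?"  # invented example
context = "The infographic reports a growth rate of 12 percent between 2019 and 2020."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))  # predicted answer span
```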
espnet/sujay_catslu_map
espnet
2021-10-22T21:01:58Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "zh", "dataset:catslu", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: zh datasets: - catslu license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/sujay_catslu_map` This model was trained by Sujay S Kumar using catslu recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout e31965d55993766461f0964216a0bb9aea3cfb7a pip install -e . cd egs2/catslu/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/sujay_catslu_map ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Oct 3 12:53:16 EDT 2021` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.3a3` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `b41391336042a4876e30d9fe5c66afb4e4be404c` - Commit date: `Wed Sep 22 10:02:03 2021 -0400` ## asr_train_asr_smaller_aishell_xlsr_raw_zh_word ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/test|1577|11441|46.1|30.1|23.7|2.5|56.4|81.3| |inference_asr_model_valid.acc.ave_5best/valid|921|6438|49.4|29.2|21.4|2.7|53.4|79.2| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/test|1577|45924|74.4|13.0|12.5|3.2|28.8|81.3| |inference_asr_model_valid.acc.ave_5best/valid|921|26110|77.0|11.9|11.1|2.7|25.7|79.2| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr_smaller_aishell_xlsr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp_train_asr_smaller_aishell_xlsr/asr_train_asr_smaller_aishell_xlsr_raw_zh_word ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: 5 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/speech_shape - exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/text_shape.word valid_shape_file: - exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/speech_shape - exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/text_shape.word batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 
1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/text - text - text valid_data_path_and_name_and_type: - - dump/raw/valid/wav.scp - speech - sound - - dump/raw/valid/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0001 scheduler: warmuplr scheduler_conf: warmup_steps: 2500 token_list: - <blank> - <unk> - 航 - 导 - inform_操作_none - inform_终点名称_none - 去 - none_none_none - 我 - 到 - inform_poi名称_none - unknown - 要 - 市 - side - 一 - 个 - 路 - 区 - 第 - 大 - 县 - 你 - inform_序列号_none - 小 - 城 - 站 - 家 - 南 - 中 - 山 - 州 - 好 - 镇 - 场 - 的 - 院 - 西 - 店 - 东 - 车 - 阳 - 学 - 北 - 园 - dialect - 安 - 新 - 海 - 回 - 公 - 医 - 二 - 不 - 三 - 广 - 天 - 村 - 有 - 闭 - 开 - 酒 - 下 - 江 - 消 - 人 - 帮 - 金 - 是 - 取 - 花 - 近 - 政 - 民 - 口 - 十 - 里 - 河 - 府 - 请 - 关 - 国 - 了 - 华 - 那 - 高 - robot - 出 - 平 - 湖 - 在 - 省 - 定 - 号 - 门 - 想 - 街 - 四 - 道 - 水 - 龙 - 京 - 啊 - 地 - 行 - 么 - 五 - 都 - 桥 - 上 - 给 - 明 - 业 - 哪 - 附 - 八 - 宁 - 心 - 长 - 馆 - 百 - 这 - 汽 - 机 - 工 - 庄 - 方 - 商 - 司 - 石 - 确 - 兴 - 火 - 走 - 乡 - 万 - 通 - 加 - 银 - 青 - 发 - 校 - 速 - 交 - 退 - 德 - 际 - 电 - 楼 - 宾 - 找 - 苑 - 和 - 嗯 - 油 - 林 - 乐 - 景 - 打 - 达 - 来 - 七 - 川 - inform_请求类型_none - 最 - noise - 兰 - 湾 - 台 - 所 - 保 - 什 - 福 - 建 - 说 - 就 - 沙 - 页 - 宝 - 子 - 厂 - 科 - 尔 - 光 - inform_页码_none - 六 - 费 - 环 - 成 - 昌 - 吗 - 汉 - 白 - 黄 - 限 - 局 - 泉 - 怎 - 云 - 武 - 源 - 吃 - 前 - 点 - 收 - 物 - 滨 - 溪 - 马 - 贵 - 务 - 世 - 岛 - 没 - 生 - 常 - 理 - 会 - 们 - 重 - 浦 - 名 - 合 - 运 - 顺 - 美 - 儿 - 头 - 乌 - 设 - 厦 - 化 - 郑 - 时 - inform_poi目标_none - 现 - 农 - 港 - 泰 - 停 - 宜 - 昆 - 九 - 对 - 管 - 看 - 界 - 张 - 庆 - 文 - 博 - 嘉 - 零 - 苏 - 能 - 面 - 客 - 红 - 搜 - 远 - 古 - 津 - 始 - 王 - 呃 - 用 - 瑞 - 后 - 雅 - 带 - 流 - 木 - 之 - 汇 - 夏 - 他 - 还 - 清 - 临 - 服 - 渡 - 日 - 幺 - 济 - 田 - 锦 - 吉 - 呀 - 利 - 神 - 饭 - 香 - 太 - 双 - 永 - 图 - 洲 - 集 - 特 - 吧 - request_位置_none - 技 - 把 - 寺 - 爱 - 丰 - 春 - 盛 - 罗 - 队 - 也 - 亚 - 线 - 玉 - 哦 - 贸 - 果 - 连 - 正 - 结 - 与 - 米 - 鲁 - 警 - 信 - 捷 - 样 - 温 - 岭 - 丽 - 育 - 凤 - 位 - 听 - 动 - 可 - 原 - 年 - 经 - 纪 - 齐 - 索 - inform_对象_none - 义 - 多 - 叫 - 况 - 气 - 老 - 派 - 池 - 曲 - 营 - 返 - 置 - 品 - 程 - 同 - 辉 - 批 - 音 - 康 - 威 - 幼 - 斯 - 库 - 拉 - 星 - 团 - 风 - 岗 - 话 - 放 - 泽 - 晋 - 部 - 知 - 外 - 塔 - 沈 - 奇 - 卫 - 月 - 庭 - 眼 - 总 - 梅 - 房 - 千 - 哈 - 自 - 字 - 呢 - 豪 - 直 - 盘 - 屯 - 超 - 祥 - 佳 - 恒 - 过 - 以 - 两 - 蓝 - 修 - 入 - 松 - 铁 - 职 - 珠 - 凯 - 快 - 丹 - 体 - 书 - 游 - 转 - 莱 - 寨 - 克 - 当 - 李 - 钱 - s - 货 - 惠 - 格 - 岳 - 淮 - 束 - 社 - 莞 - 森 - 堵 - 内 - 蒙 - 分 - 柏 - 富 - 碧 - 凰 - 陵 - 桐 - 边 - 坡 - 胶 - 得 - 力 - 滚 - 喀 - 旗 - 料 - 歌 - 块 - 滩 - 查 - 虹 - 续 - 为 - 驾 - 许 - 峰 - 问 - 真 - 视 - 选 - 接 - 语 - 洪 - 众 - 全 - 徽 - 鄂 - 实 - 未 - 杭 - 尚 - 胜 - 塘 - 产 - 鱼 - 叉 - 岸 - 洛 - 随 - 哎 - 配 - 丁 - 继 - 迪 - 牛 - 坪 - 无 - 深 - 圳 - 韩 - 法 - 灵 - 迁 - 间 - 逼 - 步 - 咸 - 期 - 菜 - 紫 - 邢 - 赣 - 横 - 播 - 鼎 - 进 - 止 - 铜 - 便 - 鸡 - 巴 - 仁 - 财 - 佛 - 桂 - 官 - 英 - 绵 - 奥 - 矿 - 波 - 治 - 元 - 首 - 钟 - 计 - 飞 - 坊 - 阿 - 代 - 周 - 朝 - 固 - 错 - 向 - 潭 - 隆 - 装 - 纳 - 伊 - 将 - 军 - 师 - 途 - 影 - 怀 - 择 - 药 - 术 - 手 - 于 - 离 - 族 - 莲 - 布 - 呼 - 峡 - 迈 - 委 - 叮 - 咚 - 阴 - 宏 - 郡 - 健 - 本 - 洋 - 再 - 支 - 划 - 郊 - 绿 - 妈 - 旅 - 堰 - 肥 - 玛 - 左 - 网 - inform_途经点名称_none - 拜 - 材 - inform_终点修饰_none - 辽 - 煤 - 谢 - 则 - 土 - 草 - 埠 - 伦 - 堂 - 卡 - 肉 - 底 - 灯 - 树 - 寻 - 掉 - 展 - 庙 - 赵 - 余 - 见 - 望 - 故 - 事 - 相 - 杨 - inform_终点目标_none - 馨 - 税 - 属 - 资 - 井 - 艺 - 越 - 微 - 包 - 阜 - 记 - 窗 - 维 - 甲 - 鑫 - 休 - 啥 - 锡 - 渝 - 岩 - 彩 - 少 - 处 - 往 - 从 - 封 - 联 - 觉 - 验 - 容 - 萨 - 普 - 弄 - 干 - 强 - 鲜 - 柳 - 衡 - 规 - request_路况_none - 靖 - 沃 - 板 - 防 - 约 - 球 - 居 - 至 - 坝 - 翠 - 持 - 具 - 烟 - 榆 - 枫 - 照 - 意 - 目 - t - 凌 - 邦 - 报 - 码 - 轻 - 欣 - 复 - 买 - 玻 - 璃 - 住 - 恩 - 女 - 嘴 - 级 - 振 - 邵 - 浴 - 茂 - 黔 - 您 - 比 - 显 - 渭 - 钢 - 妇 - 易 - 党 - 版 - 介 - 姐 - 才 - 览 - k - 崇 - 桃 - 厅 - 虎 - 皮 - 仪 - 赤 - 寓 - 洞 - 绍 - 
饰 - 很 - 病 - 度 - 胡 - 像 - 邮 - 又 - 充 - 贤 - 御 - 然 - 潍 - 基 - 启 - 聊 - 驶 - inform_路线偏好_none - 澄 - 几 - 等 - 塑 - 监 - 办 - 沧 - 亭 - 观 - 螺 - 领 - 秀 - 咋 - 坨 - 奎 - 优 - 半 - 贡 - 唐 - 写 - 今 - 慢 - 傻 - 反 - 次 - 甘 - 肃 - 它 - 泗 - 贺 - 拍 - 咱 - 留 - ktv - 察 - 顶 - 啦 - 别 - 润 - 谷 - 仙 - 慧 - 朱 - 靠 - 座 - 锅 - 麦 - 雁 - 羊 - 共 - 邓 - 荣 - 食 - 陕 - 邑 - 右 - 铺 - 梁 - 宣 - 幸 - 哥 - 士 - 员 - 招 - 番 - 徐 - 检 - 巷 - 私 - 堡 - 跟 - 器 - 峪 - 立 - 氏 - 教 - 圣 - 购 - 印 - 黑 - 完 - 条 - 唉 - 燕 - 屿 - 闸 - 茶 - 任 - 种 - 蛋 - 荆 - 岔 - inform_value_none - 黎 - 奉 - 准 - 熟 - 薛 - 朔 - 范 - 械 - 菲 - 雪 - 腾 - 备 - 琼 - 尹 - 垣 - 吴 - 示 - 嫖 - 宫 - 冲 - 毛 - 绘 - 菏 - 嘞 - 浙 - 遵 - 各 - 饶 - 嗷 - 简 - 施 - 俱 - 岚 - 豆 - 栋 - 险 - 岘 - 滇 - 叶 - 卓 - 荔 - 刘 - 滕 - 系 - 统 - e - 做 - 巡 - 坐 - 研 - 究 - 盐 - 冀 - 象 - 斗 - 娄 - 先 - 陆 - deny_操作_none - 户 - 额 - 价 - 更 - 拆 - 溧 - 量 - 帝 - 断 - 态 - 智 - 蜀 - 庐 - 舟 - 摄 - 泡 - 洗 - 历 - 咖 - 啡 - 湘 - 甸 - 泾 - 卖 - 朗 - 芜 - 棠 - 凉 - 嵩 - 焦 - 让 - 夫 - 吐 - 童 - 薇 - 旺 - 浩 - 息 - 裕 - 禄 - 睡 - 狮 - 质 - 樱 - 递 - 鸣 - 句 - 韶 - 色 - 典 - 厉 - 测 - 应 - 尉 - 汤 - 己 - 宸 - 漳 - 证 - 沟 - 巩 - 扬 - 笨 - 旁 - 湟 - 主 - 浪 - 殡 - request_前方路况_none - 竹 - 列 - 季 - 唱 - 冠 - 泥 - 懂 - 秋 - 君 - 祁 - 声 - 拥 - 曹 - 嘛 - 静 - 嗨 - 起 - 刚 - 墨 - 宿 - 络 - 襄 - 葫 - 芦 - 漫 - 峨 - 需 - 眉 - 瓦 - 如 - 根 - 域 - 式 - 何 - 鞍 - 饺 - 票 - 冶 - 喷 - 映 - 组 - 昭 - 延 - 萌 - 角 - 解 - 玲 - 蟹 - 晃 - 瀑 - 纽 - 逸 - 些 - 猪 - 蹄 - 亲 - 野 - 蒋 - 喂 - 荷 - 窝 - 锁 - 试 - 桑 - 沥 - 非 - 制 - 督 - 贝 - 址 - 识 - 侬 - 烧 - 翡 - 堤 - 伟 - 驼 - 昊 - 牌 - 陶 - 室 - 轩 - 鹰 - 钉 - 空 - 着 - 蛳 - 已 - 砖 - 姓 - 顿 - 麓 - 亿 - 售 - 功 - 淄 - 澳 - 斜 - 击 - 活 - 缴 - 输 - 雍 - 鄄 - 降 - 革 - 恢 - 卸 - 承 - 箬 - 澧 - 栈 - 疗 - 传 - 媒 - 血 - 战 - 舞 - 姨 - 婆 - 辆 - 蚌 - 鹅 - 剧 - 湛 - 亳 - b - 敦 - 煌 - 迎 - 味 - 数 - 妞 - 嫂 - 厚 - hi - 邹 - 摁 - 榄 - 梨 - 亮 - 纺 - 婚 - 培 - 训 - inform_起点名称_none - 护 - 霍 - 升 - 考 - m - 呗 - 摩 - 送 - 段 - 悦 - 餐 - 早 - 议 - 互 - 助 - 抚 - 慈 - 按 - 调 - 杰 - 份 - 兵 - 粥 - 邻 - 墅 - 鬃 - 泳 - 朋 - 良 - 缘 - 鼓 - 赛 - 枝 - 藏 - 鸿 - 冷 - 匀 - 征 - 欢 - 闯 - 汝 - 讲 - 肤 - 响 - 浮 - 录 - 冰 - 圆 - 算 - 思 - 储 - 蓄 - 苗 - 聚 - 湿 - 肇 - 阆 - 拿 - 沣 - 渔 - 铝 - 植 - 托 - 盟 - 宇 - 但 - 渠 - 告 - 丘 - 拓 - 陇 - 鹤 - 操 - 珙 - deny_poi名称_none - 询 - 攀 - 寿 - 副 - 或 - 假 - 焰 - 夜 - 妓 - 而 - 漆 - 濮 - 胥 - 密 - 志 - 苹 - 彭 - 陪 - 添 - 满 - 章 - 骨 - 栖 - 呦 - 善 - 乖 - 姑 - 爷 - 鸟 - 璧 - 专 - 洧 - 依 - 仔 - 晨 - 沂 - 券 - 晓 - 压 - 涨 - 闻 - 男 - 诊 - 融 - 怡 - 蓬 - 廊 - 殖 - 益 - 必 - 靓 - 蒲 - beyond - i - love - you - 旋 - 尖 - 驿 - 貂 - 蝉 - 足 - 迹 - 翰 - 杏 - 牡 - 帅 - 雨 - 呈 - 迷 - 哟 - 召 - 娼 - 辛 - 顾 - 殷 - 闵 - 潮 - 脑 - 彗 - 枣 - 杆 - 洁 - 画 - 片 - 认 - 灰 - 鞋 - 宠 - 劫 - 潘 - 烤 - 破 - 隶 - 搞 - 忠 - 仕 - 郴 - 梧 - 酌 - 涵 - 醍 - 候 - 俩 - 馈 - 磨 - 骤 - 翔 - 莘 - 希 - 娅 - 剑 - 权 - 壹 - 冕 - 蛟 - 拨 - 诶 - 盖 - 楠 - 只 - 编 - 虾 - 尽 - 尧 - 晚 - 珍 - 因 - 捆 - 绑 - 端 - 盱 - 眙 - 贩 - 卷 - 养 - 陂 - 晟 - 巧 - 椿 - 毕 - 沭 - 供 - 秒 - 眠 - 状 - 璟 - 受 - 伤 - 萍 - 奔 - 效 - 禽 - 玫 - 瑰 - request_剩余距离_none - 序 - 鹃 - 齿 - 厕 - 厨 - 忻 - 埔 - 茅 - 芳 - 雕 - 刻 - 蜜 - 筝 - g - 橄 - 畜 - 牧 - 仑 - 臣 - 溆 - 纱 - 卉 - 群 - 痛 - 疼 - 仟 - 赶 - 紧 - 闫 - 嘶 - 潼 - 烽 - 勾 - 驰 - 麻 - 烦 - 遍 - 樟 - 浜 - 极 - 酷 - 晶 - 穿 - 芽 - 害 - 钓 - 棍 - 核 - 橙 - 琴 - 滋 - 柯 - 箐 - 株 - 陌 - 坤 - 炳 - 槐 - 协 - 湄 - 滏 - 旦 - 策 - 虞 - 陈 - 情 - 潞 - 藁 - 豹 - 若 - 垃 - 圾 - 舰 - 造 - 珥 - 董 - 泼 - 乾 - 瑶 - 龚 - 撤 - 钛 - 责 - 吶 - 喜 - 隔 - 碗 - 倒 - 椰 - 冬 - 伯 - 乳 - 隐 - 尼 - 境 - 圩 - 卧 - 抱 - 使 - 玩 - 饮 - 峤 - 炉 - 终 - 霸 - 晴 - 糕 - 疫 - 弥 - 萧 - 围 - 邬 - 贞 - 逊 - 祠 - 泛 - 逯 - 侯 - 距 - 织 - 谋 - 嵋 - 楚 - 瑜 - 妹 - 误 - 念 - 镜 - 粮 - 涮 - 值 - 鹿 - 捞 - 沅 - 移 - 涉 - 模 - 饿 - 佩 - 汀 - 朐 - 魔 - 细 - 者 - 暖 - 汕 - 谛 - 棣 - 敖 - 此 - 背 - 鲅 - 圈 - 逻 - 绕 - 锋 - 班 - 珲 - 汾 - 著 - 参 - 且 - 摇 - 宕 - 缅 - 柔 - 脂 - 肪 - 变 - 谱 - 积 - 礼 - 凡 - 落 - 羽 - 歇 - 仰 - 聋 - 雷 - 磊 - 繁 - 吭 - 皇 - 晖 - 粤 - 腊 - 习 - 题 - 绅 - 畔 - 啤 - 弋 - 匹 - 订 - 单 - ok - 灶 - 描 - 婺 - 沿 - 莉 - 弘 - 茵 - 换 - 屏 - 瞎 - 较 - 岁 - 湫 - 塞 - 疏 - 勒 - 涟 - 巫 - 违 - 戈 - 吾 - 脏 - 葛 - 轮 - 胎 - 霞 - 鹭 - 废 - 稍 - 谨 - 慎 - 淡 - 注 - 每 - 既 - 删 - 喝 - 付 - 诸 - 暨 - 戴 - 綦 - 伍 - 诚 - 坦 - 兜 - 残 - 韵 
- 喽 - 廖 - 麒 - 麟 - n - 感 - 籍 - 难 - 死 - 笑 - 哭 - 孩 - 频 - 舍 - 溶 - 垸 - 淀 - 奸 - 改 - 藤 - 狭 - 隧 - 翁 - 陀 - 扎 - 肯 - 揭 - 壁 - 件 - 刷 - 牙 - 节 - 恋 - 淹 - 桦 - 幢 - 棉 - 俺 - 屎 - 彬 - 牟 - 亩 - 傣 - 裴 - 翼 - 辰 - 剪 - 挡 - 凹 - 投 - 碣 - 妆 - 荡 - 驻 - 颍 - 狐 - 享 - 恐 - 汶 - 寅 - 仍 - 睿 - 搁 - 尊 - 泊 - 仲 - 午 - 枞 - 仓 - 卞 - 瀚 - 佰 - 暮 - 拐 - 崔 - 榭 - 棵 - 孕 - 潜 - 俏 - 葡 - 萄 - 采 - 摘 - 癜 - 屑 - 芙 - 蓉 - 咏 - 忙 - 漂 - 父 - 母 - 差 - 彻 - 魏 - 绥 - 闲 - 遥 - 棕 - 榈 - 壶 - 疆 - 苍 - 磁 - 辅 - 泸 - 淅 - a - 呐 - 燃 - 沱 - 禺 - 宛 - 友 - 俊 - 筑 - 贾 - 宋 - 梯 - 吨 - inform_poi修饰_none - 础 - 碑 - request_剩余路程_none - 创 - 孙 - 枢 - 翟 - 浑 - 糖 - 舜 - 橱 - 柜 - 浠 - 莒 - 乔 - 幕 - 磅 - 嘿 - 曼 - 昔 - 衣 - 铭 - 浏 - 喆 - 垦 - 墓 - 戍 - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: wav2vec2_xlsr download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 15 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 4 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 required: - output_dir - token_list version: 0.10.3a3 distributed: false ``` </details> ## LM config <details><summary>expand</summary> ``` NONE ``` </details>
patrickvonplaten/sat-base
patrickvonplaten
2021-10-22T17:51:13Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "unispeech-sat", "automatic-speech-recognition", "timit_asr", "generated_from_trainer", "dataset:timit_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: sat-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sat-base This model is a fine-tuned version of [microsoft/unispeech-sat-base](https://huggingface.co/microsoft/unispeech-sat-base) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.7014 - Wer: 0.5374 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.9958 | 0.69 | 100 | 6.7171 | 1.0 | | 3.0453 | 1.38 | 200 | 3.0374 | 1.0 | | 2.9989 | 2.07 | 300 | 2.9807 | 1.0 | | 2.969 | 2.76 | 400 | 2.9579 | 1.0 | | 2.903 | 3.45 | 500 | 2.9072 | 1.0 | | 2.8565 | 4.14 | 600 | 2.8804 | 1.0 | | 2.8195 | 4.83 | 700 | 2.7916 | 1.0 | | 2.3134 | 5.52 | 800 | 2.1456 | 1.0004 | | 1.5475 | 6.21 | 900 | 1.4663 | 0.9549 | | 1.1295 | 6.9 | 1000 | 1.1140 | 0.7227 | | 1.0181 | 7.59 | 1100 | 0.9258 | 0.6497 | | 1.0252 | 8.28 | 1200 | 0.8430 | 0.6255 | | 0.835 | 8.97 | 1300 | 0.8063 | 0.6032 | | 0.662 | 9.66 | 1400 | 0.7595 | 0.5931 | | 0.5558 | 10.34 | 1500 | 0.7322 | 0.5819 | | 0.7596 | 11.03 | 1600 | 0.7120 | 0.5708 | | 0.6169 | 11.72 | 1700 | 0.7073 | 0.5606 | | 0.4565 | 12.41 | 1800 | 0.7124 | 0.5586 | | 0.4554 | 13.1 | 1900 | 0.6880 | 0.5501 | | 0.6216 | 13.79 | 2000 | 0.6783 | 0.5494 | | 0.5393 | 14.48 | 2100 | 0.7067 | 0.5499 | | 0.4095 | 15.17 | 2200 | 0.7014 | 0.5438 | | 0.3551 | 15.86 | 2300 | 0.7000 | 0.5426 | | 0.5112 | 16.55 | 2400 | 0.6866 | 0.5426 | | 0.5139 | 17.24 | 2500 | 0.7134 | 0.5446 | | 0.3638 | 17.93 | 2600 | 0.7130 | 0.5434 | | 0.3327 | 18.62 | 2700 | 0.6980 | 0.5377 | | 0.4385 | 19.31 | 2800 | 0.7017 | 0.5390 | | 0.4986 | 20.0 | 2900 | 0.7014 | 0.5374 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
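## Example usage (sketch)

The card above does not include an inference snippet. Below is a minimal sketch using the 🤗 Transformers automatic-speech-recognition pipeline; it assumes a recent `transformers` release with UniSpeech-SAT support, a local 16 kHz mono WAV file (the filename is hypothetical), and `ffmpeg` available for audio decoding.

```python
from transformers import pipeline

# Load the fine-tuned CTC checkpoint through the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/sat-base")

# "sample.wav" is a placeholder for any 16 kHz mono recording.
print(asr("sample.wav")["text"])
```

Since this checkpoint was fine-tuned on TIMIT only, transcription quality on other domains is not guaranteed.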
patrickvonplaten/wav2vec2-random
patrickvonplaten
2021-10-22T17:20:59Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "timit_asr", "generated_from_trainer", "dataset:timit_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: wav2vec2-random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-random This model is a fine-tuned version of [patrickvonplaten/wav2vec2-base-random](https://huggingface.co/patrickvonplaten/wav2vec2-base-random) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 3.1593 - Wer: 0.8364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9043 | 0.69 | 100 | 2.9683 | 1.0 | | 2.8537 | 1.38 | 200 | 2.9281 | 0.9997 | | 2.7803 | 2.07 | 300 | 2.7330 | 0.9999 | | 2.6806 | 2.76 | 400 | 2.5792 | 1.0 | | 2.4136 | 3.45 | 500 | 2.4327 | 0.9948 | | 2.1682 | 4.14 | 600 | 2.3508 | 0.9877 | | 2.2577 | 4.83 | 700 | 2.2176 | 0.9773 | | 2.355 | 5.52 | 800 | 2.1753 | 0.9542 | | 1.8588 | 6.21 | 900 | 2.0650 | 0.8851 | | 1.6831 | 6.9 | 1000 | 2.0109 | 0.8618 | | 1.888 | 7.59 | 1100 | 1.9660 | 0.8418 | | 2.0066 | 8.28 | 1200 | 1.9847 | 0.8531 | | 1.7044 | 8.97 | 1300 | 1.9760 | 0.8527 | | 1.3168 | 9.66 | 1400 | 2.0708 | 0.8327 | | 1.2143 | 10.34 | 1500 | 2.0601 | 0.8419 | | 1.6189 | 11.03 | 1600 | 2.0960 | 0.8299 | | 1.13 | 11.72 | 1700 | 2.2540 | 0.8408 | | 0.8001 | 12.41 | 1800 | 2.4260 | 0.8306 | | 0.7769 | 13.1 | 1900 | 2.4182 | 0.8445 | | 1.2165 | 13.79 | 2000 | 2.3666 | 0.8284 | | 0.8026 | 14.48 | 2100 | 2.7118 | 0.8662 | | 0.5148 | 15.17 | 2200 | 2.7957 | 0.8526 | | 0.4921 | 15.86 | 2300 | 2.8244 | 0.8346 | | 0.7629 | 16.55 | 2400 | 2.8944 | 0.8370 | | 0.5762 | 17.24 | 2500 | 3.0335 | 0.8367 | | 0.4076 | 17.93 | 2600 | 3.0776 | 0.8358 | | 0.3395 | 18.62 | 2700 | 3.1572 | 0.8261 | | 0.4862 | 19.31 | 2800 | 3.1319 | 0.8414 | | 0.5061 | 20.0 | 2900 | 3.1593 | 0.8364 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
tiennvcs/bert-base-uncased-finetuned-docvqa
tiennvcs
2021-10-22T15:49:05Z
16
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-docvqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-docvqa This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 250500 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.2151 | 0.1 | 1000 | 2.6299 | | 1.8885 | 0.21 | 2000 | 2.2217 | | 1.7353 | 0.31 | 3000 | 2.1675 | | 1.6188 | 0.41 | 4000 | 2.2436 | | 1.5802 | 0.52 | 5000 | 2.0539 | | 1.4875 | 0.62 | 6000 | 2.0551 | | 1.4675 | 0.73 | 7000 | 1.9368 | | 1.3485 | 0.83 | 8000 | 1.9456 | | 1.3273 | 0.93 | 9000 | 1.9281 | | 1.1048 | 1.04 | 10000 | 1.9333 | | 0.9529 | 1.14 | 11000 | 2.2019 | | 0.9418 | 1.24 | 12000 | 2.0381 | | 0.9209 | 1.35 | 13000 | 1.8753 | | 0.8788 | 1.45 | 14000 | 1.9964 | | 0.8729 | 1.56 | 15000 | 1.9690 | | 0.8671 | 1.66 | 16000 | 1.8513 | | 0.8379 | 1.76 | 17000 | 1.9627 | | 0.8722 | 1.87 | 18000 | 1.8988 | | 0.7842 | 1.97 | 19000 | 1.9146 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
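## Example usage (sketch)

Since the card lists no inference example, here is a minimal, hedged sketch with the question-answering pipeline. The model is a plain extractive QA head on `bert-base-uncased`, so the "document" has to be supplied as OCR'd text; the question and context below are invented for illustration.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="tiennvcs/bert-base-uncased-finetuned-docvqa")

# Hypothetical OCR output of a document image.
context = "Invoice no: 18756. Date: 21 October 2021. Total amount due: 1,250 USD."
result = qa(question="What is the total amount due?", context=context)
print(result["answer"], result["score"])
```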
muhtasham/autonlp-Doctor_DE-24595547
muhtasham
2021-10-22T14:04:29Z
5
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "autonlp", "de", "dataset:muhtasham/autonlp-data-Doctor_DE", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: de widget: - text: "I love AutoNLP 🤗" datasets: - muhtasham/autonlp-data-Doctor_DE co2_eq_emissions: 396.5529429198159 --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 24595547 - CO2 Emissions (in grams): 396.5529429198159 ## Validation Metrics - Loss: 1.9565489292144775 - MSE: 1.9565489292144775 - MAE: 0.9890901446342468 - R2: -7.68965036332947e-05 - RMSE: 1.3987668752670288 - Explained Variance: 0.0 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595547 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595547", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595547", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
yokonav/xlm-roberta-base-finetuned-marc-en
yokonav
2021-10-22T13:36:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9177 - Mae: 0.4756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.136 | 1.0 | 235 | 0.9515 | 0.4756 | | 0.9724 | 2.0 | 470 | 0.9177 | 0.4756 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.14.0 - Tokenizers 0.10.3
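## Example usage (sketch)

A minimal inference sketch with the text-classification pipeline; the review text is invented, and the mapping of the returned `LABEL_*` ids to star ratings is an assumption based on the `amazon_reviews_multi` training data.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yokonav/xlm-roberta-base-finetuned-marc-en")

# Hypothetical English product review; the predicted label is assumed to encode a star rating.
print(classifier("I absolutely love this keyboard, it works perfectly and feels great."))
```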
muhtasham/autonlp-Doctor_DE-24595546
muhtasham
2021-10-22T12:23:10Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "de", "dataset:muhtasham/autonlp-data-Doctor_DE", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: de widget: - text: "I love AutoNLP 🤗" datasets: - muhtasham/autonlp-data-Doctor_DE co2_eq_emissions: 210.5957437893554 --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 24595546 - CO2 Emissions (in grams): 210.5957437893554 ## Validation Metrics - Loss: 0.3092539310455322 - MSE: 0.30925390124320984 - MAE: 0.25015318393707275 - R2: 0.841926941198094 - RMSE: 0.5561060309410095 - Explained Variance: 0.8427215218544006 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595546 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
meghana/hitalm-xlmroberta-finetuned
meghana
2021-10-22T11:51:18Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: hitalm-xlmroberta-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hitalm-xlmroberta-finetuned This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.7745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 48 | 5.4501 | | No log | 2.0 | 96 | 5.2843 | | No log | 3.0 | 144 | 4.7745 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
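## Example usage (sketch)

The card gives no usage snippet; a minimal fill-mask sketch follows. The input sentence is arbitrary, and since the base model is `xlm-roberta-large` the mask token is read from the pipeline's tokenizer rather than hard-coded.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="meghana/hitalm-xlmroberta-finetuned")

# Arbitrary example sentence; replace with text matching the fine-tuning domain.
sentence = f"Transformers is a {fill.tokenizer.mask_token} library for natural language processing."
for prediction in fill(sentence):
    print(prediction["token_str"], round(prediction["score"], 4))
```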
muhtasham/autonlp-Doctor_DE-24595544
muhtasham
2021-10-22T10:51:44Z
6
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "de", "dataset:muhtasham/autonlp-data-Doctor_DE", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: de widget: - text: "I love AutoNLP 🤗" datasets: - muhtasham/autonlp-data-Doctor_DE co2_eq_emissions: 92.87363201770962 --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 24595544 - CO2 Emissions (in grams): 92.87363201770962 ## Validation Metrics - Loss: 0.3001164197921753 - MSE: 0.3001164197921753 - MAE: 0.24272102117538452 - R2: 0.8465975006681247 - RMSE: 0.5478288531303406 - Explained Variance: 0.8468209505081177 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595544 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595544", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595544", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
model-attribution-challenge/german-gpt2
model-attribution-challenge
2021-10-22T08:58:57Z
7
0
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-09T20:17:28Z
--- language: de widget: - text: "Heute ist sehr schönes Wetter in" license: mit --- # German GPT-2 model In this repository we release (yet another) GPT-2 model, that was trained on various texts for German. The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉 **Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it. More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation. # Changelog 16.08.2021: Public release of re-trained version of our German GPT-2 model with better results. 15.11.2020: Initial release. Please use the tag `v1.0` for [this older version](https://huggingface.co/dbmdz/german-gpt2/tree/v1.0). # Training corpora We use pretty much the same corpora as used for training the DBMDZ BERT model, that can be found in [this repository](https://github.com/dbmdz/berts). Thanks to the awesome Hugging Face team, it is possible to create byte-level BPE with their awesome [Tokenizers](https://github.com/huggingface/tokenizers) library. With the previously mentioned awesome Tokenizers library we created a 50K byte-level BPE vocab based on the training corpora. After creating the vocab, we could train the GPT-2 for German on a v3-8 TPU over the complete training corpus for 20 epochs. All hyperparameters can be found in the official JAX/FLAX documentation [here](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/README.md) from Transformers. # Using the model The model itself can be used in this way: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbmdz/german-gpt2") model = AutoModelWithLMHead.from_pretrained("dbmdz/german-gpt2") ``` However, text generation is a bit more interesting, so here's an example that shows how to use the great Transformers *Pipelines* for generating text: ```python from transformers import pipeline pipe = pipeline('text-generation', model="dbmdz/german-gpt2", tokenizer="dbmdz/german-gpt2") text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"] print(text) ``` This could output this beautiful text: ``` Der Sinn des Lebens ist es, im Geist zu verweilen, aber nicht in der Welt zu sein, sondern ganz im Geist zu leben. Die Menschen beginnen, sich nicht nach der Natur und nach der Welt zu richten, sondern nach der Seele,' ``` # License All models are licensed under [MIT](LICENSE). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/stefan-it/german-gpt/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
teacookies/autonlp-roberta-base-squad2-24465516
teacookies
2021-10-22T08:21:22Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-roberta-base-squad2 co2_eq_emissions: 65.5797497320557 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465516 - CO2 Emissions (in grams): 65.5797497320557 ## Validation Metrics - Loss: 0.6545609831809998 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465516 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465516", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465516", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-roberta-base-squad2-24465524
teacookies
2021-10-22T08:14:00Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-roberta-base-squad2 co2_eq_emissions: 58.51753681929935 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465524 - CO2 Emissions (in grams): 58.51753681929935 ## Validation Metrics - Loss: 0.5759999752044678 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465524 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465524", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465524", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-roberta-base-squad2-24465520
teacookies
2021-10-22T08:13:49Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-roberta-base-squad2 co2_eq_emissions: 57.56554511511173 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465520 - CO2 Emissions (in grams): 57.56554511511173 ## Validation Metrics - Loss: 0.6455457806587219 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465520 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465520", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465520", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-roberta-base-squad2-24465517
teacookies
2021-10-22T08:13:41Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-roberta-base-squad2 co2_eq_emissions: 54.75747617143382 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465517 - CO2 Emissions (in grams): 54.75747617143382 ## Validation Metrics - Loss: 0.6653227806091309 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465517 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465517", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465517", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-roberta-base-squad2-24465519
teacookies
2021-10-22T08:13:26Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-roberta-base-squad2 co2_eq_emissions: 58.19097299648645 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465519 - CO2 Emissions (in grams): 58.19097299648645 ## Validation Metrics - Loss: 0.566668689250946 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465519 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465519", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465519", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-roberta-base-squad2-24465523
teacookies
2021-10-22T08:13:18Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-roberta-base-squad2 co2_eq_emissions: 56.99866929988893 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465523 - CO2 Emissions (in grams): 56.99866929988893 ## Validation Metrics - Loss: 0.5468788146972656 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465523 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465523", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465523", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-roberta-base-squad2-24465515
teacookies
2021-10-22T08:11:45Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-roberta-base-squad2 co2_eq_emissions: 56.45146749922553 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465515 - CO2 Emissions (in grams): 56.45146749922553 ## Validation Metrics - Loss: 0.5932255387306213 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465515 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465515", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465515", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-roberta-base-squad2-24465518
teacookies
2021-10-22T08:04:33Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-roberta-base-squad2 co2_eq_emissions: 45.268576304018616 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465518 - CO2 Emissions (in grams): 45.268576304018616 ## Validation Metrics - Loss: 0.5742421746253967 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465518 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465518", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465518", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
aditeyabaral/sentencetransformer-distilbert-base-cased
aditeyabaral
2021-10-21T22:30:29Z
129
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-distilbert-base-cased This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-base-cased') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-base-cased) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 9234 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
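## Training sketch

The training section above only lists hyperparameters. Below is a minimal sketch of how a comparable model could be fitted with `sentence-transformers`; the sentence pairs and similarity labels are invented (the actual training data is not described in this card), while the architecture, loss, and fit parameters mirror the values listed above.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

# Rebuild the architecture listed above: DistilBERT encoder + mean pooling.
word_embedding_model = models.Transformer("distilbert-base-cased", max_seq_length=512)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Hypothetical (sentence pair, similarity) training examples.
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "A woman is playing the violin."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Fit parameters taken from the card: 10 epochs, 100 warmup steps, lr 2e-5, weight decay 0.01.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```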
pritoms/distilgpt2-finetuned-wikitext2
pritoms
2021-10-21T21:16:24Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0540 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 130 | 3.1733 | | No log | 2.0 | 260 | 3.0756 | | No log | 3.0 | 390 | 3.0540 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
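## Example usage (sketch)

A minimal text-generation sketch, since the card has no inference example. The prompt is arbitrary and sampling is left at the pipeline defaults, so outputs will vary between runs.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/distilgpt2-finetuned-wikitext2")

# Arbitrary prompt; WikiText-2 is encyclopedic text, so article-style prompts fit best.
output = generator("The history of natural language processing", max_length=50, num_return_sequences=1)
print(output[0]["generated_text"])
```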
JonatanGk/roberta-base-bne-finetuned-sqac
JonatanGk
2021-10-21T21:06:47Z
6
1
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:sqac", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sqac model-index: - name: roberta-base-bne-finetuned-sqac results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-sqac This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the sqac dataset. It achieves the following results on the evaluation set: - Loss: 1.2066 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9924 | 1.0 | 1196 | 0.8670 | | 0.474 | 2.0 | 2392 | 0.8923 | | 0.1637 | 3.0 | 3588 | 1.2066 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
lewtun/xlm-roberta-base-finetuned-marc-en-dummy
lewtun
2021-10-21T20:03:13Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en-dummy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en-dummy This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8931 - Mae: 0.4634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1258 | 1.0 | 235 | 0.9538 | 0.4390 | | 0.9445 | 2.0 | 470 | 0.8931 | 0.4634 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
lewtun/xlm-roberta-base-finetuned-marc-en
lewtun
2021-10-21T18:53:52Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8850 - Mae: 0.4390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1589 | 1.0 | 235 | 0.9769 | 0.5122 | | 0.974 | 2.0 | 470 | 0.8850 | 0.4390 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
abhishek/autonlp-hindi-question-answering-23865268
abhishek
2021-10-21T13:51:44Z
14
5
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "hi", "dataset:abhishek/autonlp-data-hindi-question-answering", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: hi widget: - text: "´सतीश धवन अंतरिक्ष केंद्र´ किस राज्य में स्थित है?" context: "सतीश धवन अंतरिक्ष केंद्र, भारतीय अंतरिक्ष अनुसंधान संगठन (इसरो) का प्रक्षेपण केंद्र है। यह आंध्र प्रदेश के श्रीहरीकोटा में स्थित है, इसे 'श्रीहरीकोटा रेंज' या 'श्रीहरीकोटा लाँचिंग रेंज' के नाम से भी जाना जाता है। 2002 में इसरो के पूर्व प्रबंधक और वैज्ञानिक सतीश धवन के मरणोपरांत उनके सम्मान में इसका नाम बदला गया। प्रक्षेपण यान की असेम्\u200dबली के लिए दूसरा भवन केन्\u200dद्रीय मंत्रिमंडल ने 12 सितम्\u200dबर, 2013 को सतीश धवन अंतरिक्ष केन्\u200dद्र, श्रीहरिकोटा में प्रक्षेपण यान की असेम्\u200dबली के लिए दूसरे भवन के निर्माण की मंजूरी दी। इस पर 363.95 करोड़ रुपये की अनुमानित लागत आएगी, जिसमें सात करोड़ रुपये का खर्च विदेशी मुद्रा में होगा। इस दूसरी बिल्डिंग के उपलब्\u200dध हो जाने से पीएसएलवी और जीएसएलवी की प्रक्षेपण फ्रीक्वेंसी बढ़ेगी। यह जीएसएलवी एमके-III के एकीकरण के लिए वर्तमान व्\u200dहीकल असेम्\u200dबली बिल्डिंग को अतिरिक्\u200dत सुविधा मुहैया करायेगी। तीसरे प्रक्षेपण पैड तथा भविष्\u200dय में सामान्\u200dय यान प्रक्षेपण के लिए भी इससे काफी सुविधा मिलेगी।[1]\nलांच पैड\nउपग्रह प्रक्षेपण यान लॉन्च पैड\nइस लांच पैड से उपग्रह प्रक्षेपण यान और संवर्धित उपग्रह प्रक्षेपण यान को लांच किया गया था। यह वर्तमान प्रक्षेपण स्थल के दक्षिणी सिरे पर स्थित है। इसे सेवामुक्त कर दिया गया है। शुरू में इसे उपग्रह प्रक्षेपण यान लांच करने के लिए बनाया गया था। लेकिन बाद में इसे संवर्धित उपग्रह प्रक्षेपण यान प्रक्षेपण परिसर के रूप में इस्तेमाल किया गया था।\nप्रथम लांच पैड\nद्वितीय लॉन्च पैड\nतृतीय लांच पैड\nसन्दर्भ श्रेणी:भारतीय अंतरिक्ष अनुसंधान संगठन\nश्रेणी:भारत के रॉकेट प्रक्षेपण स्थल" datasets: - abhishek/autonlp-data-hindi-question-answering co2_eq_emissions: 39.76330395590446 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - CO2 Emissions (in grams): 39.76330395590446 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-hindi-question-answering-23865268 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
tiennvcs/distilbert-base-uncased-finetuned-infovqa
tiennvcs
2021-10-21T11:37:56Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-infovqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-infovqa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 250500 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.02 | 100 | 4.7706 | | No log | 0.05 | 200 | 4.4399 | | No log | 0.07 | 300 | 3.8175 | | No log | 0.09 | 400 | 3.8306 | | 3.3071 | 0.12 | 500 | 3.6480 | | 3.3071 | 0.14 | 600 | 3.6451 | | 3.3071 | 0.16 | 700 | 3.4974 | | 3.3071 | 0.19 | 800 | 3.4686 | | 3.3071 | 0.21 | 900 | 3.4703 | | 3.5336 | 0.23 | 1000 | 3.3165 | | 3.5336 | 0.25 | 1100 | 3.3634 | | 3.5336 | 0.28 | 1200 | 3.3466 | | 3.5336 | 0.3 | 1300 | 3.3411 | | 3.5336 | 0.32 | 1400 | 3.2456 | | 3.3593 | 0.35 | 1500 | 3.3257 | | 3.3593 | 0.37 | 1600 | 3.2941 | | 3.3593 | 0.39 | 1700 | 3.2581 | | 3.3593 | 0.42 | 1800 | 3.1680 | | 3.3593 | 0.44 | 1900 | 3.2077 | | 3.2436 | 0.46 | 2000 | 3.2422 | | 3.2436 | 0.49 | 2100 | 3.2529 | | 3.2436 | 0.51 | 2200 | 3.2681 | | 3.2436 | 0.53 | 2300 | 3.1055 | | 3.2436 | 0.56 | 2400 | 3.0174 | | 3.093 | 0.58 | 2500 | 3.0608 | | 3.093 | 0.6 | 2600 | 3.0200 | | 3.093 | 0.63 | 2700 | 2.9884 | | 3.093 | 0.65 | 2800 | 3.0041 | | 3.093 | 0.67 | 2900 | 2.9700 | | 3.0087 | 0.69 | 3000 | 3.0993 | | 3.0087 | 0.72 | 3100 | 3.0499 | | 3.0087 | 0.74 | 3200 | 2.9317 | | 3.0087 | 0.76 | 3300 | 3.0817 | | 3.0087 | 0.79 | 3400 | 3.0035 | | 2.9694 | 0.81 | 3500 | 3.0850 | | 2.9694 | 0.83 | 3600 | 2.9948 | | 2.9694 | 0.86 | 3700 | 2.9874 | | 2.9694 | 0.88 | 3800 | 2.9202 | | 2.9694 | 0.9 | 3900 | 2.9322 | | 2.8277 | 0.93 | 4000 | 2.9195 | | 2.8277 | 0.95 | 4100 | 2.8638 | | 2.8277 | 0.97 | 4200 | 2.8809 | | 2.8277 | 1.0 | 4300 | 2.8872 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
anton-l/wav2vec2-base-finetuned-ks
anton-l
2021-10-21T11:04:30Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - superb metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-ks results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0952 - Accuracy: 0.9823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7908 | 1.0 | 399 | 0.6776 | 0.9009 | | 0.3202 | 2.0 | 798 | 0.2061 | 0.9763 | | 0.221 | 3.0 | 1197 | 0.1257 | 0.9785 | | 0.1773 | 4.0 | 1596 | 0.0990 | 0.9813 | | 0.1729 | 5.0 | 1995 | 0.0952 | 0.9823 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
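## Example usage (sketch)

Since the card lists no inference snippet, here is a minimal audio-classification sketch. It assumes the checkpoint was trained on the keyword-spotting subset of SUPERB (as the `superb` dataset tag suggests), a 16 kHz mono WAV file (the filename is hypothetical), and `ffmpeg` for decoding.

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="anton-l/wav2vec2-base-finetuned-ks")

# "keyword.wav" is a placeholder for a short 16 kHz recording of a spoken command.
for prediction in classifier("keyword.wav", top_k=3):
    print(prediction["label"], round(prediction["score"], 4))
```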
BSC-LT/roberta-large-bne
BSC-LT
2021-10-21T10:32:31Z
37
7
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "national library of spain", "spanish", "bne", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - es license: apache-2.0 tags: - "national library of spain" - "spanish" - "bne" datasets: - "bne" metrics: - "ppl" widget: - text: "Este año las campanadas de La Sexta las <mask> Pedroche y Chicote." - text: "El artista Antonio Orozco es un colaborador de La <mask>." - text: "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje." - text: "Hay base legal dentro del marco <mask> actual." --- **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne # RoBERTa large trained with data from the National Library of Spain (BNE) ## Model Description RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Training corpora and preprocessing The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus was preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of badly formed sentences and deduplication of repetitive content. During the process, document boundaries were kept. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus was then applied, resulting in 570GB of text. Some statistics of the corpus: | Corpora | Number of documents | Number of tokens | Size (GB) | |---------|---------------------|------------------|-----------| | BNE | 201,080,084 | 135,733,450,668 | 570GB | ## Tokenization and pre-training The training corpus has been tokenized using the byte-level Byte-Pair Encoding (BPE) used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens. The RoBERTa-large-bne pre-training consists of a masked language model training that follows the approach employed for RoBERTa large. The training lasted a total of 96 hours on 32 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM. ## Evaluation and results For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish). ## Citing Check out our paper for all the details: https://arxiv.org/abs/2107.07253 ``` @misc{gutierrezfandino2021spanish, title={Spanish Language Models}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas}, year={2021}, eprint={2107.07253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
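## Tokenizer inspection (sketch)

The tokenization section above describes a byte-level BPE vocabulary of 50,262 tokens; a small sketch for inspecting it is shown below. It assumes the tokenizer files are still available under this model id (the notice above says the model is moving to `PlanTL-GOB-ES/roberta-large-bne`, so that id may need to be used instead), and the example sentence is adapted from the widget texts.

```python
from transformers import AutoTokenizer

# Swap in "PlanTL-GOB-ES/roberta-large-bne" if this id has been removed.
tokenizer = AutoTokenizer.from_pretrained("BSC-LT/roberta-large-bne")

print(tokenizer.vocab_size)  # expected to be about 50,262 per the card
print(tokenizer.tokenize("Gracias a los datos de la BNE se ha podido entrenar este modelo del lenguaje."))
```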
BSC-LT/roberta-large-bne-sqac
BSC-LT
2021-10-21T10:32:05Z
28
3
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "national library of spain", "spanish", "bne", "qa", "question answering", "es", "dataset:BSC-TeMU/SQAC", "arxiv:1907.11692", "arxiv:2107.07253", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- language: - es license: apache-2.0 tags: - "national library of spain" - "spanish" - "bne" - "qa" - "question answering" datasets: - "BSC-TeMU/SQAC" metrics: - "f1" --- **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-sqac # Spanish RoBERTa-large trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset. RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne ## Dataset The dataset used is the [SQAC corpus](https://huggingface.co/datasets/BSC-TeMU/SQAC). ## Evaluation and results F1 Score: 0.7993 (average of 5 runs). For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish). ## Citing Check out our paper for all the details: https://arxiv.org/abs/2107.07253 ``` @misc{gutierrezfandino2021spanish, title={Spanish Language Models}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas}, year={2021}, eprint={2107.07253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
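## How to use

A minimal extractive question-answering sketch with the standard `transformers` pipeline follows; the question/context pair is only an illustration, and the PlanTL-GOB-ES id from the notice should be used if this repository has been removed.

```python
from transformers import pipeline

# Extractive question answering in Spanish with the SQAC fine-tuned model.
qa = pipeline("question-answering", model="BSC-LT/roberta-large-bne-sqac")

result = qa(
    question="¿Dónde vivo?",
    context="Me llamo Asier y vivo en Barcelona.",
)
print(result["answer"], result["score"])
```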
BSC-LT/roberta-large-bne-capitel-ner
BSC-LT
2021-10-21T10:31:30Z
13
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - es license: apache-2.0 tags: - "national library of spain" - "spanish" - "bne" - "capitel" - "ner" datasets: - "bne" - "capitel" metrics: - "f1" --- **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-ner # Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne ## Dataset The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). ## Evaluation and results F1 Score: 0.8998 For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish). ## Citing Check out our paper for all the details: https://arxiv.org/abs/2107.07253 ``` @misc{gutierrezfandino2021spanish, title={Spanish Language Models}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas}, year={2021}, eprint={2107.07253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
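## How to use

A minimal named entity recognition sketch with the token-classification pipeline is shown below; the input sentence is only an illustration.

```python
from transformers import pipeline

# Spanish NER; aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "ner",
    model="BSC-LT/roberta-large-bne-capitel-ner",
    aggregation_strategy="simple",
)

for entity in ner("El presidente del Gobierno visitó Barcelona el pasado martes."):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```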
BSC-LT/roberta-base-bne
BSC-LT
2021-10-21T10:30:31Z
2,054
9
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "national library of spain", "spanish", "bne", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
datasets:
- "bne"
metrics:
- "ppl"
widget:
- text: "Este año las campanadas de La Sexta las presentará <mask>."
- text: "David Broncano es un presentador de La <mask>."
- text: "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
- text: "Hay base legal dentro del marco <mask> actual."
---

**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne

# RoBERTa base trained with data from National Library of Spain (BNE)

## Model Description
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## Training corpora and preprocessing
The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the corpus was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive content. Document boundaries were kept during this process. This resulted in 2TB of clean Spanish text. A further global deduplication across the corpus was then applied, resulting in 570GB of text.

Some statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE | 201,080,084 | 135,733,450,668 | 570GB |

## Tokenization and pre-training
The training corpus has been tokenized using the byte-level version of Byte-Pair Encoding (BPE) used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens. The RoBERTa-base-bne pre-training consists of masked language model training following the approach employed for RoBERTa base. Training lasted a total of 48 hours on 16 compute nodes, each with 4 NVIDIA V100 GPUs of 16GB VRAM.

## Evaluation and results
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).

## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253

```
@misc{gutierrezfandino2021spanish,
      title={Spanish Language Models},
      author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
      year={2021},
      eprint={2107.07253},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
BSC-LT/roberta-base-bne-capitel-pos
BSC-LT
2021-10-21T10:29:55Z
27
3
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "pos", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - es license: apache-2.0 tags: - "national library of spain" - "spanish" - "bne" - "capitel" - "pos" datasets: - "bne" - "capitel" metrics: - "f1" widget: - text: "Festival de San Sebastián: Johnny Depp recibirá el premio Donostia en pleno rifirrafe judicial con Amber Heard" - text: "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto." - text: "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje." - text: "El Tribunal Superior de Justicia se pronunció ayer: \"Hay base legal dentro del marco jurídico actual\"." --- **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-pos # Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne ## Dataset The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2). ## Evaluation and results F1 Score: 0.9846 (average of 5 runs). For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish). ## Citing Check out our paper for all the details: https://arxiv.org/abs/2107.07253 ``` @misc{gutierrezfandino2021spanish, title={Spanish Language Models}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas}, year={2021}, eprint={2107.07253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
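## How to use

A minimal part-of-speech tagging sketch with the token-classification pipeline, reusing one of the widget sentences above:

```python
from transformers import pipeline

# POS tagging as token classification; aggregation_strategy="simple" groups word pieces into words.
pos = pipeline(
    "token-classification",
    model="BSC-LT/roberta-base-bne-capitel-pos",
    aggregation_strategy="simple",
)

for token in pos("El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."):
    print(token["word"], token["entity_group"])
```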
BSC-LT/roberta-base-bne-capitel-ner
BSC-LT
2021-10-21T10:29:35Z
43
1
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - es license: apache-2.0 tags: - "national library of spain" - "spanish" - "bne" - "capitel" - "ner" datasets: - "bne" - "capitel" metrics: - "f1" --- **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner # Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne ## Dataset The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). ## Evaluation and results F1 Score: 0.8960 For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish). ## Citing Check out our paper for all the details: https://arxiv.org/abs/2107.07253 ``` @misc{gutierrezfandino2021spanish, title={Spanish Language Models}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas}, year={2021}, eprint={2107.07253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
BSC-LT/roberta-base-bne-capitel-ner-plus
BSC-LT
2021-10-21T10:29:17Z
8
2
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - es license: apache-2.0 tags: - "national library of spain" - "spanish" - "bne" - "capitel" - "ner" datasets: - "bne" - "capitel" metrics: - "f1" inference: parameters: aggregation_strategy: "first" --- **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus # Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne ## Dataset The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). **IMPORTANT ABOUT THIS MODEL:** We modified the dataset to make this model more robust to general Spanish input. In the Spanish language all the name entities are capitalized, as this dataset has been elaborated by experts, it is totally correct in terms of Spanish language. We randomly took some entities and we lower-cased some of them for the model to learn not only that the named entities are capitalized, but also the structure of a sentence that should contain a named entity. For instance: "My name is [placeholder]", this [placeholder] should be a named entity even though it is not written capitalized. The model trained on the original capitel dataset can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne-capitel-ner Examples: This model: - "Me llamo asier y vivo en barcelona todo el año." → "Me llamo {as:S-PER}{ier:S-PER} y vivo en {barcelona:S-LOC} todo el año." - "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." → "Hoy voy a visitar el {par:B-LOC}{k:I-LOC} {gü:E-LOC}{ell:E-LOC} tras salir del {barcelona:B-ORG} {super:I-ORG}{com:I-ORG}{pu:I-ORG}{ting:I-ORG} {cen:E-ORG}{ter:E-ORG}." Model trained on original data: - "Me llamo asier y vivo en barcelona todo el año." → "Me llamo asier y vivo en barcelona todo el año." (nothing) - "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." → "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." (nothing) ## Evaluation and results F1 Score: 0.8867 For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish). ## Citing Check out our paper for all the details: https://arxiv.org/abs/2107.07253 ``` @misc{gutierrezfandino2021spanish, title={Spanish Language Models}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas}, year={2021}, eprint={2107.07253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
patrickvonplaten/unispeech-sat-base-plus-timit-ft
patrickvonplaten
2021-10-21T10:05:15Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "unispeech-sat", "automatic-speech-recognition", "timit_asr", "generated_from_trainer", "dataset:timit_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: unispeech-sat-base-plus-timit-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unispeech-sat-base-plus-timit-ft This model is a fine-tuned version of [microsoft/unispeech-sat-base-plus](https://huggingface.co/microsoft/unispeech-sat-base-plus) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.6549 - Wer: 0.4051 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3838 | 0.69 | 100 | 3.2528 | 1.0 | | 2.9608 | 1.38 | 200 | 2.9682 | 1.0 | | 2.9574 | 2.07 | 300 | 2.9346 | 1.0 | | 2.8555 | 2.76 | 400 | 2.7612 | 1.0 | | 1.7418 | 3.45 | 500 | 1.5732 | 0.9857 | | 0.9606 | 4.14 | 600 | 1.0014 | 0.7052 | | 0.8334 | 4.83 | 700 | 0.7691 | 0.6161 | | 0.852 | 5.52 | 800 | 0.7169 | 0.5997 | | 0.5707 | 6.21 | 900 | 0.6821 | 0.5527 | | 0.4235 | 6.9 | 1000 | 0.6078 | 0.5140 | | 0.4357 | 7.59 | 1100 | 0.5927 | 0.4982 | | 0.5004 | 8.28 | 1200 | 0.5814 | 0.4826 | | 0.3757 | 8.97 | 1300 | 0.5951 | 0.4643 | | 0.2579 | 9.66 | 1400 | 0.5990 | 0.4581 | | 0.2087 | 10.34 | 1500 | 0.5864 | 0.4488 | | 0.3155 | 11.03 | 1600 | 0.5836 | 0.4464 | | 0.2701 | 11.72 | 1700 | 0.6045 | 0.4348 | | 0.172 | 12.41 | 1800 | 0.6494 | 0.4344 | | 0.1529 | 13.1 | 1900 | 0.5915 | 0.4241 | | 0.2411 | 13.79 | 2000 | 0.6156 | 0.4246 | | 0.2348 | 14.48 | 2100 | 0.6363 | 0.4206 | | 0.1429 | 15.17 | 2200 | 0.6394 | 0.4161 | | 0.1151 | 15.86 | 2300 | 0.6186 | 0.4167 | | 0.1723 | 16.55 | 2400 | 0.6498 | 0.4124 | | 0.1997 | 17.24 | 2500 | 0.6541 | 0.4076 | | 0.1297 | 17.93 | 2600 | 0.6546 | 0.4117 | | 0.101 | 18.62 | 2700 | 0.6471 | 0.4075 | | 0.1272 | 19.31 | 2800 | 0.6586 | 0.4065 | | 0.1901 | 20.0 | 2900 | 0.6549 | 0.4051 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
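## How to use

No usage example is given above; the following is a minimal sketch, assuming the fine-tuned checkpoint ships with its tokenizer/feature extractor and is supported by the `automatic-speech-recognition` pipeline in a recent transformers release.

```python
from transformers import pipeline

# CTC transcription of 16 kHz English speech with the fine-tuned UniSpeechSat model.
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/unispeech-sat-base-plus-timit-ft",
)

# "timit_sample.wav" is an illustrative local file path.
print(asr("timit_sample.wav")["text"])
```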
MINYOUNG/distilbert-base-uncased-finetuned-cola
MINYOUNG
2021-10-21T09:42:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5494735380761103 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8540 - Matthews Correlation: 0.5495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5219 | 1.0 | 535 | 0.5314 | 0.4095 | | 0.346 | 2.0 | 1070 | 0.5141 | 0.5054 | | 0.2294 | 3.0 | 1605 | 0.6351 | 0.5200 | | 0.1646 | 4.0 | 2140 | 0.7575 | 0.5459 | | 0.1235 | 5.0 | 2675 | 0.8540 | 0.5495 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
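## How to use

A minimal inference sketch for the CoLA task (linguistic acceptability) is shown below; note that the labels are the generic `LABEL_0`/`LABEL_1` unless `id2label` was customized before training.

```python
from transformers import pipeline

# Acceptability classification with the CoLA fine-tuned DistilBERT model.
classifier = pipeline(
    "text-classification",
    model="MINYOUNG/distilbert-base-uncased-finetuned-cola",
)

print(classifier("The book was written by the author."))
print(classifier("The book was written the author by."))
```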
pritoms/distilgpt2-finetuned-mit-lecture
pritoms
2021-10-21T08:59:34Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-mit-lecture results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-mit-lecture This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8377 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 144 | 3.8737 | | No log | 2.0 | 288 | 3.8436 | | No log | 3.0 | 432 | 3.8377 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
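## How to use

As a minimal usage sketch (the prompt text is only an illustration), the model can be loaded with the text-generation pipeline:

```python
from transformers import pipeline, set_seed

# Sample a continuation from the lecture-style language model.
set_seed(42)
generator = pipeline("text-generation", model="pritoms/distilgpt2-finetuned-mit-lecture")

print(generator("In today's lecture we will discuss", max_length=60, num_return_sequences=1)[0]["generated_text"])
```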
bochaowei/t5-small-finetuned-xsum-wei2
bochaowei
2021-10-21T07:21:16Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum-wei2 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 29.2287 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-wei2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4131 - Rouge1: 29.2287 - Rouge2: 8.4073 - Rougel: 23.0934 - Rougelsum: 23.0954 - Gen Len: 18.8236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.633 | 1.0 | 17004 | 2.4131 | 29.2287 | 8.4073 | 23.0934 | 23.0954 | 18.8236 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
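## How to use

A minimal summarization sketch is shown below; the input article is only an illustration. T5 checkpoints are trained with a `summarize: ` task prefix, which the pipeline normally picks up from the underlying t5-small configuration; prepend it manually if your setup does not.

```python
from transformers import pipeline

# Abstractive summarization with the xsum fine-tuned T5 model.
summarizer = pipeline("summarization", model="bochaowei/t5-small-finetuned-xsum-wei2")

article = (
    "The local council has approved plans for a new cycle path along the river, "
    "which campaigners say will cut congestion and improve air quality in the town centre."
)
print(summarizer(article, max_length=40, min_length=5)[0]["summary_text"])
```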
huggingtweets/s66jewelevans
huggingtweets
2021-10-20T23:06:38Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/s66jewelevans/1634771194675/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1313199276852342784/fJ8Lb2C__400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jewel Evans</div> <div style="text-align: center; font-size: 14px;">@s66jewelevans</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jewel Evans. | Data | Jewel Evans | | --- | --- | | Tweets downloaded | 1714 | | Retweets | 2 | | Short tweets | 20 | | Tweets kept | 1692 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ec5yuuj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @s66jewelevans's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kxbfdnt) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kxbfdnt/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/s66jewelevans') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AyushPJ/ai-club-inductions-21-nlp-roBERTa
AyushPJ
2021-10-20T22:33:57Z
11
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer model-index: - name: ai-club-inductions-21-nlp-roBERTa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai-club-inductions-21-nlp-roBERTa This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.11.3 - Pytorch 1.7.1+cpu - Datasets 1.14.0 - Tokenizers 0.10.3
bochaowei/t5-small-finetuned-xsum-wei1
bochaowei
2021-10-20T18:33:31Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-wei1
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: xsum
      type: xsum
      args: default
    metrics:
    - name: Rouge1
      type: rouge
      value: 27.5875
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-finetuned-xsum-wei1

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set:
- Loss: 2.5287
- Rouge1: 27.5875
- Rouge2: 7.4083
- Rougel: 21.5654
- Rougelsum: 21.5716
- Gen Len: 18.8205

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned on 20% of the training data.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7677 | 1.0 | 3401 | 2.5441 | 27.4235 | 7.2208 | 21.3535 | 21.3636 | 18.8311 |
| 2.735 | 2.0 | 6802 | 2.5287 | 27.5875 | 7.4083 | 21.5654 | 21.5716 | 18.8205 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
monologg/koelectra-base-generator
monologg
2021-10-20T16:55:00Z
7
0
transformers
[ "transformers", "pytorch", "electra", "fill-mask", "korean", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ko license: apache-2.0 tags: - korean --- # KoELECTRA (Base Generator) Pretrained ELECTRA Language Model for Korean (`koelectra-base-generator`) For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md). ## Usage ### Load model and tokenizer ```python >>> from transformers import ElectraModel, ElectraTokenizer >>> model = ElectraModel.from_pretrained("monologg/koelectra-base-generator") >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator") ``` ### Tokenizer example ```python >>> from transformers import ElectraTokenizer >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator") >>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]") ['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'] >>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']) [2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3] ``` ## Example using ElectraForMaskedLM ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="monologg/koelectra-base-generator", tokenizer="monologg/koelectra-base-generator" ) print(fill_mask("나는 {} 밥을 먹었다.".format(fill_mask.tokenizer.mask_token))) ```
monologg/koelectra-base-v3-discriminator
monologg
2021-10-20T16:53:40Z
31,234
30
transformers
[ "transformers", "pytorch", "electra", "pretraining", "korean", "ko", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: ko
license: apache-2.0
tags:
- korean
---

# KoELECTRA v3 (Base Discriminator)

Pretrained ELECTRA Language Model for Korean (`koelectra-base-v3-discriminator`)

For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).

## Usage

### Load model and tokenizer

```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
```

### Tokenizer example

```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 11229, 29173, 13352, 25541, 4110, 7824, 17788, 18, 3]
```

## Example using ElectraForPreTraining

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v3-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")

sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

discriminator_outputs = discriminator(fake_inputs)
# Logits > 0 mean the discriminator predicts the token was replaced (1.0 = replaced, 0.0 = original).
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

# Drop the batch dimension and the [CLS]/[SEP] positions so the scores line up with fake_tokens.
print(list(zip(fake_tokens, predictions.squeeze().tolist()[1:-1])))
```
bochaowei/t5-small-finetuned-xsum-wei0
bochaowei
2021-10-20T15:10:46Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum-wei0 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 25.7398 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-wei0 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.6289 - Rouge1: 25.7398 - Rouge2: 6.1361 - Rougel: 19.8262 - Rougelsum: 19.8284 - Gen Len: 18.7984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.858 | 1.0 | 1701 | 2.6289 | 25.7398 | 6.1361 | 19.8262 | 19.8284 | 18.7984 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
YushiUeda/test
YushiUeda
2021-10-20T14:48:21Z
4
0
espnet
[ "espnet", "audio", "diarization", "dataset:mini_librispeech", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - espnet - audio - diarization language: datasets: - mini_librispeech license: cc-by-4.0 --- ## ESPnet2 DIAR model ### `YushiUeda/test` This model was trained by Yushi Ueda using mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 4dfa2be4331d3d68f124aa5fd81f63217a7278a4 pip install -e . cd egs2/mini_librispeech/diar1 ./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/test ``` <!-- Generated by scripts/utils/show_diar_result.sh --> # RESULTS ## Environments - date: `Wed Aug 25 23:29:07 EDT 2021` - python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]` - espnet version: `espnet 0.10.2a1` - pytorch version: `pytorch 1.9.0+cu102` - Git hash: `19bcd34f9395e01e54a97c4db5ecbcedb429dd92` - Commit date: `Tue Aug 24 19:50:44 2021 -0400` ## `diar_train_diar_raw_max_epoch20` ### DER `dev_clean_2_ns2_beta2_500` |threshold_median_collar|DER| |---|---| |result_th0.3_med1_collar0.0|32.42| |result_th0.3_med11_collar0.0|32.03| |result_th0.4_med1_collar0.0|30.96| |result_th0.4_med11_collar0.0|30.26| |result_th0.5_med1_collar0.0|30.35| |result_th0.5_med11_collar0.0|29.37| |result_th0.6_med1_collar0.0|30.77| |result_th0.6_med11_collar0.0|29.52| |result_th0.7_med1_collar0.0|32.60| |result_th0.7_med11_collar0.0|31.03| ## DIAR config <details><summary>expand</summary> ``` config: conf/train_diar.yaml print_config: false log_level: INFO dry_run: false iterator_type: chunk output_dir: exp/diar_train_diar_raw_max_epoch20 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 20 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 3 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 16 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/diar_stats_8k/train/speech_shape - exp/diar_stats_8k/train/spk_labels_shape valid_shape_file: - exp/diar_stats_8k/valid/speech_shape - exp/diar_stats_8k/valid/spk_labels_shape batch_type: folded valid_batch_type: null fold_length: - 80000 - 800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 200000 chunk_shift_ratio: 0.5 num_cache_chunks: 64 train_data_path_and_name_and_type: - - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp - speech - sound - - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm - spk_labels - rttm valid_data_path_and_name_and_type: - - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp - speech - sound - - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm - spk_labels - rttm allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: 
lr: 0.01 scheduler: noamlr scheduler_conf: warmup_steps: 1000 num_spk: 2 init: xavier_uniform input_size: null model_conf: loss_type: pit use_preprocessor: true frontend: default frontend_conf: fs: 8k hop_length: 128 normalize: global_mvn normalize_conf: stats_file: exp/diar_stats_8k/train/feats_stats.npz encoder: transformer encoder_conf: input_layer: linear num_blocks: 2 linear_units: 512 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.0 decoder: linear decoder_conf: {} label_aggregator: label_aggregator label_aggregator_conf: {} required: - output_dir version: 0.10.2a1 distributed: false ``` </details>
Monsia/autonlp-tweets-classification-23044997
Monsia
2021-10-20T14:38:58Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:Monsia/autonlp-data-tweets-classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Monsia/autonlp-data-tweets-classification co2_eq_emissions: 4.819872182577655 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 23044997 - CO2 Emissions (in grams): 4.819872182577655 ## Validation Metrics - Loss: 0.001594889909029007 - Accuracy: 0.9997478885667465 - Macro F1: 0.9991190902836993 - Micro F1: 0.9997478885667465 - Weighted F1: 0.9997476735518704 - Macro Precision: 0.9998014460161265 - Micro Precision: 0.9997478885667465 - Weighted Precision: 0.9997479944069787 - Macro Recall: 0.9984426545713851 - Micro Recall: 0.9997478885667465 - Weighted Recall: 0.9997478885667465 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Monsia/autonlp-tweets-classification-23044997 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
huggingtweets/dril-linaarabii
huggingtweets
2021-10-20T11:36:30Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/dril-linaarabii/1634729786636/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1423543147305619456/9RT-Ji0Z_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & Lina Arabi</div> <div style="text-align: center; font-size: 14px;">@dril-linaarabii</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & Lina Arabi. | Data | wint | Lina Arabi | | --- | --- | --- | | Tweets downloaded | 3227 | 3130 | | Retweets | 473 | 896 | | Short tweets | 317 | 322 | | Tweets kept | 2437 | 1912 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yq3shwo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-linaarabii's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21rpwe17) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21rpwe17/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dril-linaarabii') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
aditeyabaral/sentencetransformer-distilbert-hinglish-small
aditeyabaral
2021-10-20T09:04:04Z
173
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-distilbert-hinglish-small This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-small') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-small') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-small') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-small) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 4617 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
lapcameraatp/cameragiamsat
lapcameraatp
2021-10-20T08:53:25Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://camerasaigon24h.com
https://cameragiamsat360.com
https://lapdatcameracongty.vn
https://lapdatcamerawifi.vn
https://lapcamerawifi.com
https://giacameraquansat.com
https://cameraquansatre.com
https://cameraanninhwifi.com

https://camerawifigiadinh.com/
https://lapcameratanphu.com
http://camerathehemoi.com
http://lapcameratanbinh.com
http://lapcamerabinhtan.com
http://lapcameraquan2giare.com
http://cameraquan12.com
http://cameraquan3giare.com
http://lapdatcameraquan4.com
http://lapdatcameraquan10.com
http://lapdatcameraquan7.com
http://camerabinhthanh.com
http://lapcameraquan9giare.com
http://lapdatcameraquan11.com
http://lapcameragiarethuduc.com
http://lapdatcameraquan6.com
http://lapdatcameraquan5.com
http://lapcameraquan1.com
http://cameraquan8.com
http://cameranhatranggiare.com
http://lapcamerahocmon.com
http://lapcameragiaregovap.com
http://lapcameraphunhuan.com
http://cameragiarebinhduong.com
http://phanphoicameragiare.com
http://camerawifigiadinh.com/
http://cameraphanthietgiare.com/
mrm8488/t5-base-finetuned-break_data
mrm8488
2021-10-20T08:31:28Z
962
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:break_data", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - break_data widget: - text: "paraphrase: The composer of Sands Theme plays what type of guitar?" --- # T5-base fine-tuned on break_data / QDMR-high-level ❓➡️📋 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [break_data](https://huggingface.co/nlp/viewer/?dataset=break_data&config=QDMR-high-level) dataset for **QDMRs**. ## Details of T5 📜 ➡️ 📜 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (QDMRs) - Dataset 📚 Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format. | Dataset | Split | # samples | | -------- | ----- | --------- | | break_data | train | 17503 | | break_data | valid | 3130 | Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The main change is at preprocessing ```inputs``` and ```targets``` we feed to the model. We do it as a *paraphrasing task*. ## Model in Action 🚀 ```python # Tip: By now, install transformers from source from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-break_data") model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-break_data") def get_decomposition(question): input_text = "paraphrase: %s </s>" % question features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=32) return tokenizer.decode(output[0]) question = "The composer of Sands Theme plays what type of guitar?" 
get_decomposition(question) # output: 'return Sands Theme ;return composer of #1 ;return guitar that #2 plays' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
aditeyabaral/sentencetransformer-bert-hinglish-small
aditeyabaral
2021-10-20T06:28:16Z
9
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-bert-hinglish-small This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-small') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-small) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 4617 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
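Since the card notes that the embeddings can be used for clustering or semantic search, here is a minimal sketch of scoring sentence similarity with the sentence-transformers API. The Hinglish sentences are placeholders, not examples from the training data.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-small')

query = "mujhe ek accha restaurant chahiye"            # placeholder Hinglish query
corpus = ["yeh restaurant bahut accha hai",
          "kal office mein meeting hai"]               # placeholder candidate sentences

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Cosine similarity between the query and each candidate
scores = util.pytorch_cos_sim(query_emb, corpus_emb)
print(scores)
```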
chrisjay/masakhane_benchmarks
chrisjay
2021-10-20T05:55:51Z
0
0
null
[ "african-languages", "machine-translation", "text", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: african-languages
tags:
- african-languages
- machine-translation
- text
license: apache-2.0
model-index:
- name: Masakhane Benchmark Models
  results:
  - task:
      name: Machine Translation
      type: machine-translation
    dataset:
      name: masakhane benchmarks
      args: african-languages
---

# Interacting with the Masakhane Benchmark Models

I created this demo for easy interaction with the [benchmark models on Masakhane](https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks), which were trained with [JoeyNMT](https://github.com/chrisemezue/joeynmt) (my forked version).

To access the space click [here](https://huggingface.co/spaces/chrisjay/masakhane-benchmarks).

To include your language, all you need to do is (see the layout sketch below):

1. Create a folder in the format *src-tgt/main* for your language pair, if it does not exist.
2. Inside the *main* folder put the following files:
    1. model checkpoint. Rename it to `best.ckpt`.
    2. `config.yaml` file. This is the JoeyNMT config file which loads the model and pre-processing parameters.
    3. `src_vocab.txt` file.
    4. `trg_vocab.txt` file.

The space currently supports these languages:

| source language | target language |
|:---------------:|:---------------:|
| English | Swahili |
| English | Afrikaans |
| English | Arabic |
| English | Urhobo |
| English | Ẹ̀dó |
| Efik | English |
| English | Hausa |
| English | Igbo |
| English | Fon |
| English | Twi |
| English | Dendi |
| English | Ẹ̀sán |
| English | Isoko |
| English | Kamba |
| English | Luo |
| English | Southern Ndebele |
| English | Tshivenda |
| Shona | English |
| Swahili | English |
| Yoruba | English |

TO DO:

1. Include more languages from the benchmark.
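A minimal sketch of the expected *src-tgt/main* layout for a hypothetical English–Swahili pair. The checker below is purely illustrative and not part of the space's own code; the `en-sw` folder name is an assumption.

```python
from pathlib import Path

# Hypothetical language pair folder: en-sw/main
main_dir = Path("en-sw") / "main"

required_files = ["best.ckpt", "config.yaml", "src_vocab.txt", "trg_vocab.txt"]

missing = [f for f in required_files if not (main_dir / f).exists()]
if missing:
    print(f"Missing files in {main_dir}: {missing}")
else:
    print(f"{main_dir} contains everything the space needs.")
```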
Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition
Bagus
2021-10-20T05:38:41Z
37
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio", "audio-classification", "speech", "el", "dataset:aesdd", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:04Z
---
language: el
datasets:
- aesdd
tags:
- audio
- audio-classification
- speech
license: apache-2.0
---

~~~
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!git clone https://github.com/m3hrdadfi/soxan
cd soxan
~~~

# prediction

~~~
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor

import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd

# Note: Wav2Vec2ForSpeechClassification is not part of transformers; it is
# defined in the cloned soxan repository (the import path below assumes its
# src/models.py layout).
from src.models import Wav2Vec2ForSpeechClassification
~~~

~~~
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
~~~

~~~
def speech_file_to_array_fn(path, sampling_rate):
    speech_array, _sampling_rate = torchaudio.load(path)
    resampler = torchaudio.transforms.Resample(_sampling_rate)
    speech = resampler(speech_array).squeeze().numpy()
    return speech


def predict(path, sampling_rate):
    speech = speech_file_to_array_fn(path, sampling_rate)
    inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    inputs = {key: inputs[key].to(device) for key in inputs}

    with torch.no_grad():
        logits = model(**inputs).logits

    scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [{"Emotion": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
    return outputs
~~~

# prediction example

~~~
# path for a sample
path = '/data/jtes_v1.1/wav/f01/ang/f01_ang_01.wav'

outputs = predict(path, sampling_rate)
~~~

~~~
[{'Emotion': 'anger', 'Score': '98.3%'},
 {'Emotion': 'disgust', 'Score': '0.0%'},
 {'Emotion': 'fear', 'Score': '0.4%'},
 {'Emotion': 'happiness', 'Score': '0.7%'},
 {'Emotion': 'sadness', 'Score': '0.5%'}]
~~~
huggingartists/adele
huggingartists
2021-10-20T04:50:21Z
5
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/adele", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
datasets:
- huggingartists/adele
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/4c3ac1f1d845d251671a892309b5f9b5.1000x1000x1.jpg&#39;)">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">Adele</div>
    <a href="https://genius.com/artists/adele">
        <div style="text-align: center; font-size: 14px;">@adele</div>
    </a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from Adele.

Dataset is available [here](https://huggingface.co/datasets/huggingartists/adele). It can be used with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/adele")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1yyqw6ss/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Adele's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3qruwjpr) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3qruwjpr/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/adele')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/adele")
model = AutoModelWithLMHead.from_pretrained("huggingartists/adele")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)

[![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)

[![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
aditeyabaral/sentencetransformer-distilbert-hinglish-big
aditeyabaral
2021-10-20T01:24:00Z
153
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-distilbert-hinglish-big This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-big') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-big) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 4617 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
yazdipour/text-to-sparql-t5-base-qald9
yazdipour
2021-10-19T23:25:20Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: sparql-qald9-t5-base-2021-10-19_23-02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sparql-qald9-t5-base-2021-10-19_23-02 This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS](https://huggingface.co/yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-----------------------------------------------------------------------------:|:-------:| | No log | 1.0 | 51 | 1.8300 | 19.0 | 0.3640 | 0.0346 | 0.1943 | 10.0358 | [72.88988261598658, 50.27455765710799, 35.93015446608462, 28.454070201643017] | 0.2281 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
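The auto-generated card above omits a usage example. The snippet below is a minimal sketch of running the checkpoint as a standard T5 text-to-text model; the exact input formatting the fine-tuned model expects is not documented here, so the plain natural-language question used as a prompt is an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "yazdipour/text-to-sparql-t5-base-qald9"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Plain natural-language question; the prompt format is an assumption
question = "Who is the mayor of Berlin?"
inputs = tokenizer(question, return_tensors="pt")

outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```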
huggingtweets/iamdevloper
huggingtweets
2021-10-19T20:59:40Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/iamdevloper/1634677176847/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1178631635606151168/yIlrcg4o_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">I Am Devloper</div> <div style="text-align: center; font-size: 14px;">@iamdevloper</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from I Am Devloper. | Data | I Am Devloper | | --- | --- | | Tweets downloaded | 3244 | | Retweets | 190 | | Short tweets | 233 | | Tweets kept | 2821 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2k1120ro/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iamdevloper's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wr63mia) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wr63mia/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/iamdevloper') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
aditeyabaral/sentencetransformer-bert-hinglish-big
aditeyabaral
2021-10-19T19:38:38Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-bert-hinglish-big This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-big') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-big) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 4617 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
hugggof/ConvTasNet-DAMP-Vocals
hugggof
2021-10-19T19:28:08Z
0
2
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05Z
---
tags:
- audacity
inference: false
sample_rate: 8000
---

This is an Audacity wrapper for the model, forked from the repository `groadabike/ConvTasNet_DAMP-VSEP_enhboth`.

The model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid.

The following info was copied directly from `groadabike/ConvTasNet_DAMP-VSEP_enhboth`:

### Description:

This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset.

### Training config:

```yaml
data:
  channels: 1
  n_src: 2
  root_path: data
  sample_rate: 16000
  samples_per_track: 10
  segment: 3.0
  task: enh_both
filterbank:
  kernel_size: 20
  n_filters: 256
  stride: 10
main_args:
  exp_dir: exp/train_convtasnet
  help: None
masknet:
  bn_chan: 256
  conv_kernel_size: 3
  hid_chan: 512
  mask_act: relu
  n_blocks: 8
  n_repeats: 4
  n_src: 2
  norm_type: gLN
  skip_chan: 256
optim:
  lr: 0.0003
  optimizer: adam
  weight_decay: 0.0
positional arguments:
training:
  batch_size: 12
  early_stop: True
  epochs: 50
  half_lr: True
  num_workers: 12
```

### Results:

```yaml
si_sdr: 14.018196157142519
si_sdr_imp: 14.017103133809577
sdr: 14.498517291333885
sdr_imp: 14.463389151567865
sir: 24.149634529133372
sir_imp: 24.11450638936735
sar: 15.338597389045935
sar_imp: -137.30634122401517
stoi: 0.7639416744417206
stoi_imp: 0.1843383526963759
```

### License notice:

This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
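The card documents the training configuration but not inference. Below is a minimal sketch of loading the upstream Asteroid checkpoint this wrapper is forked from and running separation; the repository id and the forward-pass usage follow Asteroid's usual conventions and are assumptions here, since this repo itself is an Audacity wrapper.

```python
import torch
from asteroid.models import ConvTasNet

# Upstream Asteroid checkpoint this wrapper is forked from (assumed id)
model = ConvTasNet.from_pretrained("groadabike/ConvTasNet_DAMP-VSEP_enhboth")

# Dummy mono mixture for illustration; in practice load audio at the
# model's expected sample rate
mixture = torch.randn(1, 8000 * 3)

with torch.no_grad():
    est_sources = model(mixture)  # shape: (batch, n_src, time)

print(est_sources.shape)
```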
hugggof/ConvTasNet_WHAM_sepclean
hugggof
2021-10-19T19:25:37Z
0
0
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05Z
---
tags:
- audacity
inference: false
---

This is an Audacity wrapper for the model, forked from the repository `mpariente/ConvTasNet_WHAM_sepclean`.

The model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid.

The following info was copied from `mpariente/ConvTasNet_WHAM_sepclean`:

### Description:

This model was trained by Manuel Pariente using the wham/ConvTasNet recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_clean` task of the WHAM! dataset.

### Training config:

```yaml
data:
  n_src: 2
  mode: min
  nondefault_nsrc: None
  sample_rate: 8000
  segment: 3
  task: sep_clean
  train_dir: data/wav8k/min/tr/
  valid_dir: data/wav8k/min/cv/
filterbank:
  kernel_size: 16
  n_filters: 512
  stride: 8
main_args:
  exp_dir: exp/wham
  gpus: -1
  help: None
masknet:
  bn_chan: 128
  hid_chan: 512
  mask_act: relu
  n_blocks: 8
  n_repeats: 3
  n_src: 2
  skip_chan: 128
optim:
  lr: 0.001
  optimizer: adam
  weight_decay: 0.0
positional arguments:
training:
  batch_size: 24
  early_stop: True
  epochs: 200
  half_lr: True
  num_workers: 4
```

### Results:

```yaml
si_sdr: 16.21326632846293
si_sdr_imp: 16.21441705664987
sdr: 16.615180021738933
sdr_imp: 16.464137807433435
sir: 26.860503975131923
sir_imp: 26.709461760826414
sar: 17.18312813480803
sar_imp: -131.99332048277296
stoi: 0.9619940905157323
stoi_imp: 0.2239480672473015
```

### License notice:

This work "ConvTasNet_WHAM!_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A) by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only). "ConvTasNet_WHAM!_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Manuel Pariente.
maxspaziani/bert-base-italian-xxl-uncased-finetuned-ComunaliRoma
maxspaziani
2021-10-19T17:58:13Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-italian-xxl-uncased-finetuned-ComunaliRoma results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-italian-xxl-uncased-finetuned-ComunaliRoma This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-uncased](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6717 | 1.0 | 1014 | 2.6913 | | 2.4869 | 2.0 | 2028 | 2.5843 | | 2.3411 | 3.0 | 3042 | 2.5095 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
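The auto-generated card gives no usage example. Since the pipeline tag is fill-mask, a minimal sketch would look like the following; the Italian example sentence is a placeholder, not taken from the training data.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="maxspaziani/bert-base-italian-xxl-uncased-finetuned-ComunaliRoma"
)

# Placeholder Italian sentence; [MASK] is the BERT mask token
for prediction in fill_mask("Il consiglio comunale di [MASK] si riunisce domani."):
    print(prediction["token_str"], prediction["score"])
```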
patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab
patrickvonplaten
2021-10-19T17:18:47Z
5
2
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-turkish-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-turkish-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4055 - Wer: 0.4800 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0179 | 4.21 | 400 | 1.4935 | 1.0249 | | 0.7075 | 8.42 | 800 | 0.4546 | 0.6071 | | 0.3072 | 12.63 | 1200 | 0.3947 | 0.5401 | | 0.2145 | 16.84 | 1600 | 0.4049 | 0.5194 | | 0.1647 | 21.05 | 2000 | 0.4199 | 0.5003 | | 0.1338 | 25.26 | 2400 | 0.4144 | 0.4859 | | 0.116 | 29.47 | 2800 | 0.4055 | 0.4800 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
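The card lists training details but no inference example. A minimal transcription sketch with the standard Wav2Vec2 API is shown below; the audio path is a placeholder, and the input is assumed to be mono speech resampled to 16 kHz as XLSR models expect.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Placeholder path; resample to the 16 kHz rate the model was trained on
speech, sr = torchaudio.load("sample_turkish.wav")
speech = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16_000)(speech).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```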
doc2query/stackexchange-t5-base-v1
doc2query
2021-10-19T16:26:19Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl", "arxiv:1904.08375", "arxiv:2104.08663", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: en
datasets:
- flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---

# doc2query/stackexchange-t5-base-v1

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).

It can be used for:

- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the approach re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.

## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/stackexchange-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."

input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=5)

print("Text:")
print(text)

print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f'{i + 1}: {query}')
```

**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.

## Training

This model was fine-tuned from [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 449k training steps. For the training script, see the `train_script.py` in this repository.

The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.

This model was trained on (title, best_answer) pairs from StackExchange.