Dataset columns:
- pipeline_tag: stringclasses (48 values)
- library_name: stringclasses (198 values)
- text: stringlengths (1 to 900k)
- metadata: stringlengths (2 to 438k)
- id: stringlengths (5 to 122)
- last_modified: null
- tags: listlengths (1 to 1.84k)
- sha: null
- created_at: stringlengths (25 to 25)
- arxiv: listlengths (0 to 201)
- languages: listlengths (0 to 1.83k)
- tags_str: stringlengths (17 to 9.34k)
- text_str: stringlengths (0 to 389k)
- text_lists: listlengths (0 to 722)
- processed_texts: listlengths (1 to 723)
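A minimal sketch of reading one row of a dataset with this schema via the `datasets` library (the repository id below is a placeholder, not part of this dump):

```python
from datasets import load_dataset

# Placeholder id; substitute the actual dataset repository.
ds = load_dataset("username/model-cards-dump", split="train")

row = ds[0]
print(row["pipeline_tag"], row["library_name"], row["id"])
print(row["text"][:200])           # raw model-card markdown
print(row["processed_texts"][:2])  # cleaned section snippets
```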
sentence-similarity
|
sentence-transformers
|
# Model description
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
dataset of 700M sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector that captures
the sentence's semantic information. The sentence vector may be used for information retrieval, clustering or sentence
similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/reddit_single-context_mpnet-base')
text = "Replace me by any text you'd like."
text_embedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
# Training procedure
## Pre-training
We use the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base).
Please refer to the model card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
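As a rough illustration of this objective (not the actual training script from the repository), the in-batch loss can be sketched in PyTorch as follows; the scale factor is an assumption for readability:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors, positives, scale=20.0):
    """Cross-entropy over scaled cosine similarities; row i's true pair is column i."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    scores = a @ p.T * scale                                      # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)   # true pairs sit on the diagonal
    return F.cross_entropy(scores, labels)
```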
### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up over 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository.
### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 700M.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
We only use the first context response when building the dataset.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Reddit conversational](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
|
{"language": "en", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
|
flax-sentence-embeddings/reddit_single-context_mpnet-base
| null |
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.06472"
] |
[
"en"
] |
TAGS
#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #en #arxiv-1904.06472 #endpoints_compatible #region-us
|
Model description
=================
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained 'mpnet-base' model and fine-tuned it on a
dataset of 700M sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
Community week using JAX/Flax for NLP & CV,
organized by Hugging Face. We developed this model as part of the project:
Train the Best Sentence Embedding Model Ever with 1B Training Pairs. We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
Intended uses
-------------
Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector that captures
the sentence's semantic information. The sentence vector may be used for information retrieval, clustering or sentence
similarity tasks.
How to use
----------
Here is how to use this model to get the features of a given text using the SentenceTransformers library:
Training procedure
==================
Pre-training
------------
We use the pretrained 'mpnet-base'.
Please refer to the model card for more detailed information about the pre-training procedure.
Fine-tuning
-----------
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up over 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository.
### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 700M.
We sampled each dataset with a weighted probability; the configuration is detailed in the 'data\_config.json' file.
We only use the first context response when building the dataset.
|
[
"### Hyper parameters\n\n\nWe trained ou model on a TPU v3-8. We train the model during 540k steps using a batch size of 1024 (128 per TPU core).\nWe use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with\na 2e-5 learning rate. The full training script is accessible in this current repository.",
"### Training data\n\n\nWe use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 700M sentences.\nWe sampled each dataset given a weighted probability which configuration is detailed in the 'data\\_config.json' file.\nWe only use the first context response when building the dataset."
] |
[
"TAGS\n#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #en #arxiv-1904.06472 #endpoints_compatible #region-us \n",
"### Hyper parameters\n\n\nWe trained ou model on a TPU v3-8. We train the model during 540k steps using a batch size of 1024 (128 per TPU core).\nWe use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with\na 2e-5 learning rate. The full training script is accessible in this current repository.",
"### Training data\n\n\nWe use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 700M sentences.\nWe sampled each dataset given a weighted probability which configuration is detailed in the 'data\\_config.json' file.\nWe only use the first context response when building the dataset."
] |
sentence-similarity
|
sentence-transformers
|
# flax-sentence-embeddings/st-codesearch-distilroberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It was trained on the [code_search_net](https://huggingface.co/datasets/code_search_net) dataset and can be used to search program code given text.
## Usage:
```python
from sentence_transformers import SentenceTransformer, util
# This list defines the different program code snippets
code = ["""def sort_list(x):
return sorted(x)""",
"""def count_above_threshold(elements, threshold=0):
counter = 0
for e in elements:
if e > threshold:
counter += 1
return counter""",
"""def find_min_max(elements):
min_ele = 99999
max_ele = -99999
for e in elements:
if e < min_ele:
min_ele = e
if e > max_ele:
max_ele = e
return min_ele, max_ele"""]
model = SentenceTransformer("flax-sentence-embeddings/st-codesearch-distilroberta-base")
# Encode our code into the vector space
code_emb = model.encode(code, convert_to_tensor=True)
# Interactive demo: Enter queries, and the method returns the best function from the
# 3 functions we defined
while True:
query = input("Query: ")
query_emb = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, code_emb)[0]
top_hit = hits[0]
print("Cossim: {:.2f}".format(top_hit['score']))
print(code[top_hit['corpus_id']])
print("\n\n")
```
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('flax-sentence-embeddings/st-codesearch-distilroberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Training
The model was trained from a DistilRoBERTa-base checkpoint for 10k training steps on the codesearch dataset with batch_size 256 and MultipleNegativesRankingLoss.
This is a preliminary model: it has not been evaluated thoroughly, and the training setup was not heavily tuned.
The model was trained with the parameters:
**DataLoader**:
`MultiDatasetDataLoader.MultiDatasetDataLoader` of length 5371 with parameters:
```
{'batch_size': 256}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20, 'similarity_fct': 'dot_score'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "warmupconstant",
"steps_per_epoch": 10000,
"warmup_steps": 500,
"weight_decay": 0.01
}
```
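Putting these pieces together, here is a minimal sketch of an equivalent training setup with sentence-transformers. The toy (query, code) pairs and the plain `DataLoader` stand in for the repository's `MultiDatasetDataLoader` and the full code_search_net data, and the base model here is wrapped with automatic mean pooling rather than the exact module stack printed below:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

# Toy (natural-language query, code) pairs standing in for code_search_net
train_examples = [
    InputExample(texts=["sort a list", "def sort_list(x):\n    return sorted(x)"]),
    InputExample(texts=["count elements above a threshold",
                        "def count_above(xs, t=0):\n    return sum(x > t for x in xs)"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

model = SentenceTransformer("distilroberta-base")
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20, similarity_fct=util.dot_score)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=500,
    scheduler="warmupconstant",
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```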
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
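The same module stack can be assembled explicitly with sentence-transformers building blocks; this constructs an untrained twin of the printed architecture (loading the published checkpoint, as in the usage examples above, is the normal route):

```python
from sentence_transformers import SentenceTransformer, models

word_embedding = models.Transformer("distilroberta-base", max_seq_length=128)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(),
                         pooling_mode_mean_tokens=True)
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])
print(model)  # mirrors the architecture shown above
```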
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "datasets": ["code_search_net"], "pipeline_tag": "sentence-similarity"}
|
flax-sentence-embeddings/st-codesearch-distilroberta-base
| null |
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"dataset:code_search_net",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #dataset-code_search_net #endpoints_compatible #has_space #region-us
|
# flax-sentence-embeddings/st-codesearch-distilroberta-base
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It was trained on the code_search_net dataset and can be used to search program code given text.
## Usage:
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Training
The model was trained from a DistilRoBERTa-base checkpoint for 10k training steps on the codesearch dataset with batch_size 256 and MultipleNegativesRankingLoss.
This is a preliminary model: it has not been evaluated thoroughly, and the training setup was not heavily tuned.
The model was trained with the parameters:
DataLoader:
'MultiDatasetDataLoader.MultiDatasetDataLoader' of length 5371 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# flax-sentence-embeddings/st-codesearch-distilroberta-base\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nIt was trained on the code_search_net dataset and can be used to search program code given text.",
"## Usage:",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Training\n\nThe model was trained with a DistilRoBERTa-base model for 10k training steps on the codesearch dataset with batch_size 256 and MultipleNegativesRankingLoss. \n\nIt is some preliminary model. It was neither tested nor was the trained quite sophisticated \n\n\nThe model was trained with the parameters:\n\nDataLoader:\n\n'MultiDatasetDataLoader.MultiDatasetDataLoader' of length 5371 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #dataset-code_search_net #endpoints_compatible #has_space #region-us \n",
"# flax-sentence-embeddings/st-codesearch-distilroberta-base\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nIt was trained on the code_search_net dataset and can be used to search program code given text.",
"## Usage:",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Training\n\nThe model was trained with a DistilRoBERTa-base model for 10k training steps on the codesearch dataset with batch_size 256 and MultipleNegativesRankingLoss. \n\nIt is some preliminary model. It was neither tested nor was the trained quite sophisticated \n\n\nThe model was trained with the parameters:\n\nDataLoader:\n\n'MultiDatasetDataLoader.MultiDatasetDataLoader' of length 5371 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# stackoverflow_mpnet-base
This is a microsoft/mpnet-base model trained on 18,562,443 (title, body) pairs from StackOverflow.
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search and other tasks. We used a pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) model and trained it using a Siamese network setup and a contrastive learning objective. 18,562,443 (title, body) pairs from StackOverflow were used as training data. For this model, mean pooling of the hidden states was used as the sentence embedding (a pooling sketch follows the usage example below). See data_config.json and train_script.py in this repository for how the model was trained and which datasets have been used.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence semantic information. The sentence vector may be used for semantic-search, clustering or sentence similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/stackoverflow_mpnet-base')
text = "Replace me by any question / answer you'd like."
text_embedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
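As noted above, the sentence embedding is the mean of the token-level hidden states. A minimal sketch of that pooling with the plain transformers API, equivalent in spirit to what SentenceTransformer does internally:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("flax-sentence-embeddings/stackoverflow_mpnet-base")
model = AutoModel.from_pretrained("flax-sentence-embeddings/stackoverflow_mpnet-base")

inputs = tokenizer("How do I reverse a list in Python?", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state            # (1, seq_len, 768)

mask = inputs["attention_mask"].unsqueeze(-1)             # zero out padding positions
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling -> (1, 768)
```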
# Training procedure
## Pre-training
We use the pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 80k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up over 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository.
### Training data
We used 18,562,443 (title, body) pairs from StackOverflow as training data.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| StackOverflow title body pairs | - | 18,562,443 |
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
|
flax-sentence-embeddings/stackoverflow_mpnet-base
| null |
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
stackoverflow\_mpnet-base
=========================
This is a microsoft/mpnet-base model trained on 18,562,443 (title, body) pairs from StackOverflow.
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search and other tasks. We used a pretrained microsoft/mpnet-base model and trained it using a Siamese network setup and a contrastive learning objective. 18,562,443 (title, body) pairs from StackOverflow were used as training data. For this model, mean pooling of the hidden states was used as the sentence embedding. See data\_config.json and train\_script.py in this repository for how the model was trained and which datasets have been used.
We developed this model during the
Community week using JAX/Flax for NLP & CV,
organized by Hugging Face. We developed this model as part of the project:
Train the Best Sentence Embedding Model Ever with 1B Training Pairs. We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
Intended uses
-------------
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence semantic information. The sentence vector may be used for semantic-search, clustering or sentence similarity tasks.
How to use
----------
Here is how to use this model to get the features of a given text using the SentenceTransformers library:
Training procedure
==================
Pre-training
------------
We use the pretrained microsoft/mpnet-base. Please refer to the model
card for more detailed information about the pre-training procedure.
Fine-tuning
-----------
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 80k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up over 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository.
### Training data
We used 18,562,443 (title, body) pairs from StackOverflow as training data.
|
[
"### Hyper parameters\n\n\nWe trained on model on a TPU v3-8. We train the model during 80k steps using a batch size of 1024 (128 per TPU core).\nWe use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with\na 2e-5 learning rate. The full training script is accessible in this current repository.",
"### Training data\n\n\nWe used 18,562,443 (title, body) pairs from StackOverflow as training data."
] |
[
"TAGS\n#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"### Hyper parameters\n\n\nWe trained on model on a TPU v3-8. We train the model during 80k steps using a batch size of 1024 (128 per TPU core).\nWe use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with\na 2e-5 learning rate. The full training script is accessible in this current repository.",
"### Training data\n\n\nWe used 18,562,443 (title, body) pairs from StackOverflow as training data."
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-bert-text2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4969
- Perplexity: 12.14
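For reference, the reported perplexity is simply the exponential of the evaluation loss:

```python
import math

eval_loss = 2.4969
perplexity = math.exp(eval_loss)
print(f"{perplexity:.2f}")  # 12.14, matching the figure above
```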
## Model description
More information needed
## Intended uses & limitations
More information needed
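Pending more documentation, a minimal fill-mask sketch with the transformers pipeline (the prompt is only illustrative):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="flboehm/reddit-bert-text2")
for prediction in unmasker("Reddit is a great place to [MASK] about anything."):
    print(prediction["token_str"], round(prediction["score"], 3))
```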
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8378 | 1.0 | 1007 | 2.6379 |
| 2.6493 | 2.0 | 2014 | 2.5655 |
| 2.5561 | 3.0 | 3021 | 2.5382 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "reddit-bert-text2", "results": []}]}
|
flboehm/reddit-bert-text2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
reddit-bert-text2
=================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4969
* Perplexity: 12.14
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu113
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-bert-text3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
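These settings map directly onto transformers `TrainingArguments`; a sketch of the equivalent configuration (the output directory is an assumption, not stated on the card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="reddit-bert-text3",  # assumption
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```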
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1924 | 1.0 | 981 | 2.6541 |
| 2.7158 | 2.0 | 1962 | 2.5480 |
| 2.6583 | 3.0 | 2943 | 2.5072 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "reddit-bert-text3", "results": []}]}
|
flboehm/reddit-bert-text3
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
reddit-bert-text3
=================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5346
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu113
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-bert-text4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1071 | 1.0 | 978 | 2.6170 |
| 2.6788 | 2.0 | 1956 | 2.5332 |
| 2.6112 | 3.0 | 2934 | 2.4844 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "reddit-bert-text4", "results": []}]}
|
flboehm/reddit-bert-text4
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
reddit-bert-text4
=================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4763
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu113
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-bert-text_10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9626 | 1.0 | 946 | 2.6163 |
| 2.6934 | 2.0 | 1892 | 2.5612 |
| 2.5971 | 3.0 | 2838 | 2.5023 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "reddit-bert-text_10", "results": []}]}
|
flboehm/reddit-bert-text_10
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
reddit-bert-text\_10
====================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5198
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.0+cu113
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-bert-text_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4702
- Perplexity: 11.82
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9383 | 1.0 | 947 | 2.5420 |
| 2.6448 | 2.0 | 1894 | 2.5241 |
| 2.586 | 3.0 | 2841 | 2.4833 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "reddit-bert-text_20", "results": []}]}
|
flboehm/reddit-bert-text_20
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
reddit-bert-text\_20
====================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4702
* Perplexity: 11.82
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.0+cu113
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-bert-text5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0257 | 1.0 | 945 | 2.6167 |
| 2.7138 | 2.0 | 1890 | 2.5529 |
| 2.6363 | 3.0 | 2835 | 2.5463 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "reddit-bert-text5", "results": []}]}
|
flboehm/reddit-bert-text_5
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
reddit-bert-text5
=================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5749
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.0+cu113
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# youtube-bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.691 | 1.0 | 1077 | 2.5445 |
| 2.5768 | 2.0 | 2154 | 2.5226 |
| 2.5227 | 3.0 | 3231 | 2.5027 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "youtube-bert", "results": []}]}
|
flboehm/youtube-bert
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
youtube-bert
============
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4771
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu113
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# youtube-bert_10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4456
- Perplexity: 11.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6799 | 1.0 | 1899 | 2.5135 |
| 2.5736 | 2.0 | 3798 | 2.4612 |
| 2.5172 | 3.0 | 5697 | 2.4363 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "youtube-bert_10", "results": []}]}
|
flboehm/youtube-bert_10
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
youtube-bert\_10
================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4456
* Perplexity: 11.54
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
# Cheapity3 🐷
GPT-like T5 model trained to generate text in multiple languages.
## Motivation
- GPT models are expensive to run.
- GPT models are monolingual.
## Solution
- Maybe, Small Models aren't Terrible (*SMarT*)
- Plus, they are cheaper to run.
I fine-tuned T5 on multiple languages (🇬🇧 English, 🇩🇪 German, 🇫🇷 French) and on many academic text snippets from
various domains such as tech, law, finance and science, to generate text just like GPT models do.
## Usage - [NLPlayStore](https://github.com/flexudy/NLPlayStore) 👈
```python
from store.service_management import ServiceManager
service_manager = ServiceManager().get_service("cheapity3")
service_manager.install()
service = service_manager.launch()
input_text = "The mechanical engineering field requires ... "
generated_texts = service.play(input_text, 15)  # A list of generated texts
```
## Usage - Hugging Face Transformers 🤗
- Provide some text e.g `"Italy, officially the Italian Republic is a country consisting of"`
- Tell Cheapity3 how many words you want to generate e.g `15` -- 😃 Yes, you can control the length.
- Cheapity3 reads your text and generates a continuation containing approximately 15 words.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("flexudy/cheapity3")
model = AutoModelWithLMHead.from_pretrained("flexudy/cheapity3")
input_text = """The mechanical engineering field requires an understanding of core areas including mechanics, dynamics,
thermodynamics, materials science, structural analysis, and
electricity. { _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ }""" # 15 words
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512)
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
outputs = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
max_length=128,
do_sample=True,
early_stopping=True,
num_return_sequences=4,
repetition_penalty=2.5
)
for i in range(4):
print(tokenizer.decode(outputs[i], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
**INPUT: The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.**
```
> Cheapity3 continues with beam search:
... The field of mechanical engineering is a broad field that includes many core areas of engineering.
> Cheapity3 continues with sampling and top_k=50:
... Developing the knowledge base for these core areas will enable engineers to build their capabilities rapidly and efficiently. ...
... The field of mechanics offers a variety and broad range for applications throughout the engineering/technological fields. ...
... Mechanics generally is not understood by students. While they can be employed in the field, mechanical engineering ...
... Introduction to mechanical engineering and core fields including chemical products, materials science, structural analysis, and geomatics ...
```
## Pretty decent right?
Hence, whenever you feel like GPT3 is too expensive, Cheapity3 comes to the rescue 🤗.
## Model Training FYI
- T5-base model
- Trained on ONLY 1M sentences from English, French and German text
- Mostly text from Wikipedia, arxiv and QA datasets
- Learning rate: 0.00003
- 2 epochs
- Max input: 512 tokens
- Max output: 128 tokens
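The training script itself is not included on the card. A hypothetical sketch of fine-tuning a T5-base checkpoint with the listed settings (the toy dataset, column names and trainer wiring below are assumptions, not the author's actual code):

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Toy (prompt, continuation) pair; the real data is ~1M EN/DE/FR sentences.
raw = Dataset.from_dict({
    "prompt": ["Italy, officially the Italian Republic is { _ _ _ _ _ }"],
    "target": ["a country consisting of a peninsula and surrounding islands."],
})

def preprocess(batch):
    enc = tokenizer(batch["prompt"], truncation=True, max_length=512)                         # max input: 512
    enc["labels"] = tokenizer(batch["target"], truncation=True, max_length=128)["input_ids"]  # max output: 128
    return enc

train_ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="cheapity3-sketch", learning_rate=3e-5, num_train_epochs=2),
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```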
|
{}
|
flexudy/cheapity3
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Cheapity3
GPT-like T5 model trained to generate text in multiple languages.
## Motivation
- GPT models are expensive to run.
- GPT models are monolingual.
## Solution
- Maybe, Small Models aren't Terrible (*SMarT*)
- Plus, they are cheaper to run.
I fine-tuned T5 on multiple languages (🇬🇧 English, 🇩🇪 German, 🇫🇷 French) and multiple academic text snippets from
various domains like tech, law, finance and science etc. to generate text, just like GPT models do.
## Usage - NLPlayStore
## Usage - Hugging Face Transformers
- Provide some text e.g '"Italy, officially the Italian Republic is a country consisting of"'
- Tell Cheapity3 how many words you want to generate e.g '15' -- Yes, you can control the length.
- Cheapity3 reads your text and generates a continuation containing approximately 15 words.
INPUT: The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.
## Pretty decent right?
Hence, whenever you feel like GPT3 is too expensive, Cheapity3 comes to the rescue .
## Model Training FYI
- T5-base model
- Trained on ONLY 1M sentences from English, French and German text
- Mostly text from Wikipedia, arxiv and QA datasets
- Learning rate: 0.00003
- 2 epochs
- Max input: 512 tokens
- Max output: 128 tokens
|
[
"# Cheapity3 \n\nGPT-like T5 model trained to generate text in multiple languages.",
"## Motivation\n\n- GPT models are expensive to run.\n- GPT models are monolingual.",
"## Solution\n\n- Maybe, Small Models aren't Terrible (*SMarT*)\n- Plus, they are cheaper to run.\n\nI fine-tuned T5 on multiple languages (🇬🇧 English, 🇩🇪 German, 🇫🇷 French) and multiple academic text snippets from\nvarious domains like tech, law, finance and science etc. to generate text, just like GPT models do.",
"## Usage - NLPlayStore",
"## Usage - Hugging Face Transformers \n\n- Provide some text e.g '\"Italy, officially the Italian Republic is a country consisting of\"'\n- Tell Cheapity3 how many words you want to generate e.g '15' -- Yes, you can control the length.\n- Cheapity3 reads your text and generates a continuation containing approximately 15 words.\n\n\n\nINPUT: The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.",
"## Pretty decent right?\n\nHence, whenever you feel like GPT3 is too expensive, Cheapity3 comes to the rescue .",
"## Model Training FYI\n- T5-base model\n- Trained on ONLY 1M sentences from English, French and German text\n- Mostly text from Wikipedia, arxiv and QA datasets\n- Learning rate: 0.00003\n- 2 epochs\n- Max input: 512 tokens\n- Max output: 128 tokens"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Cheapity3 \n\nGPT-like T5 model trained to generate text in multiple languages.",
"## Motivation\n\n- GPT models are expensive to run.\n- GPT models are monolingual.",
"## Solution\n\n- Maybe, Small Models aren't Terrible (*SMarT*)\n- Plus, they are cheaper to run.\n\nI fine-tuned T5 on multiple languages (🇬🇧 English, 🇩🇪 German, 🇫🇷 French) and multiple academic text snippets from\nvarious domains like tech, law, finance and science etc. to generate text, just like GPT models do.",
"## Usage - NLPlayStore",
"## Usage - Hugging Face Transformers \n\n- Provide some text e.g '\"Italy, officially the Italian Republic is a country consisting of\"'\n- Tell Cheapity3 how many words you want to generate e.g '15' -- Yes, you can control the length.\n- Cheapity3 reads your text and generates a continuation containing approximately 15 words.\n\n\n\nINPUT: The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.",
"## Pretty decent right?\n\nHence, whenever you feel like GPT3 is too expensive, Cheapity3 comes to the rescue .",
"## Model Training FYI\n- T5-base model\n- Trained on ONLY 1M sentences from English, French and German text\n- Mostly text from Wikipedia, arxiv and QA datasets\n- Learning rate: 0.00003\n- 2 epochs\n- Max input: 512 tokens\n- Max output: 128 tokens"
] |
text2text-generation
|
transformers
|
# Towards Neuro-Symbolic Language Understanding

At [Flexudy](https://flexudy.com), we look for ways to unify symbolic and sub-symbolic methods to improve model interpretation and inference.
## Problem
1. Word embeddings are awesome 🚀. However, no one really knows what an array of 768 numbers actually means.
2. Text/Token classification is also awesome ❤️. Still, classifying things into a finite set of concepts is rather limited.
3. Last but not least, how do I know that the word *cat* is a **mammal** and also an **animal** if my neural network is only trained to predict whether something is an animal or not?
## Solution
1. It would be cool if my neural network would just know that **cat** is an **animal** right? *∀x.Cat(x) ⇒ Animal(x)*.
Or for example, (*∀x.SchöneBlumen(x) ⇒ Blumen(x)*) -- English meaning: For all x, If x is a beautiful flower, then x is still a flower. --
2. All of a sudden, tasks like **Question Answering**, **Summarization**, **Named Entity Recognition** or even **Intent Classification** etc become easier right?
Well, one would probably still need time to build a good and robust solution that is not as large as **GPT3**.
Like [Peter Gärdenfors, author of conceptual spaces](https://www.goodreads.com/book/show/1877443.Conceptual_Spaces), we are trying to find ways to navigate between the symbolic and the sub-symbolic by thinking in concepts.
Should such a solution exist, one could easily leverage true logical reasoning engines on natural language.
How awesome would that be? 💡
## Flexudy's Conceptor
1. We developed a poor man's implementation of the ideal solution described above.
2. Though it is a poor man's model, **it is still a useful one** 🤗.
### Usage
No library should make anyone suffer. Especially not one built on top of 🤗 **HF Transformers**.
Go to the [Github repo](https://github.com/flexudy/natural-language-logic)
`pip install git+https://github.com/flexudy/natural-language-logic.git@v0.0.1`
```python
from flexudy.conceptor.start import FlexudyConceptInferenceMachineFactory
# Load me only once
concept_inference_machine = FlexudyConceptInferenceMachineFactory.get_concept_inference_machine()
# A list of terms.
terms = ["cat", "dog", "economics and sociology", "public company"]
# If you don't pass the language, a language detector will attempt to predict it for you
# If any error occurs, the language defaults to English.
language = "en"
# Predict concepts
# You can also pass the batch_size=2 and the beam_size=4
concepts = concept_inference_machine.infer_concepts(terms, language=language)
```
Output:
```python
{'cat': ['mammal', 'animal'], 'dog': ['hound', 'animal'], 'economics and sociology': ['both fields of study'], 'public company': ['company']}
```
### How was it trained?
1. Using Google's T5-base and T5-small. Both models are released on the Hugging Face Hub.
2. T5-base was trained for only two epochs while T5-small was trained for 5 epochs.
## Where did you get the data?
1. I extracted and curated a fragment of [Conceptnet](https://conceptnet.io/)
2. In particular, only the IsA relation was used.
3. Note that one term can belong to multiple concepts (which is pretty cool if you think about [Fuzzy Description Logics](https://lat.inf.tu-dresden.de/~stefborg/Talks/QuantLAWorkshop2013.pdf)).
Multiple inheritance, however, means that some terms belong to very many concepts. Hence, I decided to randomly throw away some of them due to the **maximum length limitation**.
### Setup
1. I finally allowed only `2` to `4` concepts at random for each term. This means, there is still great potential to make the models generalise better 🚀.
2. I used a total of `279884` training examples and `1260` for testing. Edges -- i.e `IsA(concept u, concept v)` -- in both sets are disjoint.
3. Trained for `15K` steps with learning rate linear decay during each step, starting at `0.001`.
4. Used `RAdam Optimiser` with weight_decay =`0.01` and batch_size =`36` (a rough sketch of this setup follows the list).
5. Source and target max length were both `64`.
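For illustration, here is a rough sketch of that optimisation setup (not the original training script: the IsA-pair data pipeline, the `batch_size=36` data loader and the training loop are omitted, and `torch.optim.RAdam` stands in for whatever RAdam implementation was actually used):
```python
import torch
from transformers import T5ForConditionalGeneration, get_linear_schedule_with_warmup

model = T5ForConditionalGeneration.from_pretrained("t5-base")

# RAdam with weight decay 0.01 and a starting learning rate of 0.001
optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3, weight_decay=0.01)

# Linear decay of the learning rate over the 15K training steps
total_steps = 15_000
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=total_steps
)
```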
### Multilingual Models
1. The "conceptor" model is multilingual. English, German and French is supported.
2. [Conceptnet](https://conceptnet.io/) supports many languages, but I just chose those three because those are the ones I speak.
### Metrics for flexudy-conceptor-t5-base
| Metric | Score |
| ------------- |:-------------:|
| Exact Match | 36.67 |
| F1 | 43.08 |
| Loss smooth | 1.214 |
Unfortunately, we no longer have the metrics for flexudy-conceptor-t5-small. If I recall correctly, base was just slightly better on the test set (ca. `2%` F1).
## Why not just use the data if you have it structured already?
Conceptnet is very large. Even if you just consider loading a fragment into your RAM, say with only 100K edges, this is still a large graph.
Especially, if you think about how you will save the node embeddings efficiently for querying.
If you prefer this approach, [Milvus](https://github.com/milvus-io/pymilvus) can be of great help.
You can compute query embeddings and try to find the best match. From there (after matching), you can navigate through the graph at `100%` precision.
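As a toy illustration of that matching step (random placeholder embeddings instead of a real sentence encoder, and brute-force search instead of Milvus):
```python
import numpy as np

rng = np.random.default_rng(0)
node_names = ["cat", "dog", "public company"]              # graph nodes
node_embeddings = rng.normal(size=(len(node_names), 768))  # placeholder vectors
query_embedding = rng.normal(size=768)                     # placeholder for an encoded query

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_embedding, e) for e in node_embeddings]
best_node = node_names[int(np.argmax(scores))]
print(best_node)  # from this node, follow the IsA edges of the graph at 100% precision
```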
|
{}
|
flexudy/t5-base-conceptor
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Towards Neuro-Symbolic Language Understanding
=============================================
!alt text
At Flexudy, we look for ways to unify symbolic and sub-symbolic methods to improve model interpretation and inference.
Problem
-------
1. Word embeddings are awesome . However, no one really knows what an array of 768 numbers means?
2. Text/Token classification is also awesome ️. Still, classifying things into a finite set of concepts is rather limited.
3. Last but not least, how do I know that the word *cat* is a mammal and also an animal if my neural network is only trained to predict whether something is an animal or not?
Solution
--------
1. It would be cool if my neural network would just know that cat is an animal right? *∀x.Cat(x) ⇒ Animal(x)*.
Or for example, (*∀x.SchöneBlumen(x) ⇒ Blumen(x)*) -- English meaning: For all x, If x is a beautiful flower, then x is still a flower. --
2. All of a sudden, tasks like Question Answering, Summarization, Named Entity Recognition or even Intent Classification etc become easier right?
Well, one might probably still need time to build a good and robust solution that is not as large as GPT3.
Like Peter Gärdenfors, author of conceptual spaces, we are trying to find ways to navigate between the symbolic and the sub-symbolic by thinking in concepts.
Should such a solution exist, one could easily leverage true logical reasoning engines on natural language.
How awesome would that be?
Flexudy's Conceptor
-------------------
1. We developed a poor man's implementation of the ideal solution described above.
2. Though it is a poor man's model, it is still a useful one .
### Usage
No library should anyone suffer. Especially not if it is built on top of HF Transformers.
Go to the Github repo
'pip install git+URL
Output:
### How was it trained?
1. Using Google's T5-base and T5-small. Both models are released on the Hugging Face Hub.
2. T5-base was trained for only two epochs while T5-small was trained for 5 epochs.
Where did you get the data?
---------------------------
1. I extracted and curated a fragment of Conceptnet
2. In particular, only the IsA relation was used.
3. Note that one term can belong to multiple concepts (which is pretty cool if you think about Fuzzy Description Logics).
Multiple inheritances however mean some terms belong to so many concepts. Hence, I decided to randomly throw away some due to the maximum length limitation.
### Setup
1. I finally allowed only '2' to '4' concepts at random for each term. This means, there is still great potential to make the models generalise better .
2. I used a total of '279884' training examples and '1260' for testing. Edges -- i.e 'IsA(concept u, concept v)' -- in both sets are disjoint.
3. Trained for '15K' steps with learning rate linear decay during each step. Starting at '0.001'
4. Used 'RAdam Optimiser' with weight\_decay ='0.01' and batch\_size ='36'.
5. Source and target max length were both '64'.
### Multilingual Models
1. The "conceptor" model is multilingual. English, German and French is supported.
2. Conceptnet supports many languages, but I just chose those three because those are the ones I speak.
### Metrics for flexudy-conceptor-t5-base
Unfortunately, we no longer have the metrics for flexudy-conceptor-t5-small. If I recall correctly, base was just slightly better on the test set (ca. '2%' F1).
Why not just use the data if you have it structured already?
------------------------------------------------------------
Conceptnet is very large. Even if you just consider loading a fragment into your RAM, say with only 100K edges, this is still a large graph.
Especially, if you think about how you will save the node embeddings efficiently for querying.
If you prefer this approach, Milvus can be of great help.
You can compute query embeddings and try to find the best match. From there (after matching), you can navigate through the graph at '100%' precision.
|
[
"### Usage\n\n\nNo library should anyone suffer. Especially not if it is built on top of HF Transformers.\n\n\nGo to the Github repo\n\n\n'pip install git+URL\n\n\nOutput:",
"### How was it trained?\n\n\n1. Using Google's T5-base and T5-small. Both models are released on the Hugging Face Hub.\n2. T5-base was trained for only two epochs while T5-small was trained for 5 epochs.\n\n\nWhere did you get the data?\n---------------------------\n\n\n1. I extracted and curated a fragment of Conceptnet\n2. In particular, only the IsA relation was used.\n3. Note that one term can belong to multiple concepts (which is pretty cool if you think about Fuzzy Description Logics).\nMultiple inheritances however mean some terms belong to so many concepts. Hence, I decided to randomly throw away some due to the maximum length limitation.",
"### Setup\n\n\n1. I finally allowed only '2' to '4' concepts at random for each term. This means, there is still great potential to make the models generalise better .\n2. I used a total of '279884' training examples and '1260' for testing. Edges -- i.e 'IsA(concept u, concept v)' -- in both sets are disjoint.\n3. Trained for '15K' steps with learning rate linear decay during each step. Starting at '0.001'\n4. Used 'RAdam Optimiser' with weight\\_decay ='0.01' and batch\\_size ='36'.\n5. Source and target max length were both '64'.",
"### Multilingual Models\n\n\n1. The \"conceptor\" model is multilingual. English, German and French is supported.\n2. Conceptnet supports many languages, but I just chose those three because those are the ones I speak.",
"### Metrics for flexudy-conceptor-t5-base\n\n\n\nUnfortunately, we no longer have the metrics for flexudy-conceptor-t5-small. If I recall correctly, base was just slightly better on the test set (ca. '2%' F1).\n\n\nWhy not just use the data if you have it structured already?\n------------------------------------------------------------\n\n\nConceptnet is very large. Even if you just consider loading a fragment into your RAM, say with only 100K edges, this is still a large graph.\nEspecially, if you think about how you will save the node embeddings efficiently for querying.\nIf you prefer this approach, Milvus can be of great help.\nYou can compute query embeddings and try to find the best match. From there (after matching), you can navigate through the graph at '100%' precision."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Usage\n\n\nNo library should anyone suffer. Especially not if it is built on top of HF Transformers.\n\n\nGo to the Github repo\n\n\n'pip install git+URL\n\n\nOutput:",
"### How was it trained?\n\n\n1. Using Google's T5-base and T5-small. Both models are released on the Hugging Face Hub.\n2. T5-base was trained for only two epochs while T5-small was trained for 5 epochs.\n\n\nWhere did you get the data?\n---------------------------\n\n\n1. I extracted and curated a fragment of Conceptnet\n2. In particular, only the IsA relation was used.\n3. Note that one term can belong to multiple concepts (which is pretty cool if you think about Fuzzy Description Logics).\nMultiple inheritances however mean some terms belong to so many concepts. Hence, I decided to randomly throw away some due to the maximum length limitation.",
"### Setup\n\n\n1. I finally allowed only '2' to '4' concepts at random for each term. This means, there is still great potential to make the models generalise better .\n2. I used a total of '279884' training examples and '1260' for testing. Edges -- i.e 'IsA(concept u, concept v)' -- in both sets are disjoint.\n3. Trained for '15K' steps with learning rate linear decay during each step. Starting at '0.001'\n4. Used 'RAdam Optimiser' with weight\\_decay ='0.01' and batch\\_size ='36'.\n5. Source and target max length were both '64'.",
"### Multilingual Models\n\n\n1. The \"conceptor\" model is multilingual. English, German and French is supported.\n2. Conceptnet supports many languages, but I just chose those three because those are the ones I speak.",
"### Metrics for flexudy-conceptor-t5-base\n\n\n\nUnfortunately, we no longer have the metrics for flexudy-conceptor-t5-small. If I recall correctly, base was just slightly better on the test set (ca. '2%' F1).\n\n\nWhy not just use the data if you have it structured already?\n------------------------------------------------------------\n\n\nConceptnet is very large. Even if you just consider loading a fragment into your RAM, say with only 100K edges, this is still a large graph.\nEspecially, if you think about how you will save the node embeddings efficiently for querying.\nIf you prefer this approach, Milvus can be of great help.\nYou can compute query embeddings and try to find the best match. From there (after matching), you can navigate through the graph at '100%' precision."
] |
text2text-generation
|
transformers
|

# Sentence-Doctor
Sentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. Model works on English, German and French text.
## 1. Problem:
Many NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and **Sentence Boundary Detection**.
As a consequence, errors caused by these tasks in your NLP pipeline can affect the quality of models in applications, especially since models are often trained on **clean** input.
## 2. Solution:
Here we provide a model that **attempts** to reconstruct sentences based on their context (surrounding text). The task is pretty straightforward:
* `Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence`.
## 3. Use Cases:
* Attempt to repair noisy sentences that were extracted with OCR software or text extractors.
* Attempt to repair sentence boundaries.
* Example (in German): **Input: "und ich bin im**",
* Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren."
* Output: "John und ich bin im Jahr 1990 geboren"
* Possibly sentence level spelling correction -- Although this is not the intended use.
* Input: "I went to church **las yesteday**" => Output: "I went to church last Sunday".
## 4. Disclaimer
Note how we always emphasise the word *attempt*. The current version of the model was only trained on **150K** sentences from the tatoeba dataset: https://tatoeba.org/eng. (50K per language -- En, Fr, De).
Hence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data.
## 5. Datasets
We generated synthetic data from the tatoeba dataset: https://tatoeba.org/eng, randomly applying different transformations to words and characters based on some probabilities. The datasets are available in the data folder (where **sentence_doctor_dataset_300K** is a larger dataset with 100K sentences for each language).
## 6. Usage
### 6.1 Preprocessing
* Let us assume we have the following text (Note that there are no punctuation marks in the text):
```python
text = "That is my job I am a medical doctor I save lives"
```
* You decided to extract the sentences and, for some obscure reason, you obtained these sentences:
```python
sentences = ["That is my job I a", "m a medical doct", "I save lives"]
```
* You now wish to correct the sentence **"m a medical doct"**.
Here is the single preprocessing step for the model:
```python
input_text = "repair_sentence: " + sentences[1] + " context: {" + sentences[0] + "}{" + sentences[2] + "} </s>"
```
**Explanation**:<br/>
* We are telling the model to repair the sentence with the prefix "repair_sentence: "
* Then append the sentence we want to repair **sentence[1]** which is "m a medical doct"
* Next we give some context to the model. In this case, the context is some text that occurred before the sentence and some text that appeared after the sentence in the original text.
  * To do that, we append the keyword "context: "
  * Append **{sentence[0]}** "{That is my job I a}". (Note how it is surrounded by curly braces).
  * Append **{sentence[2]}** "{or I save lives}".
* Finally, we tell the model this is the end of the input with </s>.
```python
print(input_text) # repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>
```
<br/>
**The context is optional**, so the input could also be ```repair_sentence: m a medical doct context: {}{} </s>```
### 6.2 Inference
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
input_text = "repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=1)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
assert sentence == "I am a medical doctor."
```
## 7. Fine-tuning
We also provide a script `train_any_t5_task.py` that might help you fine-tune any Text2Text task with T5. We added #TODO comments all over to help you train with ease. For example:
```python
# TODO Set your training epochs
config.TRAIN_EPOCHS = 3
```
If you don't want to read the #TODO comments, just pass in your data like this
```python
# TODO Where is your data ? Enter the path
trainer.start("data/sentence_doctor_dataset_300.csv")
```
and voila!! Please feel free to correct any mistakes in the code and make a pull request.
## 8. Attribution
* [Huggingface](https://huggingface.co/) transformer lib for making this possible
* Abhishek Kumar Mishra's transformer [tutorial](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) on text summarisation. Our training code is just a modified version of their code. So many thanks.
* We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the [authors](https://huggingface.co/WikinewsSum)
* We also read a lot of work from [Suraj Patil](https://github.com/patil-suraj)
* No one has been forgotten, hopefully :)
|
{}
|
flexudy/t5-base-multi-sentence-doctor
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!avatar
# Sentence-Doctor
Sentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. Model works on English, German and French text.
## 1. Problem:
Many NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and Sentence Boundary Detection
As a consequence errors caused by these tasks in your NLP pipeline can affect the quality of models in applications. Especially since models are often trained on clean input.
## 2. Solution:
Here we provide a model that attempts to reconstruct sentences based on the its context (sourrounding text). The task is pretty straightforward:
* 'Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence'.
## 3. Use Cases:
* Attempt to repair noisy sentences that where extracted with OCR software or text extractors.
* Attempt to repair sentence boundaries.
* Example (in German): Input: "und ich bin im",
* Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren."
* Output: "John und ich bin im Jahr 1990 geboren"
* Possibly sentence level spelling correction -- Although this is not the intended use.
* Input: "I went to church las yesteday" => Output: "I went to church last Sunday".
## 4. Disclaimer
Note how we always emphises on the word *attempt*. The current version of the model was only trained on 150K sentences from the tatoeba dataset: URL (50K per language -- En, Fr, De).
Hence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data.
## 5. Datasets
We generated synthetic data from the tatoeba dataset: URL Randomly applying different transformations on words and characters based on some probabilities. The datasets are available in the data folder (where sentence_doctor_dataset_300K is a larger dataset with 100K sentences for each language).
## 6. Usage
### 6.1 Preprocessing
* Let us assume we have the following text (Note that there are no punctuation marks in the text):
* You decided extract the sentences and for some obscure reason, you obtained these sentences:
* You now wish to correct the sentence "m a medical doct".
Here is the single preprocessing step for the model:
Explanation:</br>
* We are telling the model to repair the sentence with the prefix "repair_sentence: "
* Then append the sentence we want to repair sentence[1] which is "m a medical doct"
* Next we give some context to the model. In the case, the context is some text that occured before the sentence and some text that appeard after the sentence in the original text.
* To do that, we append the keyword "context :"
* Append {sentence[0]} "{That is my job I a}". (Note how it is sourrounded by curly braces).
* Append {sentence[2]} "{I save lives}".
* At last we tell the model this is the end of the input with </s>.
<br/>
The context is optional, so the input could also be
### 6.2 Inference
## 7. Fine-tuning
We also provide a script 'train_any_t5_task.py' that might help you fine-tune any Text2Text Task with T5. We added #TODO comments all over to help you use train with ease. For example:
If you don't want to read the #TODO comments, just pass in your data like this
and voila!! Please feel free to correct any mistakes in the code and make a pull request.
## 8. Attribution
* Huggingface transformer lib for making this possible
* Abhishek Kumar Mishra's transformer tutorial on text summarisation. Our training code is just a modified version of their code. So many thanks.
* We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the authors
* We also read a lot of work from Suraj Patil
* No one has been forgotten, hopefully :)
|
[
"# Sentence-Doctor\nSentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. Model works on English, German and French text.",
"## 1. Problem:\nMany NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and Sentence Boundary Detection\nAs a consequence errors caused by these tasks in your NLP pipeline can affect the quality of models in applications. Especially since models are often trained on clean input.",
"## 2. Solution:\nHere we provide a model that attempts to reconstruct sentences based on the its context (sourrounding text). The task is pretty straightforward:\n* 'Given an \"erroneous\" sentence, and its context, reconstruct the \"intended\" sentence'.",
"## 3. Use Cases:\n* Attempt to repair noisy sentences that where extracted with OCR software or text extractors.\n* Attempt to repair sentence boundaries.\n * Example (in German): Input: \"und ich bin im\", \n * Prefix_Context: \"Hallo! Mein Name ist John\", Postfix_Context: \"Januar 1990 geboren.\"\n * Output: \"John und ich bin im Jahr 1990 geboren\"\n* Possibly sentence level spelling correction -- Although this is not the intended use.\n * Input: \"I went to church las yesteday\" => Output: \"I went to church last Sunday\".",
"## 4. Disclaimer\nNote how we always emphises on the word *attempt*. The current version of the model was only trained on 150K sentences from the tatoeba dataset: URL (50K per language -- En, Fr, De).\nHence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data.",
"## 5. Datasets\nWe generated synthetic data from the tatoeba dataset: URL Randomly applying different transformations on words and characters based on some probabilities. The datasets are available in the data folder (where sentence_doctor_dataset_300K is a larger dataset with 100K sentences for each language).",
"## 6. Usage",
"### 6.1 Preprocessing\n* Let us assume we have the following text (Note that there are no punctuation marks in the text):\n\n\n* You decided extract the sentences and for some obscure reason, you obtained these sentences:\n\n\n* You now wish to correct the sentence \"m a medical doct\".\n\nHere is the single preprocessing step for the model:\n\n\n\nExplanation:</br>\n* We are telling the model to repair the sentence with the prefix \"repair_sentence: \"\n* Then append the sentence we want to repair sentence[1] which is \"m a medical doct\"\n* Next we give some context to the model. In the case, the context is some text that occured before the sentence and some text that appeard after the sentence in the original text.\n * To do that, we append the keyword \"context :\"\n * Append {sentence[0]} \"{That is my job I a}\". (Note how it is sourrounded by curly braces).\n * Append {sentence[2]} \"{I save lives}\". \n* At last we tell the model this is the end of the input with </s>.\n\n\n\n<br/>\n\nThe context is optional, so the input could also be",
"### 6.2 Inference",
"## 7. Fine-tuning\nWe also provide a script 'train_any_t5_task.py' that might help you fine-tune any Text2Text Task with T5. We added #TODO comments all over to help you use train with ease. For example:\n\n \nIf you don't want to read the #TODO comments, just pass in your data like this\n\n\nand voila!! Please feel free to correct any mistakes in the code and make a pull request.",
"## 8. Attribution\n* Huggingface transformer lib for making this possible\n* Abhishek Kumar Mishra's transformer tutorial on text summarisation. Our training code is just a modified version of their code. So many thanks.\n* We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the authors\n* We also read a lot of work from Suraj Patil\n* No one has been forgotten, hopefully :)"
] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Sentence-Doctor\nSentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. Model works on English, German and French text.",
"## 1. Problem:\nMany NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and Sentence Boundary Detection\nAs a consequence errors caused by these tasks in your NLP pipeline can affect the quality of models in applications. Especially since models are often trained on clean input.",
"## 2. Solution:\nHere we provide a model that attempts to reconstruct sentences based on the its context (sourrounding text). The task is pretty straightforward:\n* 'Given an \"erroneous\" sentence, and its context, reconstruct the \"intended\" sentence'.",
"## 3. Use Cases:\n* Attempt to repair noisy sentences that where extracted with OCR software or text extractors.\n* Attempt to repair sentence boundaries.\n * Example (in German): Input: \"und ich bin im\", \n * Prefix_Context: \"Hallo! Mein Name ist John\", Postfix_Context: \"Januar 1990 geboren.\"\n * Output: \"John und ich bin im Jahr 1990 geboren\"\n* Possibly sentence level spelling correction -- Although this is not the intended use.\n * Input: \"I went to church las yesteday\" => Output: \"I went to church last Sunday\".",
"## 4. Disclaimer\nNote how we always emphises on the word *attempt*. The current version of the model was only trained on 150K sentences from the tatoeba dataset: URL (50K per language -- En, Fr, De).\nHence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data.",
"## 5. Datasets\nWe generated synthetic data from the tatoeba dataset: URL Randomly applying different transformations on words and characters based on some probabilities. The datasets are available in the data folder (where sentence_doctor_dataset_300K is a larger dataset with 100K sentences for each language).",
"## 6. Usage",
"### 6.1 Preprocessing\n* Let us assume we have the following text (Note that there are no punctuation marks in the text):\n\n\n* You decided extract the sentences and for some obscure reason, you obtained these sentences:\n\n\n* You now wish to correct the sentence \"m a medical doct\".\n\nHere is the single preprocessing step for the model:\n\n\n\nExplanation:</br>\n* We are telling the model to repair the sentence with the prefix \"repair_sentence: \"\n* Then append the sentence we want to repair sentence[1] which is \"m a medical doct\"\n* Next we give some context to the model. In the case, the context is some text that occured before the sentence and some text that appeard after the sentence in the original text.\n * To do that, we append the keyword \"context :\"\n * Append {sentence[0]} \"{That is my job I a}\". (Note how it is sourrounded by curly braces).\n * Append {sentence[2]} \"{I save lives}\". \n* At last we tell the model this is the end of the input with </s>.\n\n\n\n<br/>\n\nThe context is optional, so the input could also be",
"### 6.2 Inference",
"## 7. Fine-tuning\nWe also provide a script 'train_any_t5_task.py' that might help you fine-tune any Text2Text Task with T5. We added #TODO comments all over to help you use train with ease. For example:\n\n \nIf you don't want to read the #TODO comments, just pass in your data like this\n\n\nand voila!! Please feel free to correct any mistakes in the code and make a pull request.",
"## 8. Attribution\n* Huggingface transformer lib for making this possible\n* Abhishek Kumar Mishra's transformer tutorial on text summarisation. Our training code is just a modified version of their code. So many thanks.\n* We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the authors\n* We also read a lot of work from Suraj Patil\n* No one has been forgotten, hopefully :)"
] |
null |
transformers
|
# flexudy-pipe-question-generation-v2
After transcribing your audio with Wav2Vec2, you might be interested in a post-processor.
All paragraphs had at most 128 tokens (separated by white spaces).
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = "flexudy/t5-small-wav2vec2-grammar-fixer"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
sent = """GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"""
input_text = "fix: { " + sent + " } </s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=256, truncation=True, add_special_tokens=True)
outputs = model.generate(
input_ids=input_ids,
max_length=256,
num_beams=4,
repetition_penalty=1.0,
length_penalty=1.0,
early_stopping=True
)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"{sentence}")
```
INPUT 1:
```
WHEN ARE YOU COMING TOMORROW I AM ASKING BECAUSE OF THE MONEY YOU OWE ME PLEASE GIVE IT TO ME I AM WAITING YOU HAVE BEEN AVOIDING ME SINCE TWO THOUSAND AND THREE
```
OUTPUT 1:
```
When are you coming tomorrow? I am asking because of the money you owe me, please give it to me. I am waiting. You have been avoiding me since 2003.
```
INPUT 2:
```
GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS
```
OUTPUT 2:
```
Going along Slushy Country Roads and speaking to Damp audiences in Draughty School rooms day after day for a fortnight, he'll have to put in an appearance at some place of worship on Sunday morning and he can come to us immediately afterwards.
```
I strongly recommend improving the performance via further fine-tuning or by training on more examples.
- Possible quick rule-based improvement: Align the transcribed version and the generated version. If the similarity of two words (case-insensitive) varies by more than some threshold based on some similarity metric (e.g. Levenshtein), then keep the transcribed word (a rough sketch of this idea follows).
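Here is a rough sketch of such a merge, using `difflib`'s ratio as a stand-in similarity metric (the threshold and the word-by-word alignment are simplifications, not the recommended Levenshtein setup):
```python
from difflib import SequenceMatcher

def merge_transcript(transcribed: str, generated: str, threshold: float = 0.5) -> str:
    """Keep the transcribed word whenever the fixed word drifted too far from it."""
    merged = []
    # zip() assumes both versions contain the same number of words; a real pipeline
    # would first align them, e.g. with an edit-distance based alignment.
    for t_word, g_word in zip(transcribed.split(), generated.split()):
        similarity = SequenceMatcher(None, t_word.lower(), g_word.lower()).ratio()
        merged.append(g_word if similarity >= threshold else t_word)
    return " ".join(merged)
```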
|
{}
|
flexudy/t5-small-wav2vec2-grammar-fixer
| null |
[
"transformers",
"pytorch",
"tf",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #endpoints_compatible #has_space #region-us
|
# flexudy-pipe-question-generation-v2
After transcribing your audio with Wav2Vec2, you might be interested in a post processor.
All paragraphs had at most 128 tokens (separated by white spaces)
INPUT 1:
OUTPUT 1:
INPUT 2:
OUTPUT 2:
I strongly recommend improving the performance via further fine-tuning or by training more examples.
- Possible Quick Rule based improvements: Align the transcribed version and the generated version. If the similarity of two words (case-insensitive) vary by more than some threshold based on some similarity metric (e.g. Levenshtein), then keep the transcribed word.
|
[
"# flexudy-pipe-question-generation-v2\nAfter transcribing your audio with Wav2Vec2, you might be interested in a post processor.\n\nAll paragraphs had at most 128 tokens (separated by white spaces)\n\n\n\nINPUT 1:\n\nOUTPUT 1:\n\n\nINPUT 2:\n\n\nOUTPUT 2:\n\nI strongly recommend improving the performance via further fine-tuning or by training more examples.\n- Possible Quick Rule based improvements: Align the transcribed version and the generated version. If the similarity of two words (case-insensitive) vary by more than some threshold based on some similarity metric (e.g. Levenshtein), then keep the transcribed word."
] |
[
"TAGS\n#transformers #pytorch #tf #endpoints_compatible #has_space #region-us \n",
"# flexudy-pipe-question-generation-v2\nAfter transcribing your audio with Wav2Vec2, you might be interested in a post processor.\n\nAll paragraphs had at most 128 tokens (separated by white spaces)\n\n\n\nINPUT 1:\n\nOUTPUT 1:\n\n\nINPUT 2:\n\n\nOUTPUT 2:\n\nI strongly recommend improving the performance via further fine-tuning or by training more examples.\n- Possible Quick Rule based improvements: Align the transcribed version and the generated version. If the similarity of two words (case-insensitive) vary by more than some threshold based on some similarity metric (e.g. Levenshtein), then keep the transcribed word."
] |
text-generation
|
transformers
|
@Rick from Rick and Morty GPT-2 Conversation Model
---
|
{"tags": "conversational"}
|
flooptherocket/DialogGPT-small-rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
@Rick from Rick and Morty GPT-2 Conversation Model
---
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
example outputs:
input: ich liebe das leben --> output: Ich liebe das Leben.
input: es ist schön so viele tolle menschen um sich zu haben denn ohne sie wäre es nicht so schön --> output: Es ist schön, so viele tolle Menschen, um sich zu haben, denn ohne sie wäre es nicht so schön.
input: der kunde hat ausdrücklich nach dirk verlangt weil er den rabatt haben möchte --> output: Der Kunde hat ausdrücklich nach Dirk verlangt, weil er den Rabatt haben möchte.
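A minimal inference sketch (assuming the Hub id `aware-ai/byt5-german-grammar` of this repository and the `correct german grammar: ` prefix used in the widget configuration):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "aware-ai/byt5-german-grammar"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# prefix taken from the widget example of this model card
text = "correct german grammar: ich liebe das leben"
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# expected, per the examples above: "Ich liebe das Leben."
```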
The data can be prepared like this:
The `broken_text` is used as input, while the `text` is the output.
```python
import re
import phonetics
import random
chars_to_ignore_regex = "[^A-Za-z0-9\ö\ä\ü\Ö\Ä\Ü\ß\-,;.:?! ]+"
broken_chars_to_ignore_regex = "[^A-Za-z0-9\ö\ä\ü\Ö\Ä\Ü\ß\- ]+"
def do_manipulation(string):
    text = re.sub(chars_to_ignore_regex, '', string)
    broken_text = re.sub(broken_chars_to_ignore_regex, "", text.lower())

    if(random.randint(0,100) >= 50):
        for xyz in range(int(len(broken_text.split(" "))/4)):
            if(random.randint(0,100) > 30):
                randc = random.choice(broken_text.split(" "))

                if(random.randint(0,10) > 4):
                    broken_text = broken_text.replace(randc, ''.join(random.choice('abcdefghijklmnopqrstuvxyz') for _ in range(len(randc))).lower())
                else:
                    broken_text = broken_text.replace(randc, phonetics.metaphone(randc).lower())

    return text, broken_text
```
|
{"language": "de", "tags": ["grammar"], "widget": [{"text": "correct german grammar: es ist sch\u00f6n so viele tolle menschen um sich zu haben denn ohne sie w\u00e4re es nicht so sch\u00f6n"}]}
|
aware-ai/byt5-german-grammar
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"grammar",
"de",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #grammar #de #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
example outputs:
input: ich liebe das leben --> output: Ich liebe das Leben.
input: es ist schön so viele tolle menschen um sich zu haben denn ohne sie wäre es nicht so schön --> output: Es ist schön, so viele tolle Menschen, um sich zu haben, denn ohne sie wäre es nicht so schön.
input: der kunde hat ausdrücklich nach dirk verlangt weil er den rabatt haben möchte --> output: Der Kunde hat ausdrücklich nach Dirk verlangt, weil er den Rabatt haben möchte.
the data can be prepared like this:
the broken_text is used as input, while the text is the output
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #grammar #de #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-skills
This model is a fine-tuned version of [flozi00/t5-skills](https://huggingface.co/flozi00/t5-skills) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
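For reference, a minimal sketch mapping these hyperparameters onto `TrainingArguments` (the dataset, tokenizer and `Trainer` wiring are not documented here and are omitted):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="t5-skills",
    learning_rate=5e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
)
```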
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.14.0
- Tokenizers 0.10.2
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "t5-skills", "results": []}]}
|
aware-ai/t5-skills
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-skills
This model is a fine-tuned version of flozi00/t5-skills on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.14.0
- Tokenizers 0.10.2
|
[
"# t5-skills\n\nThis model is a fine-tuned version of flozi00/t5-skills on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.8.1\n- Datasets 1.14.0\n- Tokenizers 0.10.2"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-skills\n\nThis model is a fine-tuned version of flozi00/t5-skills on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.8.1\n- Datasets 1.14.0\n- Tokenizers 0.10.2"
] |
automatic-speech-recognition
|
transformers
|
**Test Result**
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| flozi00/wav2vec2-large-xlsr-53-german-with-lm | **5.7467896819046755%** | **1.8980142607670552%** |
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice.
```python
import torchaudio.functional as F
import torch
from transformers import AutoModelForCTC, AutoProcessor
import re
from datasets import load_dataset, load_metric
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
counter = 0
wer_counter = 0
cer_counter = 0
def main():
    model = AutoModelForCTC.from_pretrained("flozi00/wav2vec2-large-xlsr-53-german-with-lm")
    processor = AutoProcessor.from_pretrained("flozi00/wav2vec2-large-xlsr-53-german-with-lm")

    wer = load_metric("wer")
    cer = load_metric("cer")

    ds = load_dataset("common_voice", "de", split="test")
    #ds = ds.select(range(100))

    def calculate_metrics(batch):
        global counter, wer_counter, cer_counter
        resampled_audio = F.resample(torch.tensor(batch["audio"]["array"]), 48_000, 16_000).numpy()

        input_values = processor(resampled_audio, return_tensors="pt", sampling_rate=16_000).input_values

        with torch.no_grad():
            logits = model(input_values).logits.numpy()[0]

        decoded = processor.decode(logits)
        pred = decoded.text

        ref = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()

        wer_result = wer.compute(predictions=[pred], references=[ref])
        cer_result = cer.compute(predictions=[pred], references=[ref])

        counter += 1
        wer_counter += wer_result
        cer_counter += cer_result

        print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")

        return batch

    ds.map(calculate_metrics, remove_columns=ds.column_names)
main()
```
Credits:
The acoustic model is a copy of [jonatasgrosman's model](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german), for which I trained a matching KenLM language model.
|
{"language": "de", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 German with LM by Florian Zimmermeister @A\\\\Ware", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice de", "type": "common_voice", "args": "de"}, "metrics": [{"type": "wer", "value": 5.7467896819046755, "name": "Test WER"}, {"type": "cer", "value": 1.8980142607670552, "name": "Test CER"}]}]}]}
|
aware-ai/wav2vec2-large-xlsr-53-german-with-lm
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"de",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hf-asr-leaderboard #de #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Test Result
Model: flozi00/wav2vec2-large-xlsr-53-german-with-lm, WER: 5.7467896819046755%, CER: 1.8980142607670552%
Evaluation
----------
The model can be evaluated as follows on the German test data of Common Voice.
Credits:
The acoustic model is a copy of jonatasgrosman's model, for which I trained a matching KenLM language model.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hf-asr-leaderboard #de #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
### Model Description
GPT-J 6B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-J refers to the class of models, while 6B represents the number of parameters of this particular pre-trained model.
The original GPT-J-6B model is trained with TPUs, which is not easy to use for normal users. Thus, through a conversion script, we converted the TPU version of GPT-J-6B into a GPU version, which can be loaded and fine-tuned on GPUs.
In our tests, the model can be loaded on a single GPU with 16 GB of memory for inference. For fine-tuning, we used 8 × 32 GB GPUs with the DeepSpeed library to distribute the model, data and gradients, in order to accommodate the huge number of model parameters.
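A minimal single-GPU inference sketch under those constraints (half precision so the 6B parameters fit into 16 GB; the prompt and generation settings are placeholders, and the repository is assumed to ship tokenizer files):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flyhero/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 roughly halves the memory footprint compared to float32
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_length=50, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```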
|
{}
|
flyhero/gpt-j-6B
| null |
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Model Description
GPT-J 6B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-J refers to the class of models, while 6B represents the number of parameters of this particular pre-trained model.
The original GPT-J-6B model is trained with TPUs, which is not easy to use for normal users. Thus, through a converting script, we convert the TPU version GPT-J-6B into GPU version, which could be load and fine-tuned with GPUs.
As we have tried, the model can be loaded with 1 GPU with 16G memory to do inference. For fine-tune, we used 8 * 32G GPUs with DeepSpeed library to distribute the model, data and gradients, in order to allocate the huge amount of model parameters.
|
[
"### Model Description\nGPT-J 6B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-J refers to the class of models, while 6B represents the number of parameters of this particular pre-trained model.\n\nThe original GPT-J-6B model is trained with TPUs, which is not easy to use for normal users. Thus, through a converting script, we convert the TPU version GPT-J-6B into GPU version, which could be load and fine-tuned with GPUs.\n\nAs we have tried, the model can be loaded with 1 GPU with 16G memory to do inference. For fine-tune, we used 8 * 32G GPUs with DeepSpeed library to distribute the model, data and gradients, in order to allocate the huge amount of model parameters."
] |
[
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Model Description\nGPT-J 6B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-J refers to the class of models, while 6B represents the number of parameters of this particular pre-trained model.\n\nThe original GPT-J-6B model is trained with TPUs, which is not easy to use for normal users. Thus, through a converting script, we convert the TPU version GPT-J-6B into GPU version, which could be load and fine-tuned with GPUs.\n\nAs we have tried, the model can be loaded with 1 GPU with 16G memory to do inference. For fine-tune, we used 8 * 32G GPUs with DeepSpeed library to distribute the model, data and gradients, in order to allocate the huge amount of model parameters."
] |
text2text-generation
|
transformers
|
# Chinese BART-Base
### News
**12/30/2022**
An updated version of CPT & Chinese BART has been released. In the new version, we changed the following parts:
- **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
- **Position Embeddings** We extend the max_position_embeddings from 512 to 1024.
We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
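A hypothetical sketch of that vocabulary-aligned initialisation (`old_vocab` and `new_vocab` map tokens to ids; the models and vocabularies themselves are placeholders here):
```python
import torch

def copy_aligned_embeddings(old_model, new_model, old_vocab, new_vocab):
    """Copy embeddings of tokens shared by both vocabularies; newly added
    tokens keep their random initialisation."""
    old_emb = old_model.get_input_embeddings().weight.data
    new_emb = new_model.get_input_embeddings().weight.data
    with torch.no_grad():
        for token, new_id in new_vocab.items():
            old_id = old_vocab.get(token)
            if old_id is not None:
                new_emb[new_id] = old_emb[old_id]
```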
The results compared to the previous checkpoints are as follows:
| | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG |
| :--------- | :---: | :-----: | :-----: | :---: | :---: |
| Previous | | | | | |
| bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 |
| cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 |
| bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 |
| cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 |
| Updated | | | | | |
| bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 |
| cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 |
| bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 |
| cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 |
The results show that the updated models maintain comparable performance to the previous checkpoints. There are still some cases in which the updated model is slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
- Note that to use updated models, please update the `modeling_cpt.py` (new version download [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache).
## Model description
This is an implementation of Chinese BART-Base.
[**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf)
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
**Github Link:** https://github.com/fastnlp/CPT
## Usage
```python
>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/bart-base-chinese")
>>> model = BartForConditionalGeneration.from_pretrained("fnlp/bart-base-chinese")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("北京是[MASK]的首都", max_length=50, do_sample=False)
[{'generated_text': '北 京 是 中 国 的 首 都'}]
```
**Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.**
## Citation
```bibtex
@article{shao2021cpt,
title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
journal={arXiv preprint arXiv:2109.05729},
year={2021}
}
```
|
{"language": "zh", "tags": ["text2text-generation", "Chinese", "seq2seq", "BART"]}
|
fnlp/bart-base-chinese
| null |
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"Chinese",
"seq2seq",
"BART",
"zh",
"arxiv:2109.05729",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.05729"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #Chinese #seq2seq #BART #zh #arxiv-2109.05729 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Chinese BART-Base
=================
### News
12/30/2022
An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:
* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
* Position Embeddings We extend the max\_position\_embeddings from 512 to 1024.
We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
The result compared to the previous checkpoints is as followings:
The result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.
* Note that to use updated models, please update the 'modeling\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).
Model description
-----------------
This is an implementation of Chinese BART-Base.
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
Github Link: URL
Usage
-----
Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.
|
[
"### News\n\n\n12/30/2022\n\n\nAn updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:\n\n\n* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.\n* Position Embeddings We extend the max\\_position\\_embeddings from 512 to 1024.\n\n\nWe initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.\n\n\nThe result compared to the previous checkpoints is as followings:\n\n\n\nThe result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.\n\n\n* Note that to use updated models, please update the 'modeling\\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).\n\n\nModel description\n-----------------\n\n\nThis is an implementation of Chinese BART-Base.\n\n\nCPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation\n\n\nYunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu\n\n\nGithub Link: URL\n\n\nUsage\n-----\n\n\nNote: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #Chinese #seq2seq #BART #zh #arxiv-2109.05729 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### News\n\n\n12/30/2022\n\n\nAn updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:\n\n\n* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.\n* Position Embeddings We extend the max\\_position\\_embeddings from 512 to 1024.\n\n\nWe initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.\n\n\nThe result compared to the previous checkpoints is as followings:\n\n\n\nThe result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.\n\n\n* Note that to use updated models, please update the 'modeling\\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).\n\n\nModel description\n-----------------\n\n\nThis is an implementation of Chinese BART-Base.\n\n\nCPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation\n\n\nYunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu\n\n\nGithub Link: URL\n\n\nUsage\n-----\n\n\nNote: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer."
] |
text2text-generation
|
transformers
|
# Chinese BART-Large
### News
**12/30/2022**
An updated version of CPT & Chinese BART has been released. In the new version, we changed the following parts:
- **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
- **Position Embeddings** We extend the max_position_embeddings from 512 to 1024.
We initialize the new models from the old checkpoints using vocabulary alignment: token embeddings found in the old checkpoints are copied, and other newly added parameters, such as the extended position embeddings, are randomly initialized (a sketch of the position-embedding extension is shown below). We further train the new CPT & Chinese BART for 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
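For the extended position embeddings, the same principle applies: all previously learned positions can be copied from the old checkpoint and the extra positions left randomly initialized. The snippet below is a minimal sketch that assumes a Hugging Face-style BART state dict; it is not the authors' conversion script.
```python
import torch

# Minimal sketch of extending learned position embeddings (the checkpoint key is an assumption).
state = torch.load("old_bart_checkpoint.bin", map_location="cpu")
old_pos = state["model.encoder.embed_positions.weight"]           # (old_max_positions, hidden)
extra = 1024 - 512                                                 # number of newly added position slots
new_pos = torch.empty(old_pos.size(0) + extra, old_pos.size(1)).normal_(std=0.02)
new_pos[: old_pos.size(0)] = old_pos                               # keep all previously learned positions
state["model.encoder.embed_positions.weight"] = new_pos            # the decoder side is handled the same way
```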
The results compared to the previous checkpoints are as follows:
| | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG |
| :--------- | :---: | :-----: | :-----: | :---: | :---: |
| Previous | | | | | |
| bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 |
| cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 |
| bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 |
| cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 |
| Updated | | | | | |
| bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 |
| cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 |
| bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 |
| cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 |
The results show that the updated models maintain performance comparable to the previous checkpoints. There are still some cases in which an updated model is slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
- Note that to use the updated models, please update `modeling_cpt.py` (download the new version [here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache).
## Model description
This is an implementation of Chinese BART-Large.
[**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf)
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
**Github Link:** https://github.com/fastnlp/CPT
## Usage
```python
>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/bart-large-chinese")
>>> model = BartForConditionalGeneration.from_pretrained("fnlp/bart-large-chinese")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("北京是[MASK]的首都", max_length=50, do_sample=False)
[{'generated_text': '北 京 是 中 华 人 民 共 和 国 的 首 都'}]
```
**Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.**
## Citation
```bibtex
@article{shao2021cpt,
title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
journal={arXiv preprint arXiv:2109.05729},
year={2021}
}
```
|
{"language": "zh", "tags": ["text2text-generation", "Chinese", "seq2seq"]}
|
fnlp/bart-large-chinese
| null |
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"Chinese",
"seq2seq",
"zh",
"arxiv:2109.05729",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.05729"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #Chinese #seq2seq #zh #arxiv-2109.05729 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Chinese BART-Large
==================
### News
12/30/2022
An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:
* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
* Position Embeddings We extend the max\_position\_embeddings from 512 to 1024.
We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
The result compared to the previous checkpoints is as followings:
The result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.
* Note that to use updated models, please update the 'modeling\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).
Model description
-----------------
This is an implementation of Chinese BART-Large.
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
Github Link: URL
Usage
-----
Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.
|
[
"### News\n\n\n12/30/2022\n\n\nAn updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:\n\n\n* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.\n* Position Embeddings We extend the max\\_position\\_embeddings from 512 to 1024.\n\n\nWe initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.\n\n\nThe result compared to the previous checkpoints is as followings:\n\n\n\nThe result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.\n\n\n* Note that to use updated models, please update the 'modeling\\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).\n\n\nModel description\n-----------------\n\n\nThis is an implementation of Chinese BART-Large.\n\n\nCPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation\n\n\nYunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu\n\n\nGithub Link: URL\n\n\nUsage\n-----\n\n\nNote: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #Chinese #seq2seq #zh #arxiv-2109.05729 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### News\n\n\n12/30/2022\n\n\nAn updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:\n\n\n* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.\n* Position Embeddings We extend the max\\_position\\_embeddings from 512 to 1024.\n\n\nWe initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.\n\n\nThe result compared to the previous checkpoints is as followings:\n\n\n\nThe result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.\n\n\n* Note that to use updated models, please update the 'modeling\\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).\n\n\nModel description\n-----------------\n\n\nThis is an implementation of Chinese BART-Large.\n\n\nCPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation\n\n\nYunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu\n\n\nGithub Link: URL\n\n\nUsage\n-----\n\n\nNote: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer."
] |
text2text-generation
|
transformers
|
# Chinese CPT-Base
### News
**12/30/2022**
An updated version of CPT & Chinese BART has been released. In the new version, we changed the following parts:
- **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
- **Position Embeddings** We extend the max_position_embeddings from 512 to 1024.
We initialize the new models from the old checkpoints using vocabulary alignment: token embeddings found in the old checkpoints are copied, and other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART for 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
The results compared to the previous checkpoints are as follows:
| | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG |
| :--------- | :---: | :-----: | :-----: | :---: | :---: |
| Previous | | | | | |
| bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 |
| cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 |
| bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 |
| cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 |
| Updated | | | | | |
| bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 |
| cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 |
| bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 |
| cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 |
The results show that the updated models maintain performance comparable to the previous checkpoints. There are still some cases in which an updated model is slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
- Note that to use the updated models, please update `modeling_cpt.py` (download the new version [here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache).
## Model description
This is an implementation of CPT-Base. To use CPT, please add the file `modeling_cpt.py` (**Download** [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)), which defines the architecture of CPT, to your project and import from it.
[**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf)
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
**Github Link:** https://github.com/fastnlp/CPT
## Usage
```python
>>> from modeling_cpt import CPTForConditionalGeneration
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
>>> model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-base")
>>> input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors='pt')
>>> pred_ids = model.generate(input_ids, num_beams=4, max_length=20)
>>> print(tokenizer.convert_ids_to_tokens(pred_ids[0]))
['[SEP]', '[CLS]', '北', '京', '是', '中', '国', '的', '首', '都', '[SEP]']
```
**Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.**
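Beyond generation, the shared encoder of CPT can be fine-tuned for understanding tasks. The sketch below assumes that your copy of `modeling_cpt.py` also exposes a `CPTForSequenceClassification` class; if it does not, the encoder outputs of `CPTModel` can be pooled and fed to a linear layer instead.
```python
>>> from modeling_cpt import CPTForSequenceClassification   # assumed to be defined in modeling_cpt.py
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
>>> model = CPTForSequenceClassification.from_pretrained("fnlp/cpt-base", num_labels=2)
>>> inputs = tokenizer("北京是中国的首都", return_tensors="pt")
>>> logits = model(**inputs).logits   # shape (1, num_labels); the head is random until fine-tuned
```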
## Citation
```bibtex
@article{shao2021cpt,
title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
journal={arXiv preprint arXiv:2109.05729},
year={2021}
}
```
|
{"language": "zh", "initializedtags": ["fill-mask", "text2text-generation", "fill-mask", "text-classification", "Summarization", "Chinese", "CPT", "BART", "BERT", "seq2seq"]}
|
fnlp/cpt-base
| null |
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"zh",
"arxiv:2109.05729",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.05729"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #zh #arxiv-2109.05729 #autotrain_compatible #endpoints_compatible #region-us
|
Chinese CPT-Base
================
### News
12/30/2022
An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:
* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
* Position Embeddings We extend the max\_position\_embeddings from 512 to 1024.
We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
The result compared to the previous checkpoints is as followings:
The result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.
* Note that to use updated models, please update the 'modeling\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).
Model description
-----------------
This is an implementation of CPT-Base. To use CPT, please import the file 'modeling\_cpt.py' (Download Here) that define the architecture of CPT into your project.
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
Github Link: URL
Usage
-----
Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.
|
[
"### News\n\n\n12/30/2022\n\n\nAn updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:\n\n\n* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.\n* Position Embeddings We extend the max\\_position\\_embeddings from 512 to 1024.\n\n\nWe initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.\n\n\nThe result compared to the previous checkpoints is as followings:\n\n\n\nThe result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.\n\n\n* Note that to use updated models, please update the 'modeling\\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).\n\n\nModel description\n-----------------\n\n\nThis is an implementation of CPT-Base. To use CPT, please import the file 'modeling\\_cpt.py' (Download Here) that define the architecture of CPT into your project.\n\n\nCPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation\n\n\nYunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu\n\n\nGithub Link: URL\n\n\nUsage\n-----\n\n\nNote: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #zh #arxiv-2109.05729 #autotrain_compatible #endpoints_compatible #region-us \n",
"### News\n\n\n12/30/2022\n\n\nAn updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:\n\n\n* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.\n* Position Embeddings We extend the max\\_position\\_embeddings from 512 to 1024.\n\n\nWe initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.\n\n\nThe result compared to the previous checkpoints is as followings:\n\n\n\nThe result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.\n\n\n* Note that to use updated models, please update the 'modeling\\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).\n\n\nModel description\n-----------------\n\n\nThis is an implementation of CPT-Base. To use CPT, please import the file 'modeling\\_cpt.py' (Download Here) that define the architecture of CPT into your project.\n\n\nCPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation\n\n\nYunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu\n\n\nGithub Link: URL\n\n\nUsage\n-----\n\n\nNote: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer."
] |
text-classification
|
transformers
|
# Chinese CPT-Large
### News
**12/30/2022**
An updated version of CPT & Chinese BART has been released. In the new version, we changed the following parts:
- **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
- **Position Embeddings** We extend the max_position_embeddings from 512 to 1024.
We initialize the new models from the old checkpoints using vocabulary alignment: token embeddings found in the old checkpoints are copied, and other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART for 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
The results compared to the previous checkpoints are as follows:
| | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG |
| :--------- | :---: | :-----: | :-----: | :---: | :---: |
| Previous | | | | | |
| bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 |
| cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 |
| bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 |
| cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 |
| Updated | | | | | |
| bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 |
| cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 |
| bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 |
| cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 |
The results show that the updated models maintain performance comparable to the previous checkpoints. There are still some cases in which an updated model is slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
- Note that to use the updated models, please update `modeling_cpt.py` (download the new version [here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache).
## Model description
This is an implementation of CPT-Large. To use CPT, please add the file `modeling_cpt.py` (**Download** [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)), which defines the architecture of CPT, to your project and import from it.
[**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf)
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
**Github Link:** https://github.com/fastnlp/CPT
## Usage
```python
>>> from modeling_cpt import CPTForConditionalGeneration
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-large")
>>> model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-large")
>>> input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors='pt')
>>> pred_ids = model.generate(input_ids, num_beams=4, max_length=20)
>>> print(tokenizer.convert_ids_to_tokens(pred_ids[0]))
['[SEP]', '[CLS]', '北', '京', '是', '中', '国', '的', '首', '都', '[SEP]']
```
**Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.**
## Citation
```bibtex
@article{shao2021cpt,
title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
journal={arXiv preprint arXiv:2109.05729},
year={2021}
}
```
|
{"language": "zh", "tags": ["fill-mask", "text2text-generation", "fill-mask", "text-classification", "Summarization", "Chinese", "CPT", "BART", "BERT", "seq2seq"]}
|
fnlp/cpt-large
| null |
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"fill-mask",
"text-classification",
"Summarization",
"Chinese",
"CPT",
"BART",
"BERT",
"seq2seq",
"zh",
"arxiv:2109.05729",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.05729"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #fill-mask #text-classification #Summarization #Chinese #CPT #BART #BERT #seq2seq #zh #arxiv-2109.05729 #autotrain_compatible #endpoints_compatible #region-us
|
Chinese CPT-Large
=================
### News
12/30/2022
An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:
* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
* Position Embeddings We extend the max\_position\_embeddings from 512 to 1024.
We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
The result compared to the previous checkpoints is as followings:
The result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.
* Note that to use updated models, please update the 'modeling\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).
Model description
-----------------
This is an implementation of CPT-Large. To use CPT, please import the file 'modeling\_cpt.py' (Download Here) that define the architecture of CPT into your project.
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
Github Link: URL
Usage
-----
Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.
|
[
"### News\n\n\n12/30/2022\n\n\nAn updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:\n\n\n* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.\n* Position Embeddings We extend the max\\_position\\_embeddings from 512 to 1024.\n\n\nWe initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.\n\n\nThe result compared to the previous checkpoints is as followings:\n\n\n\nThe result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.\n\n\n* Note that to use updated models, please update the 'modeling\\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).\n\n\nModel description\n-----------------\n\n\nThis is an implementation of CPT-Large. To use CPT, please import the file 'modeling\\_cpt.py' (Download Here) that define the architecture of CPT into your project.\n\n\nCPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation\n\n\nYunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu\n\n\nGithub Link: URL\n\n\nUsage\n-----\n\n\nNote: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #fill-mask #text-classification #Summarization #Chinese #CPT #BART #BERT #seq2seq #zh #arxiv-2109.05729 #autotrain_compatible #endpoints_compatible #region-us \n",
"### News\n\n\n12/30/2022\n\n\nAn updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:\n\n\n* Vocabulary We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.\n* Position Embeddings We extend the max\\_position\\_embeddings from 512 to 1024.\n\n\nWe initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.\n\n\nThe result compared to the previous checkpoints is as followings:\n\n\n\nThe result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters.\n\n\n* Note that to use updated models, please update the 'modeling\\_cpt.py' (new version download Here) and the vocabulary (refresh the cache).\n\n\nModel description\n-----------------\n\n\nThis is an implementation of CPT-Large. To use CPT, please import the file 'modeling\\_cpt.py' (Download Here) that define the architecture of CPT into your project.\n\n\nCPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation\n\n\nYunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu\n\n\nGithub Link: URL\n\n\nUsage\n-----\n\n\nNote: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer."
] |
fill-mask
|
transformers
|
# ElasticBERT-BASE
## Model description
This is an implementation of the `base` version of ElasticBERT.
[**Towards Efficient NLP: A Standard Evaluation and A Strong Baseline**](https://arxiv.org/pdf/2110.07038.pdf)
Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu
## Code link
[**fastnlp/elasticbert**](https://github.com/fastnlp/ElasticBERT)
## Usage
```python
>>> from transformers import BertTokenizer as ElasticBertTokenizer
>>> from models.configuration_elasticbert import ElasticBertConfig
>>> from models.modeling_elasticbert import ElasticBertForSequenceClassification
>>> num_output_layers = 1
>>> config = ElasticBertConfig.from_pretrained('fnlp/elasticbert-base', num_output_layers=num_output_layers )
>>> tokenizer = ElasticBertTokenizer.from_pretrained('fnlp/elasticbert-base')
>>> model = ElasticBertForSequenceClassification.from_pretrained('fnlp/elasticbert-base', config=config)
>>> input_ids = tokenizer.encode('The actors are fantastic .', return_tensors='pt')
>>> outputs = model(input_ids)
```
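For batched inputs, the tokenizer's standard call can be used instead of `encode`. The lines below are a sketch that assumes the model accepts the usual `input_ids`/`attention_mask` keyword arguments and returns the classification logits first, as Hugging Face models normally do; the classification head is randomly initialized until the model is fine-tuned.
```python
>>> import torch
>>> batch = tokenizer(["The actors are fantastic .", "The plot is dull ."],
...                   padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
>>> logits = outputs[0]   # assumed output layout: (batch_size, num_labels)
```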
## Citation
```bibtex
@article{liu2021elasticbert,
author = {Xiangyang Liu and
Tianxiang Sun and
Junliang He and
Lingling Wu and
Xinyu Zhang and
Hao Jiang and
Zhao Cao and
Xuanjing Huang and
Xipeng Qiu},
title = {Towards Efficient {NLP:} {A} Standard Evaluation and {A} Strong Baseline},
journal = {CoRR},
volume = {abs/2110.07038},
year = {2021},
url = {https://arxiv.org/abs/2110.07038},
eprinttype = {arXiv},
eprint = {2110.07038},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-07038.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": "en", "tags": ["Multi-exit-BERT"], "datasets": ["wikipedia", "bookcorpus", "c4"]}
|
fnlp/elasticbert-base
| null |
[
"transformers",
"pytorch",
"elasticbert",
"fill-mask",
"Multi-exit-BERT",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"dataset:c4",
"arxiv:2110.07038",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.07038"
] |
[
"en"
] |
TAGS
#transformers #pytorch #elasticbert #fill-mask #Multi-exit-BERT #en #dataset-wikipedia #dataset-bookcorpus #dataset-c4 #arxiv-2110.07038 #autotrain_compatible #endpoints_compatible #region-us
|
# ElasticBERT-BASE
## Model description
This is an implementation of the 'base' version of ElasticBERT.
Towards Efficient NLP: A Standard Evaluation and A Strong Baseline
Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu
## Code link
fastnlp/elasticbert
## Usage
|
[
"# ElasticBERT-BASE",
"## Model description\n\nThis is an implementation of the 'base' version of ElasticBERT.\n\nTowards Efficient NLP: A Standard Evaluation and A Strong Baseline\n\nXiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu",
"## Code link\n\nfastnlp/elasticbert",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #elasticbert #fill-mask #Multi-exit-BERT #en #dataset-wikipedia #dataset-bookcorpus #dataset-c4 #arxiv-2110.07038 #autotrain_compatible #endpoints_compatible #region-us \n",
"# ElasticBERT-BASE",
"## Model description\n\nThis is an implementation of the 'base' version of ElasticBERT.\n\nTowards Efficient NLP: A Standard Evaluation and A Strong Baseline\n\nXiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu",
"## Code link\n\nfastnlp/elasticbert",
"## Usage"
] |
fill-mask
|
transformers
|
# ElasticBERT-LARGE
## Model description
This is an implementation of the `large` version of ElasticBERT.
[**Towards Efficient NLP: A Standard Evaluation and A Strong Baseline**](https://arxiv.org/pdf/2110.07038.pdf)
Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu
## Code link
[**fastnlp/elasticbert**](https://github.com/fastnlp/ElasticBERT)
## Usage
```python
>>> from transformers import BertTokenizer as ElasticBertTokenizer
>>> from models.configuration_elasticbert import ElasticBertConfig
>>> from models.modeling_elasticbert import ElasticBertForSequenceClassification
>>> num_output_layers = 1
>>> config = ElasticBertConfig.from_pretrained('fnlp/elasticbert-large', num_output_layers=num_output_layers )
>>> tokenizer = ElasticBertTokenizer.from_pretrained('fnlp/elasticbert-large')
>>> model = ElasticBertForSequenceClassification.from_pretrained('fnlp/elasticbert-large', config=config)
>>> input_ids = tokenizer.encode('The actors are fantastic .', return_tensors='pt')
>>> outputs = model(input_ids)
```
## Citation
```bibtex
@article{liu2021elasticbert,
author = {Xiangyang Liu and
Tianxiang Sun and
Junliang He and
Lingling Wu and
Xinyu Zhang and
Hao Jiang and
Zhao Cao and
Xuanjing Huang and
Xipeng Qiu},
title = {Towards Efficient {NLP:} {A} Standard Evaluation and {A} Strong Baseline},
journal = {CoRR},
volume = {abs/2110.07038},
year = {2021},
url = {https://arxiv.org/abs/2110.07038},
eprinttype = {arXiv},
eprint = {2110.07038},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-07038.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": "en", "tags": ["Multi-exit-BERT"], "datasets": ["wikipedia", "bookcorpus", "c4"]}
|
fnlp/elasticbert-large
| null |
[
"transformers",
"pytorch",
"elasticbert",
"fill-mask",
"Multi-exit-BERT",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"dataset:c4",
"arxiv:2110.07038",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.07038"
] |
[
"en"
] |
TAGS
#transformers #pytorch #elasticbert #fill-mask #Multi-exit-BERT #en #dataset-wikipedia #dataset-bookcorpus #dataset-c4 #arxiv-2110.07038 #autotrain_compatible #endpoints_compatible #region-us
|
# ElasticBERT-LARGE
## Model description
This is an implementation of the 'large' version of ElasticBERT.
Towards Efficient NLP: A Standard Evaluation and A Strong Baseline
Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu
## Code link
fastnlp/elasticbert
## Usage
|
[
"# ElasticBERT-LARGE",
"## Model description\n\nThis is an implementation of the 'large' version of ElasticBERT.\n\nTowards Efficient NLP: A Standard Evaluation and A Strong Baseline\n\nXiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu",
"## Code link\n\nfastnlp/elasticbert",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #elasticbert #fill-mask #Multi-exit-BERT #en #dataset-wikipedia #dataset-bookcorpus #dataset-c4 #arxiv-2110.07038 #autotrain_compatible #endpoints_compatible #region-us \n",
"# ElasticBERT-LARGE",
"## Model description\n\nThis is an implementation of the 'large' version of ElasticBERT.\n\nTowards Efficient NLP: A Standard Evaluation and A Strong Baseline\n\nXiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu",
"## Code link\n\nfastnlp/elasticbert",
"## Usage"
] |
text2text-generation
|
transformers
|
# bart-base-python-1m
|
{"language": "py", "license": "mit", "tags": ["bart", "pytorch"], "thumbnail": "https://avatars.githubusercontent.com/u/70610668?s=400&u=f0699303289113c125e8686338739d9a63d5826c&v=4"}
|
formermagic/bart-base-python-1m
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"py",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"py"
] |
TAGS
#transformers #pytorch #bart #text2text-generation #py #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# bart-base-python-1m
|
[
"# bart-base-python-1m"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #py #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# bart-base-python-1m"
] |
text2text-generation
|
transformers
|
# Python T5 base model
Pre-trained model on the CodeSearchNet Python dataset using a span-masking objective. The training objective and model were introduced in [this paper](https://arxiv.org/pdf/1910.10683.pdf) and first released in [this repository](https://github.com/google-research/text-to-text-transfer-transformer). The PyT5 model was pre-trained on a TPU v3-8 node using the [git-t5](https://github.com/formermagic/git-t5) framework, built on top of JAX/Flax.
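To make the span-masking objective concrete: contiguous spans of the input are replaced with sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...), and the target consists of the sentinels followed by the tokens they hide. The toy snippet below illustrates the idea on plain strings; it is not the actual corruption code used for pre-training.
```python
# Toy illustration of T5-style span corruption (not the actual pre-training code).
tokens = "def add ( a , b ) : return a + b".split()
spans = [(1, 2), (8, 10)]          # (start, end) indices of the spans to mask

inputs, targets, last = [], [], 0
for i, (start, end) in enumerate(spans):
    inputs += tokens[last:start] + [f"<extra_id_{i}>"]
    targets += [f"<extra_id_{i}>"] + tokens[start:end]
    last = end
inputs += tokens[last:]

print(" ".join(inputs))   # def <extra_id_0> ( a , b ) : <extra_id_1> + b
print(" ".join(targets))  # <extra_id_0> add <extra_id_1> return a
```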
# How to use
You can use this model to denoise span-masked sequences.
First, install the [git-t5](https://github.com/formermagic/git-t5) pip package:
```shell
> pip install git-t5
```
Next, download the model and tokenizer:
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("formermagic/pyt5-base")
tokenizer = AutoTokenizer.from_pretrained("formermagic/pyt5-base")
```
Finally, encode your input and generate the output sequence:
```python
from git_t5.utils import encode_input
text = """
def alias(self, annotationtype, set, fallback=False):
if inspect.isclass(annotationtype): annotationtype = annotationtype.ANNOTATIONTYPE
if annotationtype in self.set_alias and set in self.set_alias[annotationtype]:
return self.set_alias[annotationtype][set]
elif fallback:
return set
else:
raise KeyError("No alias for set " + set)
"""
batch, max_length = encode_input(tokenizer, text, seed=22)
outputs = model.generate(batch["input_ids"], max_length=max_length, num_beams=1)
print(tokenizer.batch_decode(outputs[..., 1:]))
print(tokenizer.batch_decode(batch["labels"]))
```
You should see the following output:
```shell
['<extra_id_0>, fallback=<extra_id_1> inspect<extra_id_2>.set_alias<extra_id_3> return self.set<extra_id_4>) def fallback']
['<extra_id_0>, fallback=<extra_id_1> inspect<extra_id_2>.set_alias<extra_id_3> return self.set<extra_id_4>) </s></s>']
```
As you can see, the predicted result is very close to the target sequence.
|
{}
|
formermagic/pyt5-base
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"arxiv:1910.10683",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.10683"
] |
[] |
TAGS
#transformers #pytorch #jax #tensorboard #t5 #text2text-generation #arxiv-1910.10683 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Python T5 base model
Pre-trained model on CodeSearchNet Python dataset using a span-masking objective. The training objective and model were introduced in this paper and first released in this repository. PyT5 model used git-t5 framework built on top of JAX/Flax to pre-train the model on a TPU v3-8 node.
# How to use
You can use this model to denoise span-masked sequences.
First, install the git-t5 pip package:
Next, download the model and tokenizer:
Finally, encode your input and generate the output sequence:
You should see the following output:
As you can see, the predicted result is very close to the target sequence.
|
[
"# Python T5 base model\n\nPre-trained model on CodeSearchNet Python dataset using a span-masking objective. The training objective and model were introduced in this paper and first released in this repository. PyT5 model used git-t5 framework built on top of JAX/Flax to pre-train the model on a TPU v3-8 node.",
"# How to use\n\nYou can use this model to denoise span-masked sequences.\n\nFirst, install the git-t5 pip package:\n\n\nNext, download the model and tokenizer:\n\n\nFinally, encode your input and generate the output sequence:\n\n\nYou should see the following output:\n\n\nAs you can see, the predicted result is very close to the target sequence."
] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #t5 #text2text-generation #arxiv-1910.10683 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Python T5 base model\n\nPre-trained model on CodeSearchNet Python dataset using a span-masking objective. The training objective and model were introduced in this paper and first released in this repository. PyT5 model used git-t5 framework built on top of JAX/Flax to pre-train the model on a TPU v3-8 node.",
"# How to use\n\nYou can use this model to denoise span-masked sequences.\n\nFirst, install the git-t5 pip package:\n\n\nNext, download the model and tokenizer:\n\n\nFinally, encode your input and generate the output sequence:\n\n\nYou should see the following output:\n\n\nAs you can see, the predicted result is very close to the target sequence."
] |
fill-mask
|
transformers
|
# roberta-base-python-1m
|
{"language": "py", "license": "mit", "tags": ["roberta", "pytorch"], "thumbnail": "https://avatars.githubusercontent.com/u/70610668?s=400&u=f0699303289113c125e8686338739d9a63d5826c&v=4"}
|
formermagic/roberta-base-python-1m
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"py",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"py"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #py #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-base-python-1m
|
[
"# roberta-base-python-1m"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #py #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-base-python-1m"
] |
null | null |
https://www.geogebra.org/m/w8uzjttg
https://www.geogebra.org/m/gvn7m78g
https://www.geogebra.org/m/arxecanq
https://www.geogebra.org/m/xb69bvww
https://www.geogebra.org/m/apvepfnd
https://www.geogebra.org/m/evmj8ckk
https://www.geogebra.org/m/qxcxwmhp
https://www.geogebra.org/m/p3cxqh6c
https://www.geogebra.org/m/ggrahbgd
https://www.geogebra.org/m/pnhymrbc
https://www.geogebra.org/m/zjukbtk9
https://www.geogebra.org/m/bbezun8r
https://www.geogebra.org/m/sgwamtru
https://www.geogebra.org/m/fpunkxxp
https://www.geogebra.org/m/acxebrr7
|
{}
|
formu/DR-Site
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
tags:
- Text2text Generation
- Conversational
- Text generation
model:
- "355M"
model-type:
- gpt2
widgets:
text_example_1:
- "One would be forgiven if one was not aware that Julian Assange is being"
title_example_1:
- "David North wsws"
text_example_2:
- "I would like to extend my sincerest greetings to the people of the world. When monstrous and absurd accusations were hurled at me and my family -- when"
title_example_2:
- "Leon Trotsky"
# GPT_2_Marxism
GPT_2_Marxism is based on the GPT-2 355M model, finetuned on a large corpus of Marxist documents, polemics and literature from historical and contemporary writers in the international socialist movement and the ICFI (Fourth International), which upholds the principles that characterize genuine revolutionary Marxism, i.e. Trotskyism. This finetuned GPT-2 model generates genuinely Marxist insights and responses.
- Generated with the GPT-2 355M model converted to PyTorch using Max Woolf's aitextgen notebook (https://github.com/minimaxir/aitextgen)
- Finetuned on a large corpus of mostly unstructured, unlabeled text: raw copy-and-paste of entire selected works.
- Able to generate genuine Marxist responses.
- Also generates insights that Marxists often agree on, like freedom and equality.
import torch
import random
# First install the library: pip3 install aitextgen
from aitextgen import aitextgen
model = aitextgen("model.pytorch.bin")  # path to the finetuned checkpoint; newer aitextgen releases take model_folder= instead
text = "one would be forgiven if one was not aware that Julian Assange is currently being"
model.generate(n=3, prompt="Lenin:"+str(text), max_length=77, temperature=random.uniform(0.5, 1.5), seed=random.randint(0, 195302), lstrip=False)
"""
Lenin:one would be forgiven if one was not aware that Julian Assange is currently being persecuted by the governments of the United States, the UK and many other countries in spite of, or perhaps because of, the fact that he is an outspoken enemy of imperialism. This not unexpected. In 2003 a law was passed in the US that allowed prosecution of those who helped the FBI to violate civil
==========
Lenin:one would be forgiven if one was not aware that Julian Assange is currently being investigated by the FBI for illegally departing Ecuador - (although I had no proof available at the time) with the purpose of, as it were, of snatching up the devious Clintonite. Indeed, such an intrusion seems all the more fishy from the standpoint of a serious study of the facts
==========
Lenin:one would be forgiven if one was not aware that Julian Assange is currently being extradited before the beginning of June to answer questions which require a presumption of guilt. This follows from the very revealing papers that WikiLeaks provided in relation to the numerous criminal cases, and of the complex international network which organised it, the publication by WikiLeaks of thousands of secret cables from the intelligence agencies of the
"""
|
{}
|
fractaldna22/GPT_2_Marxism
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
tags:
- Text2text Generation
- Conversational
- Text generation
model:
- "355M"
model-type:
- gpt2
widgets:
text_example_1:
- "One would be forgiven if one was not aware that Julian Assange is being"
title_example_1:
- "David North wsws"
text_example_2:
- "I would like to extend my sincerest greetings to the people of the world. When monstrous and absurd accusations were hurled at me and my family -- when"
title_example_2:
- "Leon Trotsky"
# GPT_2_Marxism is based on the gpt-2 355M model finetuned on a large corpus of Marxist documents, polemics and literature from historical and contemporary writers
# in the international socialist movement and the ICFI (fourth international) which upholds the principles which characterize genuine revolutionary marxism i.e. Trotskyism. # This finetuned gpt-2 model generates genuinely Marxist insights and responses.
# - Generated with the GPT2-355M model converted to pytorch using Max Woolf's aitextgen notebook (URL
# - "Finetuned on a large corpus of text mostly unstructured, unlabeled, raw copy and paste of entire selected works."
# - "Able to generate genuine Marxist responses"
# - "This model also generates insights that marxists often agree on, like freedom and equality."
import torch
import random
pip3 install aitextgen
import aitextgen
model = aitextgen("URL")
text = "one would be forgiven if one was not aware that Julian Assange is currently being"
model.generate(n=3, prompt="Lenin:"+str(text), max_length=77, temperature=random.uniform(0.5, 1.5), seed=random.randint(0, 195302), lstrip=False)
"""
Lenin:one would be forgiven if one was not aware that Julian Assange is currently being persecuted by the governments of the United States, the UK and many other countries in spite of, or perhaps because of, the fact that he is an outspoken enemy of imperialism. This not unexpected. In 2003 a law was passed in the US that allowed prosecution of those who helped the FBI to violate civil
==========
Lenin:one would be forgiven if one was not aware that Julian Assange is currently being investigated by the FBI for illegally departing Ecuador - (although I had no proof available at the time) with the purpose of, as it were, of snatching up the devious Clintonite. Indeed, such an intrusion seems all the more fishy from the standpoint of a serious study of the facts
==========
Lenin:one would be forgiven if one was not aware that Julian Assange is currently being extradited before the beginning of June to answer questions which require a presumption of guilt. This follows from the very revealing papers that WikiLeaks provided in relation to the numerous criminal cases, and of the complex international network which organised it, the publication by WikiLeaks of thousands of secret cables from the intelligence agencies of the
"""
|
[
"# GPT_2_Marxism is based on the gpt-2 355M model finetuned on a large corpus of Marxist documents, polemics and literature from historical and contemporary writers",
"# in the international socialist movement and the ICFI (fourth international) which upholds the principles which characterize genuine revolutionary marxism i.e. Trotskyism. # This finetuned gpt-2 model generates genuinely Marxist insights and responses.",
"# - Generated with the GPT2-355M model converted to pytorch using Max Woolf's aitextgen notebook (URL",
"# - \"Finetuned on a large corpus of text mostly unstructured, unlabeled, raw copy and paste of entire selected works.\"",
"# - \"Able to generate genuine Marxist responses\"",
"# - \"This model also generates insights that marxists often agree on, like freedom and equality.\"\n\nimport torch\nimport random\npip3 install aitextgen\nimport aitextgen\nmodel = aitextgen(\"URL\")\n\ntext = \"one would be forgiven if one was not aware that Julian Assange is currently being\"\nmodel.generate(n=3, prompt=\"Lenin:\"+str(text), max_length=77, temperature=random.uniform(0.5, 1.5), seed=random.randint(0, 195302), lstrip=False)\n\n\"\"\"\n\nLenin:one would be forgiven if one was not aware that Julian Assange is currently being persecuted by the governments of the United States, the UK and many other countries in spite of, or perhaps because of, the fact that he is an outspoken enemy of imperialism. This not unexpected. In 2003 a law was passed in the US that allowed prosecution of those who helped the FBI to violate civil\n\n==========\nLenin:one would be forgiven if one was not aware that Julian Assange is currently being investigated by the FBI for illegally departing Ecuador - (although I had no proof available at the time) with the purpose of, as it were, of snatching up the devious Clintonite. Indeed, such an intrusion seems all the more fishy from the standpoint of a serious study of the facts\n\n==========\nLenin:one would be forgiven if one was not aware that Julian Assange is currently being extradited before the beginning of June to answer questions which require a presumption of guilt. This follows from the very revealing papers that WikiLeaks provided in relation to the numerous criminal cases, and of the complex international network which organised it, the publication by WikiLeaks of thousands of secret cables from the intelligence agencies of the\n\n\"\"\""
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# GPT_2_Marxism is based on the gpt-2 355M model finetuned on a large corpus of Marxist documents, polemics and literature from historical and contemporary writers",
"# in the international socialist movement and the ICFI (fourth international) which upholds the principles which characterize genuine revolutionary marxism i.e. Trotskyism. # This finetuned gpt-2 model generates genuinely Marxist insights and responses.",
"# - Generated with the GPT2-355M model converted to pytorch using Max Woolf's aitextgen notebook (URL",
"# - \"Finetuned on a large corpus of text mostly unstructured, unlabeled, raw copy and paste of entire selected works.\"",
"# - \"Able to generate genuine Marxist responses\"",
"# - \"This model also generates insights that marxists often agree on, like freedom and equality.\"\n\nimport torch\nimport random\npip3 install aitextgen\nimport aitextgen\nmodel = aitextgen(\"URL\")\n\ntext = \"one would be forgiven if one was not aware that Julian Assange is currently being\"\nmodel.generate(n=3, prompt=\"Lenin:\"+str(text), max_length=77, temperature=random.uniform(0.5, 1.5), seed=random.randint(0, 195302), lstrip=False)\n\n\"\"\"\n\nLenin:one would be forgiven if one was not aware that Julian Assange is currently being persecuted by the governments of the United States, the UK and many other countries in spite of, or perhaps because of, the fact that he is an outspoken enemy of imperialism. This not unexpected. In 2003 a law was passed in the US that allowed prosecution of those who helped the FBI to violate civil\n\n==========\nLenin:one would be forgiven if one was not aware that Julian Assange is currently being investigated by the FBI for illegally departing Ecuador - (although I had no proof available at the time) with the purpose of, as it were, of snatching up the devious Clintonite. Indeed, such an intrusion seems all the more fishy from the standpoint of a serious study of the facts\n\n==========\nLenin:one would be forgiven if one was not aware that Julian Assange is currently being extradited before the beginning of June to answer questions which require a presumption of guilt. This follows from the very revealing papers that WikiLeaks provided in relation to the numerous criminal cases, and of the complex international network which organised it, the publication by WikiLeaks of thousands of secret cables from the intelligence agencies of the\n\n\"\"\""
] |
text-generation
|
transformers
|
## Fact checking
This generative model - trained on FEVER - aims to predict whether a claim is consistent with the provided evidence.
### Installation and simple usage
One quick way to install it is to type
```bash
pip install fact_checking
```
and then use the following code:
```python
from transformers import (
GPT2LMHeadModel,
GPT2Tokenizer,
)
from fact_checking import FactChecker
_evidence = """
Justine Tanya Bateman (born February 19, 1966) is an American writer, producer, and actress . She is best known for her regular role as Mallory Keaton on the sitcom Family Ties (1982 -- 1989). Until recently, Bateman ran a production and consulting company, SECTION 5 . In the fall of 2012, she started studying computer science at UCLA.
"""
_claim = 'Justine Bateman is a poet.'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
fact_checking_model = GPT2LMHeadModel.from_pretrained('fractalego/fact-checking')
fact_checker = FactChecker(fact_checking_model, tokenizer)
is_claim_true = fact_checker.validate(_evidence, _claim)
print(is_claim_true)
```
which gives the output
```bash
False
```
### Probabilistic output with replicas
The output can include a probabilistic component, obtained by iterating a number of times the output generation.
The system generates an ensemble of answers and groups them by Yes or No.
For example, one can ask
```python
from transformers import (
GPT2LMHeadModel,
GPT2Tokenizer,
)
from fact_checking import FactChecker
_evidence = """
Jane writes code for Huggingface.
"""
_claim = 'Jane is an engineer.'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
fact_checking_model = GPT2LMHeadModel.from_pretrained('fractalego/fact-checking')
fact_checker = FactChecker(fact_checking_model, tokenizer)
is_claim_true = fact_checker.validate_with_replicas(_evidence, _claim)
print(is_claim_true)
```
with output
```bash
{'Y': 0.95, 'N': 0.05}
```
### Score on FEVER
The predictions are evaluated on a subset of the FEVER dev dataset,
restricted to the SUPPORTING and REFUTING options:
| precision | recall | F1|
| --- | --- | --- |
|0.94|0.98|0.96|
These results should be taken with many grains of salt. This is still a work in progress,
and there might be leakage coming from the underlying GPT2 model unnaturally raising the scores.
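As a minimal sketch of how such an evaluation could be reproduced, the same `validate` call can be scored against labelled (evidence, claim) pairs; the toy examples below are illustrative only and do not come from FEVER.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from fact_checking import FactChecker

# Illustrative (evidence, claim, label) triples; the reported numbers use the FEVER dev split instead.
examples = [
    ("Justine Bateman started studying computer science at UCLA in 2012.",
     "Justine Bateman is a student.", True),
    ("Jane writes code for Huggingface.",
     "Jane is a poet.", False),
]

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
fact_checking_model = GPT2LMHeadModel.from_pretrained('fractalego/fact-checking')
fact_checker = FactChecker(fact_checking_model, tokenizer)

correct = sum(fact_checker.validate(evidence, claim) == label
              for evidence, claim, label in examples)
print(f"Toy accuracy: {correct / len(examples):.2f}")
```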
|
{}
|
fractalego/fact-checking
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"doi:10.57967/hf/0009",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #doi-10.57967/hf/0009 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Fact checking
-------------
This generative model - trained on FEVER - aims to predict whether a claim is consistent with the provided evidence.
### Installation and simple usage
One quick way to install it is to type
and then use the following code:
which gives the output
### Probabilistic output with replicas
The output can include a probabilistic component, obtained by iterating a number of times the output generation.
The system generates an ensemble of answers and groups them by Yes or No.
For example, one can ask
with output
### Score on FEVER
The predictions are evaluated on a subset of the FEVER dev dataset,
restricted to the SUPPORTING and REFUTING options:
precision: 0.94, recall: 0.98, F1: 0.96
These results should be taken with many grains of salt. This is still a work in progress,
and there might be leakage coming from the underlining GPT2 model unnaturally raising the scores.
|
[
"### Installation and simple usage\n\n\nOne quick way to install it is to type\n\n\nand then use the following code:\n\n\nwhich gives the output",
"### Probabilistic output with replicas\n\n\nThe output can include a probabilistic component, obtained by iterating a number of times the output generation.\nThe system generates an ensemble of answers and groups them by Yes or No.\n\n\nFor example, one can ask\n\n\nwith output",
"### Score on FEVER\n\n\nThe predictions are evaluated on a subset of the FEVER dev dataset,\nrestricted to the SUPPORTING and REFUTING options:\n\n\nprecision: 0.94, recall: 0.98, F1: 0.96\n\n\nThese results should be taken with many grains of salt. This is still a work in progress,\nand there might be leakage coming from the underlining GPT2 model unnaturally raising the scores."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #doi-10.57967/hf/0009 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Installation and simple usage\n\n\nOne quick way to install it is to type\n\n\nand then use the following code:\n\n\nwhich gives the output",
"### Probabilistic output with replicas\n\n\nThe output can include a probabilistic component, obtained by iterating a number of times the output generation.\nThe system generates an ensemble of answers and groups them by Yes or No.\n\n\nFor example, one can ask\n\n\nwith output",
"### Score on FEVER\n\n\nThe predictions are evaluated on a subset of the FEVER dev dataset,\nrestricted to the SUPPORTING and REFUTING options:\n\n\nprecision: 0.94, recall: 0.98, F1: 0.96\n\n\nThese results should be taken with many grains of salt. This is still a work in progress,\nand there might be leakage coming from the underlining GPT2 model unnaturally raising the scores."
] |
question-answering
|
transformers
|
## Introduction
This is a zero-shot relation extractor based on the paper [Exploring the zero-shot limit of FewRel](https://www.aclweb.org/anthology/2020.coling-main.124).
## Installation
```bash
$ pip install zero-shot-re
```
## Run the Extractor
```python
from transformers import AutoTokenizer
from zero_shot_re import RelTaggerModel, RelationExtractor
model = RelTaggerModel.from_pretrained("fractalego/fewrel-zero-shot")
tokenizer = AutoTokenizer.from_pretrained("fractalego/fewrel-zero-shot")
relations = ['noble title', 'founding date', 'occupation of a person']
extractor = RelationExtractor(model, tokenizer, relations)
ranked_rels = extractor.rank(text='John Smith received an OBE', head='John Smith', tail='OBE')
print(ranked_rels)
```
with results
```python3
[('noble title', 0.9690611883997917),
('occupation of a person', 0.0012609362602233887),
('founding date', 0.00024014711380004883)]
```
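As a usage note, `rank` returns every candidate relation with a score, so a simple confidence threshold can be applied to keep only plausible relations; the threshold below is an arbitrary illustration, not part of the library.
```python
from transformers import AutoTokenizer
from zero_shot_re import RelTaggerModel, RelationExtractor

model = RelTaggerModel.from_pretrained("fractalego/fewrel-zero-shot")
tokenizer = AutoTokenizer.from_pretrained("fractalego/fewrel-zero-shot")

relations = ['noble title', 'founding date', 'occupation of a person']
extractor = RelationExtractor(model, tokenizer, relations)

ranked_rels = extractor.rank(text='John Smith received an OBE', head='John Smith', tail='OBE')

# Keep only relations whose score clears an (arbitrary) confidence threshold.
threshold = 0.5
plausible = [(rel, score) for rel, score in ranked_rels if score >= threshold]
print(plausible)  # e.g. [('noble title', 0.969...)]
```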
## Accuracy
The results as in the paper are
| Model | 0-shot 5-ways | 0-shot 10-ways |
|------------------------|--------------|----------------|
|(1) Distillbert |70.1±0.5 | 55.9±0.6 |
|(2) Bert Large |80.8±0.4 | 69.6±0.5 |
|(3) Distillbert + SQUAD |81.3±0.4 | 70.0±0.2 |
|(4) Bert Large + SQUAD |86.0±0.6 | 76.2±0.4 |
This version uses the (4) Bert Large + SQUAD model.
## Cite as
```bibtex
@inproceedings{cetoli-2020-exploring,
title = "Exploring the zero-shot limit of {F}ew{R}el",
author = "Cetoli, Alberto",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.124",
doi = "10.18653/v1/2020.coling-main.124",
pages = "1447--1451",
abstract = "This paper proposes a general purpose relation extractor that uses Wikidata descriptions to represent the relation{'}s surface form. The results are tested on the FewRel 1.0 dataset, which provides an excellent framework for training and evaluating the proposed zero-shot learning system in English. This relation extractor architecture exploits the implicit knowledge of a language model through a question-answering approach.",
}
```
|
{}
|
fractalego/fewrel-zero-shot
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #endpoints_compatible #region-us
|
Introduction
------------
This is a zero-shot relation extractor based on the paper Exploring the zero-shot limit of FewRel.
Installation
------------
Run the Extractor
-----------------
with results
Accuracy
--------
The results as in the paper are
Model: (1) Distillbert, 0-shot 5-ways: 70.1±0.5, 0-shot 10-ways: 55.9±0.6
Model: (2) Bert Large, 0-shot 5-ways: 80.8±0.4, 0-shot 10-ways: 69.6±0.5
Model: (3) Distillbert + SQUAD, 0-shot 5-ways: 81.3±0.4, 0-shot 10-ways: 70.0±0.2
Model: (4) Bert Large + SQUAD, 0-shot 5-ways: 86.0±0.6, 0-shot 10-ways: 76.2±0.4
This version uses the (4) Bert Large + SQUAD model
Cite as
-------
|
[] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# Personal speech to text model
Speech-to-text models often do not understand my accent, so I fine-tuned this one from "facebook/wav2vec2-large-robust-ft-swbd-300h" using about 1000 recordings of my voice.
Do not download unless you have exactly my accent.
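A minimal transcription sketch, assuming the repository ships the usual wav2vec2 processor files and that the input is a 16 kHz mono recording; the file name below is illustrative.
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("fractalego/personal-speech-to-text-model")
model = Wav2Vec2ForCTC.from_pretrained("fractalego/personal-speech-to-text-model")

# Illustrative file name; the model expects 16 kHz mono audio.
speech, sample_rate = sf.read("my_recording.wav")
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```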
|
{}
|
fractalego/personal-speech-to-text-model
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #has_space #region-us
|
# Personal speech to text model
s2t models often do not understand my accent, so I fine tuned this one from "facebook/wav2vec2-large-robust-ft-swbd-300h" using about 1000 recordings of my voice.
Do not download unless you have exactly my accent.
|
[
"# Personal speech to text model\ns2t models often do not understand my accent, so I fine tuned this one from \"facebook/wav2vec2-large-robust-ft-swbd-300h\" using about 1000 recordings of my voice.\n\nDo not download unless you have exactly my accent."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #has_space #region-us \n",
"# Personal speech to text model\ns2t models often do not understand my accent, so I fine tuned this one from \"facebook/wav2vec2-large-robust-ft-swbd-300h\" using about 1000 recordings of my voice.\n\nDo not download unless you have exactly my accent."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1002
- Accuracy: 0.9406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9039 | 1.0 | 318 | 0.5777 | 0.7335 |
| 0.4486 | 2.0 | 636 | 0.2860 | 0.8768 |
| 0.2528 | 3.0 | 954 | 0.1792 | 0.9210 |
| 0.176 | 4.0 | 1272 | 0.1398 | 0.9274 |
| 0.1417 | 5.0 | 1590 | 0.1209 | 0.9329 |
| 0.1245 | 6.0 | 1908 | 0.1110 | 0.94 |
| 0.1135 | 7.0 | 2226 | 0.1061 | 0.9390 |
| 0.1074 | 8.0 | 2544 | 0.1026 | 0.94 |
| 0.1032 | 9.0 | 2862 | 0.1006 | 0.9410 |
| 0.1017 | 10.0 | 3180 | 0.1002 | 0.9406 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
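A minimal inference sketch with the text-classification pipeline; the utterance is illustrative and the predicted label is one of the clinc_oos intents.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="frahman/distilbert-base-uncased-distilled-clinc",
)

# Illustrative utterance; the predicted label is a clinc_oos intent.
print(classifier("Can you tell me how to transfer money to my savings account?"))
```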
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9406451612903226, "name": "Accuracy"}]}]}]}
|
frahman/distilbert-base-uncased-distilled-clinc
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-distilled-clinc
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the clinc\_oos dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1002
* Accuracy: 0.9406
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 48
* eval\_batch\_size: 48
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7703
- Accuracy: 0.9187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 |
| 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 |
| 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 |
| 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
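As a rough sketch, the hyperparameters listed above map onto `TrainingArguments` as shown below; the output directory and the per-epoch evaluation setting are assumptions, and the exact training script is not part of this card.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation results above
)
```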
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9187096774193548, "name": "Accuracy"}]}]}]}
|
frahman/distilbert-base-uncased-finetuned-clinc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-clinc
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the clinc\_oos dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7703
* Accuracy: 0.9187
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 48
* eval\_batch\_size: 48
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.9205
- F1: 0.9207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8234 | 1.0 | 250 | 0.3185 | 0.9025 | 0.8992 |
| 0.2466 | 2.0 | 500 | 0.2202 | 0.9205 | 0.9207 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
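A sketch of a `compute_metrics` function consistent with the accuracy and F1 reported above, assuming weighted-average F1 (the averaging mode is not stated in this card).
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # Weighted averaging is an assumption; the card only reports a single F1 value.
        "f1": f1_score(labels, preds, average="weighted"),
    }
```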
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9205, "name": "Accuracy"}, {"type": "f1", "value": 0.9206660865871332, "name": "F1"}]}]}]}
|
frahman/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2202
* Accuracy: 0.9205
* F1: 0.9207
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
# SciBERT finetuned on JNLPA for NER downstream task
## Language Model
[SciBERT](https://arxiv.org/pdf/1903.10676.pdf) is a pretrained language model based on BERT and trained by the
[Allen Institute for AI](https://allenai.org/) on papers from the corpus of
[Semantic Scholar](https://www.semanticscholar.org/).
Corpus size is 1.14M papers, 3.1B tokens. SciBERT has its own vocabulary (scivocab) that's built to best match
the training corpus.
## Downstream task
[`allenai/scibert_scivocab_cased`](https://huggingface.co/allenai/scibert_scivocab_cased#) has been finetuned for Named Entity
Recognition (NER) downstream task. The code to train the NER can be found [here](https://github.com/fran-martinez/bio_ner_bert).
### Data
The corpus used to fine-tune the NER is [BioNLP / JNLPBA shared task](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004).
- Training data consist of 2,000 PubMed abstracts with term/word annotation. This corresponds to 18,546 samples (sentences).
- Evaluation data consist of 404 PubMed abstracts with term/word annotation. This corresponds to 3,856 samples (sentences).
The classes (at word level) and their distribution (number of examples for each class) for the training and evaluation datasets are shown below:
| Class Label | # training examples| # evaluation examples|
|:--------------|--------------:|----------------:|
|O | 382,963 | 81,647 |
|B-protein | 30,269 | 5,067 |
|I-protein | 24,848 | 4,774 |
|B-cell_type | 6,718 | 1,921 |
|I-cell_type | 8,748 | 2,991 |
|B-DNA | 9,533 | 1,056 |
|I-DNA | 15,774 | 1,789 |
|B-cell_line | 3,830 | 500 |
|I-cell_line | 7,387 | 989 |
|B-RNA | 951 | 118 |
|I-RNA | 1,530 | 187 |
### Model
An exhaustive hyperparameter search was done.
The hyperparameters that provided the best results are:
- Max length sequence: 128
- Number of epochs: 6
- Batch size: 32
- Dropout: 0.3
- Optimizer: Adam
The learning rate was 5e-5 with a linearly decreasing schedule. A warmup was applied at the beginning of training,
covering a ratio of 0.1 of the total training steps.
The model from the epoch with the best F1-score was selected, in this case, the model from epoch 5.
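A minimal sketch of the optimizer and scheduler setup described above (Adam at 5e-5, linear decay, 10% warmup), starting from the base SciBERT checkpoint; the step counts are placeholders derived from the dataset size, batch size and number of epochs, and this is not the original training script.
```python
import torch
from transformers import AutoModelForTokenClassification, get_linear_schedule_with_warmup

# 11 labels for the JNLPBA tag set (O plus B-/I- for the five entity types).
model = AutoModelForTokenClassification.from_pretrained(
    "allenai/scibert_scivocab_cased", num_labels=11
)

# Placeholder step counts: 18,546 training sentences, batch size 32, 6 epochs.
steps_per_epoch = 18546 // 32
total_steps = steps_per_epoch * 6
warmup_steps = int(0.1 * total_steps)  # warmup ratio of 0.1, as described above

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```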
### Evaluation
The following table shows the evaluation metrics calculated at span/entity level:
| | precision| recall| f1-score|
|:---------|-----------:|---------:|---------:|
cell_line | 0.5205 | 0.7100 | 0.6007 |
cell_type | 0.7736 | 0.7422 | 0.7576 |
protein | 0.6953 | 0.8459 | 0.7633 |
DNA | 0.6997 | 0.7894 | 0.7419 |
RNA | 0.6985 | 0.8051 | 0.7480 |
| | | |
**micro avg** | 0.6984 | 0.8076 | 0.7490|
**macro avg** | 0.7032 | 0.8076 | 0.7498 |
The macro F1-score is equal to 0.7498, compared to the value provided by the Allen Institute for AI in their
[paper](https://arxiv.org/pdf/1903.10676.pdf), which is equal to 0.7728. This drop in performance could be due to
several reasons, but one hypothesis is that the authors used an additional conditional random field,
while this model uses a regular classification layer with softmax activation on top of the SciBERT model.
At word level, this model achieves a precision of 0.7742, a recall of 0.8536 and an F1-score of 0.8093.
### Model usage in inference
Use the pipeline:
````python
from transformers import pipeline
text = "Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes."
nlp_ner = pipeline("ner",
model='fran-martinez/scibert_scivocab_cased_ner_jnlpba',
tokenizer='fran-martinez/scibert_scivocab_cased_ner_jnlpba')
nlp_ner(text)
"""
Output:
---------------------------
[
{'word': 'glucocorticoid',
'score': 0.9894881248474121,
'entity': 'B-protein'},
{'word': 'receptor',
'score': 0.989505410194397,
'entity': 'I-protein'},
{'word': 'normal',
'score': 0.7680378556251526,
'entity': 'B-cell_type'},
{'word': 'cs',
'score': 0.5176806449890137,
'entity': 'I-cell_type'},
{'word': 'lymphocytes',
'score': 0.9898491501808167,
'entity': 'I-cell_type'}
]
"""
````
Or load model and tokenizer as follows:
````python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
# Example
text = "Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes."
# Load model
tokenizer = AutoTokenizer.from_pretrained("fran-martinez/scibert_scivocab_cased_ner_jnlpba")
model = AutoModelForTokenClassification.from_pretrained("fran-martinez/scibert_scivocab_cased_ner_jnlpba")
# Get input for BERT
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
# Predict
with torch.no_grad():
outputs = model(input_ids)
# From the output let's take the first element of the tuple.
# Then, let's get rid of [CLS] and [SEP] tokens (first and last)
predictions = outputs[0].argmax(axis=-1)[0][1:-1]
# Map label class indexes to string labels.
for token, pred in zip(tokenizer.tokenize(text), predictions):
print(token, '->', model.config.id2label[pred.numpy().item()])
"""
Output:
---------------------------
mouse -> O
thymus -> O
was -> O
used -> O
as -> O
a -> O
source -> O
of -> O
glucocorticoid -> B-protein
receptor -> I-protein
from -> O
normal -> B-cell_type
cs -> I-cell_type
lymphocytes -> I-cell_type
. -> O
"""
````
|
{"language": "scientific english"}
|
fran-martinez/scibert_scivocab_cased_ner_jnlpba
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"arxiv:1903.10676",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1903.10676"
] |
[
"scientific english"
] |
TAGS
#transformers #pytorch #jax #bert #token-classification #arxiv-1903.10676 #autotrain_compatible #endpoints_compatible #region-us
|
SciBERT finetuned on JNLPA for NER downstream task
==================================================
Language Model
--------------
SciBERT is a pretrained language model based on BERT and trained by the
Allen Institute for AI on papers from the corpus of
Semantic Scholar.
Corpus size is 1.14M papers, 3.1B tokens. SciBERT has its own vocabulary (scivocab) that's built to best match
the training corpus.
Downstream task
---------------
'allenai/scibert\_scivocab\_cased' has been finetuned for Named Entity
Recognition (NER) dowstream task. The code to train the NER can be found here.
### Data
The corpus used to fine-tune the NER is BioNLP / JNLPBA shared task.
* Training data consist of 2,000 PubMed abstracts with term/word annotation. This corresponds to 18,546 samples (senteces).
* Evaluation data consist of 404 PubMed abstracts with term/word annotation. This corresponds to 3,856 samples (sentences).
The classes (at word level) and its distribution (number of examples for each class) for training and evaluation datasets are shown below:
### Model
An exhaustive hyperparameter search was done.
The hyperparameters that provided the best results are:
* Max length sequence: 128
* Number of epochs: 6
* Batch size: 32
* Dropout: 0.3
* Optimizer: Adam
The used learning rate was 5e-5 with a decreasing linear schedule. A warmup was used at the beggining of the training
with a ratio of steps equal to 0.1 from the total training steps.
The model from the epoch with the best F1-score was selected, in this case, the model from epoch 5.
### Evaluation
The following table shows the evaluation metrics calculated at span/entity level:
The macro F1-score is equal to 0.7498, compared to the value provided by the Allen Institute for AI in their
paper, which is equal to 0.7728. This drop in performance could be due to
several reasons, but one hypothesis could be the fact that the authors used an additional conditional random field,
while this model uses a regular classification layer with softmax activation on top of SciBERT model.
At word level, this model achieves a precision of 0.7742, a recall of 0.8536 and a F1-score of 0.8093.
### Model usage in inference
Use the pipeline:
'
Or load model and tokenizer as follows:
'
|
[
"### Data\n\n\nThe corpus used to fine-tune the NER is BioNLP / JNLPBA shared task.\n\n\n* Training data consist of 2,000 PubMed abstracts with term/word annotation. This corresponds to 18,546 samples (senteces).\n* Evaluation data consist of 404 PubMed abstracts with term/word annotation. This corresponds to 3,856 samples (sentences).\n\n\nThe classes (at word level) and its distribution (number of examples for each class) for training and evaluation datasets are shown below:",
"### Model\n\n\nAn exhaustive hyperparameter search was done.\nThe hyperparameters that provided the best results are:\n\n\n* Max length sequence: 128\n* Number of epochs: 6\n* Batch size: 32\n* Dropout: 0.3\n* Optimizer: Adam\n\n\nThe used learning rate was 5e-5 with a decreasing linear schedule. A warmup was used at the beggining of the training\nwith a ratio of steps equal to 0.1 from the total training steps.\n\n\nThe model from the epoch with the best F1-score was selected, in this case, the model from epoch 5.",
"### Evaluation\n\n\nThe following table shows the evaluation metrics calculated at span/entity level:\n\n\n\nThe macro F1-score is equal to 0.7498, compared to the value provided by the Allen Institute for AI in their\npaper, which is equal to 0.7728. This drop in performance could be due to\nseveral reasons, but one hypothesis could be the fact that the authors used an additional conditional random field,\nwhile this model uses a regular classification layer with softmax activation on top of SciBERT model.\n\n\nAt word level, this model achieves a precision of 0.7742, a recall of 0.8536 and a F1-score of 0.8093.",
"### Model usage in inference\n\n\nUse the pipeline:\n'\nOr load model and tokenizer as follows:\n'"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #token-classification #arxiv-1903.10676 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Data\n\n\nThe corpus used to fine-tune the NER is BioNLP / JNLPBA shared task.\n\n\n* Training data consist of 2,000 PubMed abstracts with term/word annotation. This corresponds to 18,546 samples (senteces).\n* Evaluation data consist of 404 PubMed abstracts with term/word annotation. This corresponds to 3,856 samples (sentences).\n\n\nThe classes (at word level) and its distribution (number of examples for each class) for training and evaluation datasets are shown below:",
"### Model\n\n\nAn exhaustive hyperparameter search was done.\nThe hyperparameters that provided the best results are:\n\n\n* Max length sequence: 128\n* Number of epochs: 6\n* Batch size: 32\n* Dropout: 0.3\n* Optimizer: Adam\n\n\nThe used learning rate was 5e-5 with a decreasing linear schedule. A warmup was used at the beggining of the training\nwith a ratio of steps equal to 0.1 from the total training steps.\n\n\nThe model from the epoch with the best F1-score was selected, in this case, the model from epoch 5.",
"### Evaluation\n\n\nThe following table shows the evaluation metrics calculated at span/entity level:\n\n\n\nThe macro F1-score is equal to 0.7498, compared to the value provided by the Allen Institute for AI in their\npaper, which is equal to 0.7728. This drop in performance could be due to\nseveral reasons, but one hypothesis could be the fact that the authors used an additional conditional random field,\nwhile this model uses a regular classification layer with softmax activation on top of SciBERT model.\n\n\nAt word level, this model achieves a precision of 0.7742, a recall of 0.8536 and a F1-score of 0.8093.",
"### Model usage in inference\n\n\nUse the pipeline:\n'\nOr load model and tokenizer as follows:\n'"
] |
question-answering
|
transformers
|
**[`microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext`](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_qa.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py)**
Tuning script:
```bash
BASE_MODEL=microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
OUTPUT_DIR=~/Documents/projects/tunned_models/ms_pubmed_bert_squadv2/
python run_qa.py \
--model_name_or_path $BASE_MODEL\
--dataset_name squad_v2 \
--do_train \
--do_eval \
--version_2_with_negative \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir $OUTPUT_DIR
```
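A minimal inference sketch with the question-answering pipeline; the question/context pair below is illustrative only.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="franklu/pubmed_bert_squadv2")

# Illustrative biomedical question/context pair.
result = qa(
    question="Which receptor does the drug target?",
    context="The study showed that the drug binds the EGFR receptor in lung cancer cell lines.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': 'EGFR'}
```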
|
{}
|
franklu/pubmed_bert_squadv2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #endpoints_compatible #has_space #region-us
|
'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext' fine-tuned on 'SQuAD V2' using 'run_qa.py'
Tunning script:
|
[] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #endpoints_compatible #has_space #region-us \n"
] |
image-classification
|
transformers
|
# CSP-Darknet-53 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The CSP-Darknet-53 architecture was introduced in [this paper](https://arxiv.org/pdf/1911.11929.pdf).
## Model description
The core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/cspdarknet53").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
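As a small follow-up, the class probabilities can be reduced to the top predictions with plain PyTorch; the random tensor below only stands in for the `probs` produced by the snippet above.
```python
import torch

# Stand-in for the "probs" tensor produced by the inference snippet above.
probs = torch.softmax(torch.randn(10), dim=0)

top_probs, top_idxs = probs.topk(5)
for p, i in zip(top_probs.tolist(), top_idxs.tolist()):
    print(f"class {i}: {p:.3f}")
```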
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1911-11929,
author = {Chien{-}Yao Wang and
Hong{-}Yuan Mark Liao and
I{-}Hau Yeh and
Yueh{-}Hua Wu and
Ping{-}Yang Chen and
Jun{-}Wei Hsieh},
title = {CSPNet: {A} New Backbone that can Enhance Learning Capability of {CNN}},
journal = {CoRR},
volume = {abs/1911.11929},
year = {2019},
url = {http://arxiv.org/abs/1911.11929},
eprinttype = {arXiv},
eprint = {1911.11929},
timestamp = {Tue, 03 Dec 2019 20:41:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-11929.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch"], "datasets": ["frgfm/imagenette"]}
|
frgfm/cspdarknet53
| null |
[
"transformers",
"pytorch",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1911.11929",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.11929"
] |
[] |
TAGS
#transformers #pytorch #image-classification #dataset-frgfm/imagenette #arxiv-1911.11929 #license-apache-2.0 #endpoints_compatible #region-us
|
# CSP-Darknet-53 model
Pretrained on ImageNette. The CSP-Darknet-53 architecture was introduced in this paper.
## Model description
The core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# CSP-Darknet-53 model\n\nPretrained on ImageNette. The CSP-Darknet-53 architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #image-classification #dataset-frgfm/imagenette #arxiv-1911.11929 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# CSP-Darknet-53 model\n\nPretrained on ImageNette. The CSP-Darknet-53 architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# CSP-Darknet-53 Mish model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The CSP-Darknet-53 Mish architecture was introduced in [this paper](https://arxiv.org/pdf/1911.11929.pdf).
## Model description
The core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture and replace activations with Mish.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/cspdarknet53_mish").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1911-11929,
author = {Chien{-}Yao Wang and
Hong{-}Yuan Mark Liao and
I{-}Hau Yeh and
Yueh{-}Hua Wu and
Ping{-}Yang Chen and
Jun{-}Wei Hsieh},
title = {CSPNet: {A} New Backbone that can Enhance Learning Capability of {CNN}},
journal = {CoRR},
volume = {abs/1911.11929},
year = {2019},
url = {http://arxiv.org/abs/1911.11929},
eprinttype = {arXiv},
eprint = {1911.11929},
timestamp = {Tue, 03 Dec 2019 20:41:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-11929.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch"], "datasets": ["frgfm/imagenette"]}
|
frgfm/cspdarknet53_mish
| null |
[
"transformers",
"pytorch",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1911.11929",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.11929"
] |
[] |
TAGS
#transformers #pytorch #image-classification #dataset-frgfm/imagenette #arxiv-1911.11929 #license-apache-2.0 #endpoints_compatible #region-us
|
# CSP-Darknet-53 Mish model
Pretrained on ImageNette. The CSP-Darknet-53 Mish architecture was introduced in this paper.
## Model description
The core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture and replace activations with Mish.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# CSP-Darknet-53 Mish model\n\nPretrained on ImageNette. The CSP-Darknet-53 Mish architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture and replace activations with Mish.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #image-classification #dataset-frgfm/imagenette #arxiv-1911.11929 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# CSP-Darknet-53 Mish model\n\nPretrained on ImageNette. The CSP-Darknet-53 Mish architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture and replace activations with Mish.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# Darknet-19 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The Darknet-19 architecture was introduced in [this paper](https://pjreddie.com/media/files/papers/YOLO9000.pdf).
## Model description
The core idea of the author is to combine the high throughput of a highway net with the performance gains of better activations (Leaky ReLU) and batch normalization. This architecture is used as the backbone for YOLOv2.
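For illustration, here is a minimal sketch of the repeating unit this describes (3x3 convolution, batch normalization, Leaky ReLU). The module name and channel counts are made up for the example and do not come from Holocron's implementation:
```python
import torch
from torch import nn

class DarknetConvBlock(nn.Module):
    """Conv -> BatchNorm -> LeakyReLU, the basic unit stacked throughout Darknet-19."""

    def __init__(self, in_channels: int, out_channels: int) -> None:
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

block = DarknetConvBlock(3, 32)
print(block(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 32, 224, 224])
```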
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/darknet19").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/RedmonF16,
author = {Joseph Redmon and
Ali Farhadi},
title = {{YOLO9000:} Better, Faster, Stronger},
journal = {CoRR},
volume = {abs/1612.08242},
year = {2016},
url = {http://arxiv.org/abs/1612.08242},
eprinttype = {arXiv},
eprint = {1612.08242},
timestamp = {Mon, 13 Aug 2018 16:48:25 +0200},
biburl = {https://dblp.org/rec/journals/corr/RedmonF16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch"], "datasets": ["frgfm/imagenette"]}
|
frgfm/darknet19
| null |
[
"transformers",
"pytorch",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1612.08242",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1612.08242"
] |
[] |
TAGS
#transformers #pytorch #image-classification #dataset-frgfm/imagenette #arxiv-1612.08242 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Darknet-19 model
Pretrained on ImageNette. The Darknet-19 architecture was introduced in this paper.
## Model description
The core idea of the author is to combine high throughput of a highway net with performance gains using better activations (Leaky ReLU) and batch normalization. This architecture is used as a backbone for YOLOv2.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# Darknet-19 model\n\nPretrained on ImageNette. The Darknet-19 architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to combine high throughput of a highway net with performance gains using better activations (Leaky ReLU) and batch normalization. This architecture is used as a backbone for YOLOv2.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #image-classification #dataset-frgfm/imagenette #arxiv-1612.08242 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Darknet-19 model\n\nPretrained on ImageNette. The Darknet-19 architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to combine high throughput of a highway net with performance gains using better activations (Leaky ReLU) and batch normalization. This architecture is used as a backbone for YOLOv2.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# Darknet-53 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The Darknet-53 architecture was introduced in [this paper](https://pjreddie.com/media/files/papers/YOLOv3.pdf).
## Model description
The core idea of the author is to increase the depth of the Darknet-19 architecture and to add shortcut connections to ease gradient propagation.
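To make the shortcut idea concrete, here is a minimal sketch of a Darknet-style residual unit (a 1x1 then a 3x3 convolution, with an identity shortcut). The class name and channel choices are illustrative only, not Holocron's code:
```python
import torch
from torch import nn

class DarknetResidual(nn.Module):
    """1x1 then 3x3 convolution whose output is added back to the input."""

    def __init__(self, channels: int) -> None:
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels // 2, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)  # the shortcut eases gradient propagation

res = DarknetResidual(64)
print(res(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```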
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/darknet53").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1804-02767,
author = {Joseph Redmon and
Ali Farhadi},
title = {YOLOv3: An Incremental Improvement},
journal = {CoRR},
volume = {abs/1804.02767},
year = {2018},
url = {http://arxiv.org/abs/1804.02767},
eprinttype = {arXiv},
eprint = {1804.02767},
timestamp = {Mon, 13 Aug 2018 16:48:24 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1804-02767.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch"], "datasets": ["frgfm/imagenette"]}
|
frgfm/darknet53
| null |
[
"transformers",
"pytorch",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1804.02767",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.02767"
] |
[] |
TAGS
#transformers #pytorch #image-classification #dataset-frgfm/imagenette #arxiv-1804.02767 #license-apache-2.0 #endpoints_compatible #region-us
|
# Darknet-53 model
Pretrained on ImageNette. The Darknet-53 architecture was introduced in this paper.
## Model description
The core idea of the author is to increase the depth of the Darknet-19 architecture, and adding shortcut connections to ease the gradient propagation.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# Darknet-53 model\n\nPretrained on ImageNette. The Darknet-53 architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to increase the depth of the Darknet-19 architecture, and adding shortcut connections to ease the gradient propagation.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #image-classification #dataset-frgfm/imagenette #arxiv-1804.02767 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Darknet-53 model\n\nPretrained on ImageNette. The Darknet-53 architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to increase the depth of the Darknet-19 architecture, and adding shortcut connections to ease the gradient propagation.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# RepVGG-A0 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf).
## Model description
The core idea of the author is to distinguish the training architecture (with shortcut connections) from the inference one (a pure highway network). Thanks to the design of its residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.
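As a rough sketch of what the reparametrization relies on: each trained branch pairs a convolution with batch normalization, and at inference time that pair can be folded into a single convolution with adjusted weights. The helper below shows this conv/BN fusion step in isolation (the full RepVGG conversion also merges the 1x1 and identity branches); it is an illustrative sketch, not the Holocron conversion code:
```python
import torch
from torch import nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm layer into the preceding convolution (inference only)."""
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels, conv.kernel_size,
        stride=conv.stride, padding=conv.padding, bias=True,
    )
    with torch.no_grad():
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

conv, bn = nn.Conv2d(8, 8, 3, padding=1), nn.BatchNorm2d(8)
conv.eval()
bn.eval()
x = torch.randn(1, 8, 16, 16)
print(torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-6))  # expect True
```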
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/repvgg_a0").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2101-03697,
author = {Xiaohan Ding and
Xiangyu Zhang and
Ningning Ma and
Jungong Han and
Guiguang Ding and
Jian Sun},
title = {RepVGG: Making VGG-style ConvNets Great Again},
journal = {CoRR},
volume = {abs/2101.03697},
year = {2021},
url = {https://arxiv.org/abs/2101.03697},
eprinttype = {arXiv},
eprint = {2101.03697},
timestamp = {Tue, 09 Feb 2021 15:29:34 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/repvgg_a0
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2101.03697",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.03697"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2101.03697 #license-apache-2.0 #endpoints_compatible #region-us
|
# RepVGG-A0 model
Pretrained on ImageNette. The RepVGG architecture was introduced in this paper.
## Model description
The core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# RepVGG-A0 model\n\nPretrained on ImageNette. The RepVGG architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2101.03697 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# RepVGG-A0 model\n\nPretrained on ImageNette. The RepVGG architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# RepVGG-A1 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf).
## Model description
The core idea of the author is to distinguish the training architecture (with shortcut connections) from the inference one (a pure highway network). Thanks to the design of its residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/repvgg_a1").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2101-03697,
author = {Xiaohan Ding and
Xiangyu Zhang and
Ningning Ma and
Jungong Han and
Guiguang Ding and
Jian Sun},
title = {RepVGG: Making VGG-style ConvNets Great Again},
journal = {CoRR},
volume = {abs/2101.03697},
year = {2021},
url = {https://arxiv.org/abs/2101.03697},
eprinttype = {arXiv},
eprint = {2101.03697},
timestamp = {Tue, 09 Feb 2021 15:29:34 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/repvgg_a1
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2101.03697",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.03697"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2101.03697 #license-apache-2.0 #endpoints_compatible #region-us
|
# RepVGG-A1 model
Pretrained on ImageNette. The RepVGG architecture was introduced in this paper.
## Model description
The core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# RepVGG-A1 model\n\nPretrained on ImageNette. The RepVGG architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2101.03697 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# RepVGG-A1 model\n\nPretrained on ImageNette. The RepVGG architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# RepVGG-A2 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf).
## Model description
The core idea of the author is to distinguish the training architecture (with shortcut connections) from the inference one (a pure highway network). Thanks to the design of its residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/repvgg_a2").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2101-03697,
author = {Xiaohan Ding and
Xiangyu Zhang and
Ningning Ma and
Jungong Han and
Guiguang Ding and
Jian Sun},
title = {RepVGG: Making VGG-style ConvNets Great Again},
journal = {CoRR},
volume = {abs/2101.03697},
year = {2021},
url = {https://arxiv.org/abs/2101.03697},
eprinttype = {arXiv},
eprint = {2101.03697},
timestamp = {Tue, 09 Feb 2021 15:29:34 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/repvgg_a2
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2101.03697",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.03697"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2101.03697 #license-apache-2.0 #endpoints_compatible #region-us
|
# RepVGG-A2 model
Pretrained on ImageNette. The RepVGG architecture was introduced in this paper.
## Model description
The core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# RepVGG-A2 model\n\nPretrained on ImageNette. The RepVGG architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2101.03697 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# RepVGG-A2 model\n\nPretrained on ImageNette. The RepVGG architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# ResNet-18 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ResNet architecture was introduced in [this paper](https://arxiv.org/pdf/1512.03385.pdf).
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
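A minimal sketch of the basic block this refers to (two 3x3 convolutions plus an identity skip connection) is shown below. It ignores striding and downsampling and is not Holocron's exact implementation:
```python
import torch
from torch import nn

class BasicBlock(nn.Module):
    """Two 3x3 convolutions whose output is summed with the identity path."""

    def __init__(self, channels: int) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # gradients can flow through the identity path

print(BasicBlock(64)(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```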
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/resnet18").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/resnet18
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1512.03385",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1512.03385"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-1512.03385 #license-apache-2.0 #endpoints_compatible #region-us
|
# ResNet-18 model
Pretrained on ImageNette. The ResNet architecture was introduced in this paper.
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# ResNet-18 model\n\nPretrained on ImageNette. The ResNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-1512.03385 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# ResNet-18 model\n\nPretrained on ImageNette. The ResNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# ResNet-34 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ResNet architecture was introduced in [this paper](https://arxiv.org/pdf/1512.03385.pdf).
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/resnet34").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/resnet34
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1512.03385",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1512.03385"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-1512.03385 #license-apache-2.0 #endpoints_compatible #region-us
|
# ResNet-34 model
Pretrained on ImageNette. The ResNet architecture was introduced in this paper.
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# ResNet-34 model\n\nPretrained on ImageNette. The ResNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-1512.03385 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# ResNet-34 model\n\nPretrained on ImageNette. The ResNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# ReXNet-1.0x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
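For reference, a generic Squeeze-Excitation block looks like the sketch below: global average pooling to "squeeze" each channel, two fully-connected layers to "excite", then channel re-weighting. The exact ReXNet variant differs in its details, so treat this as an illustration only:
```python
import torch
from torch import nn

class SqueezeExcitation(nn.Module):
    """Channel-wise gating: squeeze (global pooling) then excite (two FC layers)."""

    def __init__(self, channels: int, reduction: int = 16) -> None:
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # re-weight channels, damping redundant ones

print(SqueezeExcitation(64)(torch.randn(1, 64, 28, 28)).shape)  # torch.Size([1, 64, 28, 28])
```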
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/rexnet1_0x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/rexnet1_0x
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.00992"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2007.00992 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# ReXNet-1.0x model
Pretrained on ImageNette. The ReXNet architecture was introduced in this paper.
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# ReXNet-1.0x model\n\nPretrained on ImageNette. The ReXNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2007.00992 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# ReXNet-1.0x model\n\nPretrained on ImageNette. The ReXNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# ReXNet-1.3x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/rexnet1_3x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/rexnet1_3x
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.00992"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2007.00992 #license-apache-2.0 #endpoints_compatible #region-us
|
# ReXNet-1.3x model
Pretrained on ImageNette. The ReXNet architecture was introduced in this paper.
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# ReXNet-1.3x model\n\nPretrained on ImageNette. The ReXNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2007.00992 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# ReXNet-1.3x model\n\nPretrained on ImageNette. The ReXNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# ReXNet-1.5x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/rexnet1_5x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/rexnet1_5x
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.00992"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2007.00992 #license-apache-2.0 #endpoints_compatible #region-us
|
# ReXNet-1.5x model
Pretrained on ImageNette. The ReXNet architecture was introduced in this paper.
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# ReXNet-1.5x model\n\nPretrained on ImageNette. The ReXNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2007.00992 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# ReXNet-1.5x model\n\nPretrained on ImageNette. The ReXNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
image-classification
|
transformers
|
# ReXNet-2.0x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/rexnet2_0x").eval()
img = Image.open(path_to_an_image).convert("RGB")  # set path_to_an_image to your input image
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "pytorch", "onnx"], "datasets": ["frgfm/imagenette"]}
|
frgfm/rexnet2_0x
| null |
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.00992"
] |
[] |
TAGS
#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2007.00992 #license-apache-2.0 #endpoints_compatible #region-us
|
# ReXNet-2.0x model
Pretrained on ImageNette. The ReXNet architecture was introduced in this paper.
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and pip/conda are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using pypi as follows:
or using conda:
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:
## Usage instructions
Original paper
Source of this implementation
|
[
"# ReXNet-2.0x model\n\nPretrained on ImageNette. The ReXNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
[
"TAGS\n#transformers #pytorch #onnx #image-classification #dataset-frgfm/imagenette #arxiv-2007.00992 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# ReXNet-2.0x model\n\nPretrained on ImageNette. The ReXNet architecture was introduced in this paper.",
"## Model description\n\nThe core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.",
"## Installation",
"### Prerequisites\n\nPython 3.6 (or higher) and pip/conda are required to install Holocron.",
"### Latest stable release\n\nYou can install the last stable release of the package using pypi as follows:\n\n\n\nor using conda:",
"### Developer mode\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install Git first)*:",
"## Usage instructions\n\n\n\n\nOriginal paper\n\n\n\nSource of this implementation"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ted_mt-Spanish-to-Italian
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-it](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) on the new_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
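Pending more details, here is a minimal, hedged usage sketch with the `transformers` pipeline API; the checkpoint name is the one this card is published under, and the example sentence is arbitrary.
```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned Marian checkpoint and translate a Spanish sentence to Italian.
translator = pipeline("translation", model="frtna/ted_mt-Spanish-to-Italian")
result = translator("La charla trata sobre el cambio climático.")
print(result[0]["translation_text"])
```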
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
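For reference, a hedged sketch of how these settings map onto `Seq2SeqTrainingArguments`; the original training script is not included in this card, and the Adam betas/epsilon listed above are the `Trainer` defaults, so they are not set explicitly here.
```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the reported hyperparameters (output_dir is an arbitrary choice).
training_args = Seq2SeqTrainingArguments(
    output_dir="ted_mt-Spanish-to-Italian",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```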
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| No log | 1.0 | 46 | 1.4873 | 29.6133 | 26.9081 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["new_dataset"], "model-index": [{"name": "ted_mt-Spanish-to-Italian", "results": []}]}
|
frtna/ted_mt-Spanish-to-Italian
| null |
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:new_dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-new_dataset #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
ted\_mt-Spanish-to-Italian
==========================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-es-it on the new\_dataset dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0
* Pytorch 1.11.0
* Datasets 2.0.0
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.11.0\n* Datasets 2.0.0\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-new_dataset #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.11.0\n* Datasets 2.0.0\n* Tokenizers 0.11.6"
] |
null | null |
# Fasttext
2 million word vectors trained with subword information on Common Crawl (600B tokens).
Read more:
* https://fasttext.cc/docs/en/english-vectors.html
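These are the vectors published on fasttext.cc; a minimal gensim loading sketch is shown below. The filename is the one used in the upstream fastText release (`crawl-300d-2M-subword.zip`, unzipped) and is an assumption here.
```python
from gensim.models import KeyedVectors

# Hedged sketch: load the text-format vectors from the official fastText release.
wv = KeyedVectors.load_word2vec_format("crawl-300d-2M-subword.vec", binary=False)
print(wv.most_similar("embedding", topn=5))
```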
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/fasttext-crawl-subwords-300
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Fasttext
2 million word vectors trained with subword information on Common Crawl (600B tokens).
Read more:
* URL
|
[
"# Fasttext\n\n2 million word vectors trained with subword information on Common Crawl (600B tokens).\n\nRead more:\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Fasttext\n\n2 million word vectors trained with subword information on Common Crawl (600B tokens).\n\nRead more:\n* URL"
] |
null | null |
# Fasttext
1 million word vectors trained on Wikipedia 2017, UMBC webbase corpus and statmt.org news dataset (16B tokens).
Read more:
* https://fasttext.cc/docs/en/english-vectors.html
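A hedged loading sketch via gensim's downloader API, assuming the gensim-data package of the same name:
```python
import gensim.downloader as api

# Hedged sketch: gensim-data ships a package under the same name as this model.
wv = api.load("fasttext-wiki-news-subwords-300")
print(wv.most_similar("language", topn=5))
```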
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/fasttext-wiki-news-subwords-300
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Fasttext
1 million word vectors trained on Wikipedia 2017, UMBC webbase corpus and URL news dataset (16B tokens).
Read more:
* URL
|
[
"# Fasttext\n\n1 million word vectors trained on Wikipedia 2017, UMBC webbase corpus and URL news dataset (16B tokens).\n\nRead more:\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Fasttext\n\n1 million word vectors trained on Wikipedia 2017, UMBC webbase corpus and URL news dataset (16B tokens).\n\nRead more:\n* URL"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
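A hedged usage sketch via gensim's downloader API (the matching gensim-data package name is assumed):
```python
import gensim.downloader as api

# Hedged sketch: load the 100-dimensional Twitter GloVe vectors and query a similarity.
glove = api.load("glove-twitter-100")
print(glove.similarity("cat", "dog"))
```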
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/glove-twitter-100
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
|
[
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/glove-twitter-200
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
|
[
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/glove-twitter-25
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
|
[
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/glove-twitter-50
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
|
[
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/glove-wiki-gigaword-100
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
|
[
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/glove-wiki-gigaword-200
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
|
[
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/glove-wiki-gigaword-300
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
|
[
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/glove-wiki-gigaword-50
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
|
[
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Paragram Embeddings
Towards Universal Paraphrastic Sentence Embeddings (25 dimensions)
Read more:
* https://www.cs.cmu.edu/~jwieting/
* https://www.cs.cmu.edu/~jwieting/wieting2016ICLR.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/paragram-25
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Paragram Embeddings
Towards Universal Paraphrastic Sentence Embeddings (25 dimensions)
Read more:
* URL
* URL
|
[
"# Paragram Embeddings \n\nTowards Universal Paraphrastic Sentence Embeddings (25 dimensions)\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Paragram Embeddings \n\nTowards Universal Paraphrastic Sentence Embeddings (25 dimensions)\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Paragram Embeddings
300 dimensional Paragram embeddings tuned on SimLex999 dataset
Read more:
* https://www.cs.cmu.edu/~jwieting/
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/paragram-300-sl999
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Paragram Embeddings
300 dimensional Paragram embeddings tuned on SimLex999 dataset
Read more:
* URL
|
[
"# Paragram Embeddings \n\n300 dimensional Paragram embeddings tuned on SimLex999 dataset\n\nRead more:\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Paragram Embeddings \n\n300 dimensional Paragram embeddings tuned on SimLex999 dataset\n\nRead more:\n* URL"
] |
null | null |
# Paragram Embeddings
300 dimensional Paragram embeddings tuned on WordSim353 dataset
Read more:
* https://www.cs.cmu.edu/~jwieting/
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/paragram-300-ws353
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Paragram Embeddings
300 dimensional Paragram embeddings tuned on WordSim353 dataset
Read more:
* URL
|
[
"# Paragram Embeddings \n\n300 dimensional Paragram embeddings tuned on WordSim353 dataset\n\nRead more:\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Paragram Embeddings \n\n300 dimensional Paragram embeddings tuned on WordSim353 dataset\n\nRead more:\n* URL"
] |
null | null |
# Paragram Embeddings
Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations (300 dimensions)
Read more:
* https://www.cs.cmu.edu/~jwieting/
* https://www.cs.cmu.edu/~jwieting/wieting2017Millions.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/paranmt-300
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#glove #gensim #fse #region-us
|
# Paragram Embeddings
Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations (300 dimensions)
Read more:
* URL
* URL
|
[
"# Paragram Embeddings \n\nPushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations (300 dimensions)\n\nRead more:\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #region-us \n",
"# Paragram Embeddings \n\nPushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations (300 dimensions)\n\nRead more:\n* URL\n* URL"
] |
null | null |
# Word2Vec
Pre-trained vectors trained on a part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality'
Read more:
* https://code.google.com/archive/p/word2vec/
* https://arxiv.org/abs/1301.3781
* https://arxiv.org/abs/1310.4546
* https://www.microsoft.com/en-us/research/publication/linguistic-regularities-in-continuous-space-word-representations/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F189726%2Frvecs.pdf
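A hedged sketch reproducing the word-analogy behaviour described in the linked papers, assuming the matching gensim-data package name (this is a large download):
```python
import gensim.downloader as api

# Hedged sketch: the classic king - man + woman analogy from the linked papers.
w2v = api.load("word2vec-google-news-300")
print(w2v.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```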
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/word2vec-google-news-300
| null |
[
"glove",
"gensim",
"fse",
"arxiv:1301.3781",
"arxiv:1310.4546",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1301.3781",
"1310.4546"
] |
[] |
TAGS
#glove #gensim #fse #arxiv-1301.3781 #arxiv-1310.4546 #has_space #region-us
|
# Word2Vec
Pre-trained vectors trained on a part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality'
Read more:
* URL
* URL
* URL
* URL
|
[
"# Word2Vec \n\nPre-trained vectors trained on a part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality' \n\nRead more:\n* URL\n* URL\n* URL\n* URL"
] |
[
"TAGS\n#glove #gensim #fse #arxiv-1301.3781 #arxiv-1310.4546 #has_space #region-us \n",
"# Word2Vec \n\nPre-trained vectors trained on a part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality' \n\nRead more:\n* URL\n* URL\n* URL\n* URL"
] |
text-generation
|
transformers
|
# Bully Maguire demo bot
|
{"tags": ["conversational"]}
|
ftnvir/DialoGPT-medium-bullyMaguire
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Bully Maguire demo bot
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-to-speech
|
espnet
|
This model was trained by ftshijt using the aishell3/tts1 recipe in <a href="https://github.com/espnet/espnet/">espnet</a>.
<p> </p>
<ul>
<li><strong>Python API</strong><pre><code class="language-python">See https://github.com/espnet/espnet_model_zoo</code></pre></li>
<li><strong>Evaluate in the recipe</strong><pre>
<code class="language-bash">
See ESPNet repo for how to use pre-trained models
</pre></li>
<li><strong>Config</strong><pre><code>config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 3750000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 240000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_no_dev/text
- text
- text
- - dump/raw/train_no_dev/wav.scp
- speech
- sound
- - dump/xvector/train_no_dev/xvector.scp
- spembs
- kaldi_ark
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/xvector/dev/xvector.scp
- spembs
- kaldi_ark
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ''
- d
- sh
- j
- i4
- zh
- l
- x
- e
- b
- g
- i1
- h
- q
- m
- u4
- t
- z
- ch
- i3
- i2
- f
- s
- n
- r
- ian4
- e4
- ong1
- en2
- ai4
- k
- ing2
- a1
- iou3
- uo3
- ao4
- u3
- ui4
- p
- e2
- an1
- eng2
- c
- in1
- ai2
- an4
- ian2
- ing1
- ai3
- ang4
- ao3
- ian1
- uo4
- ian3
- iao4
- ang1
- u2
- ü4
- u1
- a4
- eng1
- ing4
- üan2
- ie4
- en1
- iu4
- uei4
- ou4
- er4
- e1
- ei4
- an3
- ong2
- uo2
- ang3
- ou1
- ou3
- ong4
- eng4
- an2
- iang4
- a3
- iang1
- ia1
- iao1
- uan4
- ia4
- iu3
- ang2
- uo1
- ei3
- e3
- in4
- iang3
- ü1
- uan1
- en3
- iao3
- ie3
- ao1
- ai1
- ü2
- ing3
- er2
- ü3
- uan3
- üe4
- in3
- en
- ei2
- üe2
- ie2
- en4
- ua4
- in2
- iu2
- uan2
- a2
- ie1
- ou2
- ui1
- iang2
- ong3
- i
- uang3
- eng3
- ün4
- uang4
- uai4
- iong4
- v3
- iou2
- ui2
- un1
- üan4
- uang1
- ei1
- uang2
- o2
- a
- ao2
- iao2
- ui3
- un4
- o1
- ua2
- un2
- uen2
- iu1
- v4
- ua1
- uei1
- üan3
- ün1
- üe1
- ün2
- uen4
- uei3
- uei2
- un3
- iou4
- o4
- er3
- uen1
- iong3
- iou1
- ia3
- üan1
- ia2
- iong1
- üe3
- uen3
- ve4
- iong2
- uai2
- uai1
- ua3
- ün3
- er
- uai3
- ia
- o3
- v2
- o
- ueng1
- ei
- '2'
- ua
- io1
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pypinyin_g2p_phone
feats_extract: fbank
feats_extract_conf:
n_fft: 2048
hop_length: 300
win_length: 1200
fs: 24000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz
tts: tacotron2
tts_conf:
embed_dim: 512
elayers: 1
eunits: 512
econv_layers: 3
econv_chans: 512
econv_filts: 5
atype: location
adim: 512
aconv_chans: 32
aconv_filts: 15
cumulate_att_w: true
dlayers: 2
dunits: 1024
prenet_layers: 2
prenet_units: 256
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
output_activation: null
use_batch_norm: true
use_concate: true
use_residual: false
spk_embed_dim: 512
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
dropout_rate: 0.5
zoneout_rate: 0.1
reduction_factor: 1
use_masking: true
bce_pos_weight: 10.0
use_guided_attn_loss: true
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: false</code></pre></li>
</ul>
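A hedged Python sketch of the API pointed to above. The <code>espnet2</code> entry point, the repo-id resolution and the output keys are assumptions based on the espnet_model_zoo documentation, and this recipe expects a 512-dimensional x-vector speaker embedding (<code>spembs</code>) at inference time.
<pre><code class="language-python">import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Hedged sketch: load this card's checkpoint and synthesize one utterance.
tts = Text2Speech.from_pretrained(
    "ftshijt/ESPnet2_pretrained_model_ftshijt_aishell3_tts_train_raw_phn_pypinyin_g2p_phone_train.loss.best"
)
spembs = np.load("reference_xvector.npy")  # hypothetical path to a 512-dim x-vector
out = tts("你好,欢迎使用语音合成。", spembs=spembs)
sf.write("out.wav", out["wav"].numpy(), tts.fs)
</code></pre>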
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["aishell3"], "inference": false}
|
ftshijt/ESPnet2_pretrained_model_ftshijt_aishell3_tts_train_raw_phn_pypinyin_g2p_phone_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:aishell3",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-aishell3 #license-cc-by-4.0 #region-us
|
This model was trained by ftshijt using aishell3/tts1 recipe in <a href="URL
<p> </p>
<ul>
<li><strong>Python API</strong><pre><code class="language-python">See URL
<li><strong>Evaluate in the recipe</strong><pre>
<code class="language-bash">
See ESPNet repo for how to use pre-trained models
</pre></li>
<li><strong>Config</strong><pre><code>config: conf/URL
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 3750000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 240000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_no_dev/text
- text
- text
- - dump/raw/train_no_dev/URL
- speech
- sound
- - dump/xvector/train_no_dev/URL
- spembs
- kaldi_ark
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/URL
- speech
- sound
- - dump/xvector/dev/URL
- spembs
- kaldi_ark
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ''
- d
- sh
- j
- i4
- zh
- l
- x
- e
- b
- g
- i1
- h
- q
- m
- u4
- t
- z
- ch
- i3
- i2
- f
- s
- n
- r
- ian4
- e4
- ong1
- en2
- ai4
- k
- ing2
- a1
- iou3
- uo3
- ao4
- u3
- ui4
- p
- e2
- an1
- eng2
- c
- in1
- ai2
- an4
- ian2
- ing1
- ai3
- ang4
- ao3
- ian1
- uo4
- ian3
- iao4
- ang1
- u2
- ü4
- u1
- a4
- eng1
- ing4
- üan2
- ie4
- en1
- iu4
- uei4
- ou4
- er4
- e1
- ei4
- an3
- ong2
- uo2
- ang3
- ou1
- ou3
- ong4
- eng4
- an2
- iang4
- a3
- iang1
- ia1
- iao1
- uan4
- ia4
- iu3
- ang2
- uo1
- ei3
- e3
- in4
- iang3
- ü1
- uan1
- en3
- iao3
- ie3
- ao1
- ai1
- ü2
- ing3
- er2
- ü3
- uan3
- üe4
- in3
- en
- ei2
- üe2
- ie2
- en4
- ua4
- in2
- iu2
- uan2
- a2
- ie1
- ou2
- ui1
- iang2
- ong3
- i
- uang3
- eng3
- ün4
- uang4
- uai4
- iong4
- v3
- iou2
- ui2
- un1
- üan4
- uang1
- ei1
- uang2
- o2
- a
- ao2
- iao2
- ui3
- un4
- o1
- ua2
- un2
- uen2
- iu1
- v4
- ua1
- uei1
- üan3
- ün1
- üe1
- ün2
- uen4
- uei3
- uei2
- un3
- iou4
- o4
- er3
- uen1
- iong3
- iou1
- ia3
- üan1
- ia2
- iong1
- üe3
- uen3
- ve4
- iong2
- uai2
- uai1
- ua3
- ün3
- er
- uai3
- ia
- o3
- v2
- o
- ueng1
- ei
- '2'
- ua
- io1
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pypinyin_g2p_phone
feats_extract: fbank
feats_extract_conf:
n_fft: 2048
hop_length: 300
win_length: 1200
fs: 24000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz
tts: tacotron2
tts_conf:
embed_dim: 512
elayers: 1
eunits: 512
econv_layers: 3
econv_chans: 512
econv_filts: 5
atype: location
adim: 512
aconv_chans: 32
aconv_filts: 15
cumulate_att_w: true
dlayers: 2
dunits: 1024
prenet_layers: 2
prenet_units: 256
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
output_activation: null
use_batch_norm: true
use_concate: true
use_residual: false
spk_embed_dim: 512
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
dropout_rate: 0.5
zoneout_rate: 0.1
reduction_factor: 1
use_masking: true
bce_pos_weight: 10.0
use_guided_attn_loss: true
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: false</code></pre></li>
</ul>
|
[] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-aishell3 #license-cc-by-4.0 #region-us \n"
] |
text-to-speech
|
espnet
|
This model was trained by ftshijt using the thchs30/tts1 recipe in <a href="https://github.com/espnet/espnet/">espnet</a>.
<p> </p>
<ul>
<li><strong>Python API</strong><pre><code class="language-python">See https://github.com/espnet/espnet_model_zoo</code></pre></li>
<li><strong>Evaluate in the recipe</strong><pre>
<code class="language-bash">Please see ESPNet for how to use pre-trained models
</pre></li>
<li><strong>Config</strong><pre><code>config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 3750000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/text
- text
- text
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/xvector/train/xvector.scp
- spembs
- kaldi_ark
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/xvector/dev/xvector.scp
- spembs
- kaldi_ark
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ''
- d
- sh
- j
- zh
- l
- i4
- x
- b
- g
- h
- e
- q
- t
- m
- ch
- i1
- z
- u4
- i2
- i3
- n
- f
- s
- r
- k
- c
- p
- ai4
- e4
- a1
- an4
- ian4
- ing2
- u3
- ian2
- ong1
- e2
- in1
- eng2
- ui4
- ao4
- u2
- iao4
- üan2
- en2
- an1
- u1
- ai2
- ao3
- ing4
- eng1
- iou3
- ü4
- uo4
- üe4
- ong2
- ian1
- ing1
- uo3
- ie4
- ang1
- uei4
- ang4
- an2
- a4
- ou4
- ei4
- uai4
- ie3
- ang3
- ong4
- ai3
- ü2
- uo2
- an3
- ang2
- ou3
- er2
- ou1
- uo1
- en1
- ia1
- ü3
- uan1
- in2
- iong4
- ian3
- iang3
- a3
- iang2
- ia4
- ü1
- uan4
- iao3
- iang4
- uen2
- iang1
- uan3
- ai1
- ie2
- ei3
- uan2
- uang2
- in4
- üe2
- ao1
- eng3
- iu4
- iao1
- er4
- iu2
- in3
- un1
- uang1
- eng4
- a2
- uang3
- en3
- uang4
- ong3
- ing3
- e3
- ei2
- ou2
- ao2
- i
- ün4
- uei2
- ua4
- iou4
- ui1
- ua1
- en4
- ün2
- iao2
- ie1
- iou2
- iu3
- ün1
- üan4
- en
- ei1
- o2
- un4
- ui3
- iu1
- üan3
- e1
- v3
- ua2
- ia2
- ui2
- un2
- o4
- un3
- er3
- ia3
- iong1
- uei3
- o1
- üe1
- üan1
- iong3
- v4
- iong2
- uen4
- uai2
- uei1
- iou1
- a
- ua3
- uen1
- o3
- ueng1
- uai1
- uen3
- üe3
- ou
- uai3
- ve4
- er
- ün3
- o
- ua
- ia
- ' l ='
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pypinyin_g2p_phone
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 16000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz
tts: tacotron2
tts_conf:
embed_dim: 512
elayers: 1
eunits: 512
econv_layers: 3
econv_chans: 512
econv_filts: 5
atype: location
adim: 512
aconv_chans: 32
aconv_filts: 15
cumulate_att_w: true
dlayers: 2
dunits: 1024
prenet_layers: 2
prenet_units: 256
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
output_activation: null
use_batch_norm: true
use_concate: true
use_residual: false
spk_embed_dim: 512
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
dropout_rate: 0.5
zoneout_rate: 0.1
reduction_factor: 1
use_masking: true
bce_pos_weight: 10.0
use_guided_attn_loss: true
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: false</code></pre></li>
</ul>
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["thchs30"], "inference": false}
|
ftshijt/ESPnet2_pretrained_model_ftshijt_thchs30_tts_train_raw_phn_pypinyin_g2p_phone_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:thchs30",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-thchs30 #license-cc-by-4.0 #region-us
|
This model was trained by ftshijt using thchs30/tts1 recipe in <a href="URL
<p> </p>
<ul>
<li><strong>Python API</strong><pre><code class="language-python">See URL
<li><strong>Evaluate in the recipe</strong><pre>
<code class="language-bash">Please see ESPNet for how to use pre-trained model
</pre></li>
<li><strong>Config</strong><pre><code>config: conf/URL
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 3750000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/text
- text
- text
- - dump/raw/train/URL
- speech
- sound
- - dump/xvector/train/URL
- spembs
- kaldi_ark
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/URL
- speech
- sound
- - dump/xvector/dev/URL
- spembs
- kaldi_ark
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ''
- d
- sh
- j
- zh
- l
- i4
- x
- b
- g
- h
- e
- q
- t
- m
- ch
- i1
- z
- u4
- i2
- i3
- n
- f
- s
- r
- k
- c
- p
- ai4
- e4
- a1
- an4
- ian4
- ing2
- u3
- ian2
- ong1
- e2
- in1
- eng2
- ui4
- ao4
- u2
- iao4
- üan2
- en2
- an1
- u1
- ai2
- ao3
- ing4
- eng1
- iou3
- ü4
- uo4
- üe4
- ong2
- ian1
- ing1
- uo3
- ie4
- ang1
- uei4
- ang4
- an2
- a4
- ou4
- ei4
- uai4
- ie3
- ang3
- ong4
- ai3
- ü2
- uo2
- an3
- ang2
- ou3
- er2
- ou1
- uo1
- en1
- ia1
- ü3
- uan1
- in2
- iong4
- ian3
- iang3
- a3
- iang2
- ia4
- ü1
- uan4
- iao3
- iang4
- uen2
- iang1
- uan3
- ai1
- ie2
- ei3
- uan2
- uang2
- in4
- üe2
- ao1
- eng3
- iu4
- iao1
- er4
- iu2
- in3
- un1
- uang1
- eng4
- a2
- uang3
- en3
- uang4
- ong3
- ing3
- e3
- ei2
- ou2
- ao2
- i
- ün4
- uei2
- ua4
- iou4
- ui1
- ua1
- en4
- ün2
- iao2
- ie1
- iou2
- iu3
- ün1
- üan4
- en
- ei1
- o2
- un4
- ui3
- iu1
- üan3
- e1
- v3
- ua2
- ia2
- ui2
- un2
- o4
- un3
- er3
- ia3
- iong1
- uei3
- o1
- üe1
- üan1
- iong3
- v4
- iong2
- uen4
- uai2
- uei1
- iou1
- a
- ua3
- uen1
- o3
- ueng1
- uai1
- uen3
- üe3
- ou
- uai3
- ve4
- er
- ün3
- o
- ua
- ia
- ' l ='
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pypinyin_g2p_phone
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 16000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz
tts: tacotron2
tts_conf:
embed_dim: 512
elayers: 1
eunits: 512
econv_layers: 3
econv_chans: 512
econv_filts: 5
atype: location
adim: 512
aconv_chans: 32
aconv_filts: 15
cumulate_att_w: true
dlayers: 2
dunits: 1024
prenet_layers: 2
prenet_units: 256
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
output_activation: null
use_batch_norm: true
use_concate: true
use_residual: false
spk_embed_dim: 512
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
dropout_rate: 0.5
zoneout_rate: 0.1
reduction_factor: 1
use_masking: true
bce_pos_weight: 10.0
use_guided_attn_loss: true
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: false</code></pre></li>
</ul>
|
[] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-thchs30 #license-cc-by-4.0 #region-us \n"
] |
null | null |
https://vrip.unmsm.edu.pe/forum/profile/liexylezzy/
https://vrip.unmsm.edu.pe/forum/profile/ellindanatasya/
https://vrip.unmsm.edu.pe/forum/profile/oploscgv/
https://vrip.unmsm.edu.pe/forum/profile/Zackoplos/
https://vrip.unmsm.edu.pe/forum/profile/unholyzulk/
https://vrip.unmsm.edu.pe/forum/profile/aurorarezash/
|
{}
|
fullshowbox/DSADAWF
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
URL
URL
URL
URL
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
https://community.afpglobal.org/network/members/profile?UserKey=fb4fdcef-dde4-4258-a423-2159545d84c1
https://community.afpglobal.org/network/members/profile?UserKey=e6ccc088-b709-45ec-b61e-4d56088acbda
https://community.afpglobal.org/network/members/profile?UserKey=ba280059-0890-4510-81d0-a79522b75ac8
https://community.afpglobal.org/network/members/profile?UserKey=799ba769-6e99-4a6a-a173-4f1b817e978c
https://community.afpglobal.org/network/members/profile?UserKey=babb84d7-e91a-4972-b26a-51067c66d793
https://community.afpglobal.org/network/members/profile?UserKey=8e4656bc-8d0d-44e1-b280-e68a2ace9353
https://community.afpglobal.org/network/members/profile?UserKey=8e7b41a8-9bed-4cb0-9021-a164b0aa6dd3
https://community.afpglobal.org/network/members/profile?UserKey=e4f38596-d772-4fbe-9e93-9aef5618f26e
https://community.afpglobal.org/network/members/profile?UserKey=18221e49-74ba-4155-ac1e-6f184bfb2398
https://community.afpglobal.org/network/members/profile?UserKey=ef4391e8-03df-467f-bf3f-4a45087817eb
https://community.afpglobal.org/network/members/profile?UserKey=832774fd-a035-421a-8236-61cf45a7747d
https://community.afpglobal.org/network/members/profile?UserKey=9f05cd73-b75c-4820-b60a-5df6357b2af9
https://community.afpglobal.org/network/members/profile?UserKey=c1727992-5024-4321-b0c9-ecc6f51e6532
https://www.hybrid-analysis.com/sample/255948e335dd9f873d11bf0224f8d180cd097509d23d27506292c22443fa92b8
https://www.facebook.com/PS5Giveaways2021
https://cgvmovie.cookpad-blog.jp/articles/589986
https://myanimelist.net/blog.php?eid=850892
https://comicvine.gamespot.com/profile/full-tv-free/about-me/
https://pantip.com/topic/40658194
|
{}
|
fullshowbox/full-tv-free
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
https://volunteer.alz.org/network/members/profile?UserKey=f4774542-39b3-4cfd-8c21-7b834795f7d7
https://volunteer.alz.org/network/members/profile?UserKey=05a00b90-f854-45fb-9a3a-7420144d290c
https://volunteer.alz.org/network/members/profile?UserKey=45cceddd-29b9-4c6c-8612-e2a16aaa391a
https://volunteer.alz.org/network/members/profile?UserKey=ae3c28f9-72a3-4af5-bd50-3b2ea2c0d3a3
https://volunteer.alz.org/network/members/profile?UserKey=7ab8e28e-e31f-4906-ab06-84b9ea3a880f
https://volunteer.alz.org/network/members/profile?UserKey=1b31fc90-e18e-4ef6-81f0-5c0b55fb95a3
https://volunteer.alz.org/network/members/profile?UserKey=23971b11-04ad-4eb4-abc5-6e659c6b071c
123movies-watch-online-movie-full-free-2021
https://myanimelist.net/blog.php?eid=849353
https://comicvine.gamespot.com/profile/nacenetwork21/about-me/
https://pantip.com/topic/40639721
|
{}
|
fullshowbox/nacenetwork21
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
URL
URL
URL
URL
URL
URL
123movies-watch-online-movie-full-free-2021
URL
URL
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
https://www.nace.org/network/members/profile?UserKey=461a690a-bff6-4e4c-be63-ea8e39264459
https://www.nace.org/network/members/profile?UserKey=b4a6a66a-fb8a-4f2b-8af9-04f003ad9d46
https://www.nace.org/network/members/profile?UserKey=24544ab2-551d-42aa-adbe-7a1c1d68fd9c
https://www.nace.org/network/members/profile?UserKey=3e8035d5-056a-482d-9010-9883e5990f4a
https://www.nace.org/network/members/profile?UserKey=d7241c69-28c4-4146-a077-a00cc2c9ccf5
https://www.nace.org/network/members/profile?UserKey=2c58c2fb-13a4-4e5a-b044-f467bb295d83
https://www.nace.org/network/members/profile?UserKey=dd8a290c-e53a-4b56-9a17-d35dbcb6b8bd
https://www.nace.org/network/members/profile?UserKey=0e96a1af-91f4-496a-af02-6d753a1bbded
|
{}
|
fullshowbox/networkprofile
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
URL
URL
URL
URL
URL
URL
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
https://ragbrai.com/groups/hd-movie-watch-french-exit-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-nobody-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-voyagers-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-godzilla-vs-kong-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-raya-and-the-last-dragon-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-mortal-kombat-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-the-father-2021-full-movie-online-for-free/
|
{}
|
fullshowbox/ragbrai
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
URL
URL
URL
URL
URL
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
feature-extraction
|
transformers
|
# Funnel Transformer intermediate model (B6-6-6 without decoder)
Pretrained model on the English language using an objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `intermediate` model in that case.
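To make the note concrete, a hedged sketch that compares the input and output sequence lengths (exact token counts depend on the tokenizer):
```python
from transformers import FunnelTokenizer, FunnelBaseModel

# Hedged illustration: the encoder-only variant pools the sequence between blocks, so the
# output length is roughly a quarter of the number of input tokens.
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
inputs = tokenizer("A short sentence to inspect the compressed sequence length.", return_tensors="pt")
outputs = model(**inputs)
print(inputs["input_ids"].shape[1], outputs.last_hidden_state.shape[1])
```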
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
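As the note above says, this checkpoint has no decoder, so the returned hidden states are pooled to roughly one fourth of the input length. A quick way to see this in PyTorch (a sketch added for illustration, not part of the original usage snippet):
```python
import torch
from transformers import FunnelTokenizer, FunnelBaseModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

print(encoded_input["input_ids"].shape)   # (1, input_length)
print(output.last_hidden_state.shape)     # (1, ~input_length / 4, hidden_size)
```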
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/intermediate-base
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us
|
# Funnel Transformer intermediate model (B6-6-6 without decoder)
Pretrained model on English language using a similar objective as ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
Note: This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the 'intermediate' model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer intermediate model (B6-6-6 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'intermediate' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Funnel Transformer intermediate model (B6-6-6 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'intermediate' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer intermediate model (B6-6-6 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = FunnelModel.from_pretrained("funnel-transformer/intermediate")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = TFFunnelModel.from_pretrained("funnel-transformer/intermediate")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
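Because this checkpoint includes the decoder, the hidden states are upsampled back to one vector per input token, unlike the `intermediate-base` checkpoint. A short illustrative sketch (not part of the original card):
```python
import torch
from transformers import FunnelTokenizer, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = FunnelModel.from_pretrained("funnel-transformer/intermediate")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

# With the decoder, the sequence dimension matches the tokenized input,
# whereas the "-base" checkpoint returns roughly a quarter of it.
print(encoded_input["input_ids"].shape)   # (1, input_length)
print(output.last_hidden_state.shape)     # (1, input_length, hidden_size)
```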
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/intermediate
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us
|
# Funnel Transformer intermediate model (B6-6-6 with decoder)
Pretrained model on English language using a similar objective as ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer intermediate model (B6-6-6 with decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Funnel Transformer intermediate model (B6-6-6 with decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer large model (B8-8-8 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `large` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
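As a deliberately tiny illustration of the "standard classifier on top of extracted features" idea mentioned above, the sketch below uses the first hidden state as a sentence summary and scikit-learn as the classifier. The toy sentences, labels, and the choice of the first position are assumptions, not part of the original card:
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import FunnelTokenizer, FunnelBaseModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base")
model.eval()

sentences = ["I loved this movie.", "This was a waste of time."]  # toy data
labels = [1, 0]                                                    # toy labels

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden = model(**encoded).last_hidden_state   # (batch, pooled_len, hidden_size)

features = hidden[:, 0].numpy()                   # first position as a sentence summary
clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```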
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/large-base
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us
|
# Funnel Transformer large model (B8-8-8 without decoder)
Pretrained model on English language using a similar objective as ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
Note: This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the 'large' model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer large model (B8-8-8 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'large' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Funnel Transformer large model (B8-8-8 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'large' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer large model (B8-8-8 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = FunnelModel.from_pretrained("funnel-transformer/large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = TFFunnelModel.from_pretrained("funnel-transformer/large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
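If you need one feature vector per text rather than per token, a common recipe is to mean-pool the token features while ignoring padding. The sketch below follows that assumption and is not part of the original card:
```python
import torch
from transformers import FunnelTokenizer, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = FunnelModel.from_pretrained("funnel-transformer/large")
model.eval()

texts = ["Replace me by any text you'd like.", "A second, longer example sentence."]
encoded = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**encoded).last_hidden_state         # (batch, seq_len, hidden_size)

# Average the token features, masking out padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()   # (batch, seq_len, 1)
sentence_vectors = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_vectors.shape)                             # (batch, hidden_size)
```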
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/large
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us
|
# Funnel Transformer large model (B8-8-8 with decoder)
Pretrained model on English language using a similar objective as ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer large model (B8-8-8 with decoder)\n\nPretrained model on English language using a similar objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Funnel Transformer large model (B8-8-8 with decoder)\n\nPretrained model on English language using a similar objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer medium model (B6-3x2-3x2 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `medium` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/medium-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/medium-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
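Since this model is mostly intended to be fine-tuned, here is a hedged sketch of the usual starting point: loading the checkpoint with a randomly initialized sequence classification head. The label count is an assumption, and the training loop, data, and hyperparameters are up to you:
```python
from transformers import FunnelTokenizer, FunnelForSequenceClassification

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = FunnelForSequenceClassification.from_pretrained(
    "funnel-transformer/medium-base", num_labels=2
)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)      # the head is untrained until you fine-tune it
print(outputs.logits.shape)    # (1, 2)
```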
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/medium-base
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us
|
# Funnel Transformer medium model (B6-3x2-3x2 without decoder)
Pretrained model on English language using a similar objective as ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
Note: This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the 'medium' model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer medium model (B6-3x2-3x2 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'medium' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Funnel Transformer medium model (B6-3x2-3x2 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'medium' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer medium model (B6-3x2-3x2 with decoder)
Pretrained model on the English language using a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = FunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = TFFunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/medium
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us
|
# Funnel Transformer medium model (B6-3x2-3x2 with decoder)
Pretrained model on the English language using a similar objective to ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
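A minimal PyTorch sketch for the 'funnel-transformer/medium' checkpoint (this variant keeps the decoder, so the plain FunnelModel class applies):
```python
from transformers import FunnelTokenizer, FunnelModel

# Full model with decoder: hidden states keep the input sequence length.
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = FunnelModel.from_pretrained("funnel-transformer/medium")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```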
and in TensorFlow:
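And the equivalent TensorFlow sketch:
```python
from transformers import FunnelTokenizer, TFFunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = TFFunnelModel.from_pretrained("funnel-transformer/medium")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```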
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer medium model (B6-3x2-3x2 with decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Funnel Transformer medium model (B6-3x2-3x2 with decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer small model (B4-4-4 without decoder)
Pretrained model on the English language using a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `small` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/small-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/small-base
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us
|
# Funnel Transformer small model (B4-4-4 without decoder)
Pretrained model on the English language using a similar objective to ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
Note: This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the 'small' model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
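A minimal PyTorch sketch for the 'funnel-transformer/small-base' checkpoint (note the FunnelBaseModel class, since this variant has no decoder):
```python
from transformers import FunnelTokenizer, FunnelBaseModel

# Decoder-less variant: output hidden states are one fourth of the input length.
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```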
and in TensorFlow:
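And the equivalent TensorFlow sketch:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/small-base")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```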
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer small model (B4-4-4 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'small' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Funnel Transformer small model (B4-4-4 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'small' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer small model (B4-4-4 with decoder)
Pretrained model on the English language using a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/small
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Funnel Transformer small model (B4-4-4 with decoder)
Pretrained model on the English language using a similar objective to ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
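A minimal PyTorch sketch for the 'funnel-transformer/small' checkpoint:
```python
from transformers import FunnelTokenizer, FunnelModel

# With-decoder variant: hidden states keep the full input sequence length.
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelModel.from_pretrained("funnel-transformer/small")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```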
and in TensorFlow:
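And the equivalent TensorFlow sketch:
```python
from transformers import FunnelTokenizer, TFFunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelModel.from_pretrained("funnel-transformer/small")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```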
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer small model (B4-4-4 with decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Funnel Transformer small model (B4-4-4 with decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer xlarge model (B10-10-10 without decoder)
Pretrained model on the English language using a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `xlarge` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/xlarge-base
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us
|
# Funnel Transformer xlarge model (B10-10-10 without decoder)
Pretrained model on the English language using a similar objective to ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
Note: This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the 'xlarge' model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
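A minimal PyTorch sketch for the 'funnel-transformer/xlarge-base' checkpoint (again using FunnelBaseModel, as this variant has no decoder):
```python
from transformers import FunnelTokenizer, FunnelBaseModel

# Decoder-less variant: output hidden states are one fourth of the input length.
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```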
and in TensorFlow:
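And the equivalent TensorFlow sketch:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```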
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer xlarge model (B10-10-10 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'xlarge' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Funnel Transformer xlarge model (B10-10-10 without decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nNote: This model does not contain the decoder, so it ouputs hidden states that have a sequence length of one fourth\nof the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if\nyou need one input per initial token. You should use the 'xlarge' model in that case.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
# Funnel Transformer xlarge model (B10-10-10 with decoder)
Pretrained model on the English language using a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = FunnelModel.from_pretrained("funnel-transformer/xlarge")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = TFFunnelModel.from_pretrained("funnel-transformer/xlarge")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/xlarge
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03236"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Funnel Transformer xlarge model (B10-10-10 with decoder)
Pretrained model on English language using a similar objective to ELECTRA. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Training data
The Funnel Transformer model was pretrained on:
- BookCorpus, a dataset consisting of 11,038 unpublished books,
- English Wikipedia (excluding lists, tables and headers),
- Clue Web, a dataset of 733,019,372 English web pages,
- GigaWord, an archive of newswire text data,
- Common Crawl, a dataset of raw web pages.
### BibTeX entry and citation info
|
[
"# Funnel Transformer xlarge model (B10-10-10 with decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #funnel #feature-extraction #en #dataset-bookcorpus #dataset-wikipedia #dataset-gigaword #arxiv-2006.03236 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Funnel Transformer xlarge model (B10-10-10 with decoder)\n\nPretrained model on English language using a similar objective objective as ELECTRA. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been\nwritten by the Hugging Face team.",
"## Model description\n\nFunnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and\nthe pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model to extract a vector representation of a given text, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on:\n- BookCorpus, a dataset consisting of 11,038 unpublished books,\n- English Wikipedia (excluding lists, tables and headers),\n- Clue Web, a dataset of 733,019,372 English web pages,\n- GigaWord, an archive of newswire text data,\n- Common Crawl, a dataset of raw web pages.",
"### BibTeX entry and citation info"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc-headline
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
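In the absence of documented usage, here is a minimal sketch (assuming the checkpoint works with the standard `summarization` pipeline; the article text below is an invented example):
```python
from transformers import pipeline

headline_generator = pipeline(
    "summarization", model="furyhawk/t5-base-finetuned-bbc-headline"
)
article = "The UK economy grew faster than expected in the last quarter, ..."
# Short max_length because the target side of this checkpoint is a headline.
print(headline_generator(article, max_length=20, min_length=5)[0]["summary_text"])
```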
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 167 | 2.2978 | 31.8313 | 10.3824 | 29.6182 | 29.4336 | 10.3153 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-base-finetuned-bbc-headline", "results": []}]}
|
furyhawk/t5-base-finetuned-bbc-headline
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-base-finetuned-bbc-headline
==============================
This model is a fine-tuned version of t5-base on the None dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 12
* eval\_batch\_size: 12
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
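A minimal generation sketch (the usual T5 `summarize:` prefix is used here as an assumption; the card does not state how the training inputs were formatted, and the article snippet is invented):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("furyhawk/t5-base-finetuned-bbc")
model = AutoModelForSeq2SeqLM.from_pretrained("furyhawk/t5-base-finetuned-bbc")

article = "Shares in the technology sector fell sharply on Monday ..."  # invented snippet
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```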
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 334 | 0.1500 | 24.5024 | 21.4979 | 24.0227 | 24.0303 | 19.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-base-finetuned-bbc", "results": []}]}
|
furyhawk/t5-base-finetuned-bbc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-base-finetuned-bbc
=====================
This model is a fine-tuned version of t5-base on the None dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 6
* eval\_batch\_size: 6
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc-headline
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 167 | 3.6454 | 22.4311 | 5.9878 | 20.118 | 20.482 | 18.9009 |
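ROUGE numbers like those above are typically computed with the `rouge` metric from the Datasets library (a generic illustration with invented strings, not the actual evaluation code; requires the `rouge_score` package):
```python
from datasets import load_metric  # Datasets 1.x-era API

rouge = load_metric("rouge")
predictions = ["uk economy grows faster than expected"]   # placeholder outputs
references = ["uk economy beats growth forecasts"]        # placeholder targets
scores = rouge.compute(predictions=predictions, references=references)
# Report mid F-measures on a 0-100 scale, as in the table above.
print({k: round(v.mid.fmeasure * 100, 4) for k, v in scores.items()})
```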
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-small-finetuned-bbc-headline", "results": []}]}
|
furyhawk/t5-small-finetuned-bbc-headline
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-bbc-headline
===============================
This model is a fine-tuned version of t5-small on the None dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 12
* eval\_batch\_size: 12
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3238
- Rouge1: 21.2266
- Rouge2: 16.0927
- Rougel: 19.6785
- Rougelsum: 19.8849
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
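A hedged reconstruction of the configuration above as `Seq2SeqTrainingArguments` (the output directory and `predict_with_generate` flag are assumptions; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-bbc",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                            # "Native AMP" mixed precision
    predict_with_generate=True,           # assumed, needed for ROUGE / Gen Len
)
```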
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.4882 | 1.0 | 1001 | 0.3238 | 21.2266 | 16.0927 | 19.6785 | 19.8849 | 19.0 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-bbc", "results": []}]}
|
furyhawk/t5-small-finetuned-bbc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-bbc
======================
This model is a fine-tuned version of t5-small on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3238
* Rouge1: 21.2266
* Rouge2: 16.0927
* Rougel: 19.6785
* Rougelsum: 19.8849
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.0
* Pytorch 1.10.0
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.0\n* Pytorch 1.10.0\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.0\n* Pytorch 1.10.0\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
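For reference, the XSum data used by checkpoints like this one is commonly loaded as follows (a generic sketch with the Datasets 1.x-era API, not the training script):
```python
from datasets import load_dataset

xsum = load_dataset("xsum")
print(xsum)  # train / validation / test splits
# Each example pairs a full "document" with a one-sentence "summary".
print(xsum["train"][0]["document"][:200], "->", xsum["train"][0]["summary"])
```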
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 128 | 2.9003 | 19.4784 | 2.8529 | 14.7786 | 15.0614 | 18.9825 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]}
|
furyhawk/t5-small-finetuned-xsum
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-xsum
=======================
This model is a fine-tuned version of t5-small on the xsum dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8575
## Model description
More information needed
## Intended uses & limitations
More information needed
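A minimal usage sketch (assuming the standard `fill-mask` pipeline; the prompt is an arbitrary example):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="fznmhmmd/bert-base-cased-wikitext2")
# BERT-style checkpoints use the [MASK] token for the blank.
for pred in fill_mask("The capital of France is [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.4f}")
```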
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0964 | 1.0 | 2346 | 7.0532 |
| 6.9055 | 2.0 | 4692 | 6.8710 |
| 6.8574 | 3.0 | 7038 | 6.8917 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-wikitext2", "results": []}]}
|
fznmhmmd/bert-base-cased-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-cased-wikitext2
=========================
This model is a fine-tuned version of bert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 6.8575
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8273
- Matthews Correlation: 0.5544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5256 | 1.0 | 535 | 0.5419 | 0.4248 |
| 0.3486 | 2.0 | 1070 | 0.5187 | 0.4999 |
| 0.2406 | 3.0 | 1605 | 0.6580 | 0.5054 |
| 0.1692 | 4.0 | 2140 | 0.7455 | 0.5403 |
| 0.1343 | 5.0 | 2675 | 0.8273 | 0.5544 |
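The Matthews correlation reported above is the standard MCC over binary CoLA acceptability labels; a generic illustration with toy arrays (not the actual evaluation data):
```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 0]  # toy gold labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # toy predictions
# 1.0 means perfect agreement, 0.0 means chance-level; prints 0.5 here.
print(matthews_corrcoef(y_true, y_pred))
```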
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5543972545286807, "name": "Matthews Correlation"}]}]}]}
|
fznmhmmd/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8273
* Matthews Correlation: 0.5544
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1112
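Assuming the reported loss is the mean per-token cross-entropy in nats (the usual `Trainer` convention), the implied evaluation perplexity is roughly:
```python
import math

eval_loss = 6.1112
print(f"perplexity = exp({eval_loss}) ~= {math.exp(eval_loss):.1f}")  # ~= 450.9
```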
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5571 | 1.0 | 2249 | 6.4684 |
| 6.1921 | 2.0 | 4498 | 6.1984 |
| 6.0016 | 3.0 | 6747 | 6.1112 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-wikitext2", "results": []}]}
|
fznmhmmd/gpt2-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gpt2-wikitext2
==============
This model is a fine-tuned version of gpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 6.1112
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-es-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1788
- Wer: 1.0239
## Model description
More information needed
## Intended uses & limitations
More information needed
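A minimal CTC inference sketch (the checkpoint path is a placeholder, since the full hub id is not given here, and the silent dummy waveform stands in for real 16 kHz mono audio):
```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

checkpoint = "path/to/wav2vec2-common_voice-es-demo"  # placeholder path
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

audio = [0.0] * 16_000  # one second of silence as a stand-in waveform
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits       # (batch, time, vocab)
pred_ids = torch.argmax(logits, dim=-1)   # greedy CTC decoding
print(processor.batch_decode(pred_ids))   # decoded transcription(s)
```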
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.02 | 100 | 6.6465 | 1.0 |
| No log | 0.04 | 200 | 3.0150 | 1.0 |
| No log | 0.05 | 300 | 2.8622 | 1.0003 |
| No log | 0.07 | 400 | 0.9506 | 0.9771 |
| 5.1598 | 0.09 | 500 | 0.4883 | 1.0009 |
| 5.1598 | 0.11 | 600 | 0.3893 | 1.0203 |
| 5.1598 | 0.13 | 700 | 0.3417 | 1.0283 |
| 5.1598 | 0.14 | 800 | 0.3352 | 1.0335 |
| 5.1598 | 0.16 | 900 | 0.2987 | 1.0168 |
| 0.3671 | 0.18 | 1000 | 0.2921 | 1.0159 |
| 0.3671 | 0.2 | 1100 | 0.2770 | 1.0096 |
| 0.3671 | 0.22 | 1200 | 0.2790 | 1.0398 |
| 0.3671 | 0.24 | 1300 | 0.2659 | 1.0190 |
| 0.3671 | 0.25 | 1400 | 0.2657 | 1.0528 |
| 0.289 | 0.27 | 1500 | 0.2556 | 1.0301 |
| 0.289 | 0.29 | 1600 | 0.2514 | 1.0193 |
| 0.289 | 0.31 | 1700 | 0.2708 | 1.0699 |
| 0.289 | 0.33 | 1800 | 0.2455 | 1.0723 |
| 0.289 | 0.34 | 1900 | 0.2456 | 1.0100 |
| 0.271 | 0.36 | 2000 | 0.2338 | 1.0533 |
| 0.271 | 0.38 | 2100 | 0.2479 | 1.0128 |
| 0.271 | 0.4 | 2200 | 0.2483 | 1.0386 |
| 0.271 | 0.42 | 2300 | 0.2436 | 1.0528 |
| 0.271 | 0.43 | 2400 | 0.2382 | 1.0476 |
| 0.2634 | 0.45 | 2500 | 0.2329 | 1.0680 |
| 0.2634 | 0.47 | 2600 | 0.2433 | 1.0581 |
| 0.2634 | 0.49 | 2700 | 0.2354 | 1.0641 |
| 0.2634 | 0.51 | 2800 | 0.2318 | 1.0504 |
| 0.2634 | 0.52 | 2900 | 0.2325 | 1.0500 |
| 0.2522 | 0.54 | 3000 | 0.2344 | 1.0380 |
| 0.2522 | 0.56 | 3100 | 0.2244 | 1.0663 |
| 0.2522 | 0.58 | 3200 | 0.2340 | 1.0647 |
| 0.2522 | 0.6 | 3300 | 0.2288 | 1.0538 |
| 0.2522 | 0.61 | 3400 | 0.2212 | 1.0614 |
| 0.2468 | 0.63 | 3500 | 0.2487 | 1.0557 |
| 0.2468 | 0.65 | 3600 | 0.2330 | 1.0510 |
| 0.2468 | 0.67 | 3700 | 0.2308 | 1.0506 |
| 0.2468 | 0.69 | 3800 | 0.2320 | 1.0451 |
| 0.2468 | 0.71 | 3900 | 0.2261 | 1.0701 |
| 0.2505 | 0.72 | 4000 | 0.2281 | 1.0713 |
| 0.2505 | 0.74 | 4100 | 0.2277 | 1.0741 |
| 0.2505 | 0.76 | 4200 | 0.2253 | 1.0814 |
| 0.2505 | 0.78 | 4300 | 0.2215 | 1.0437 |
| 0.2505 | 0.8 | 4400 | 0.2220 | 1.0557 |
| 0.2434 | 0.81 | 4500 | 0.2184 | 1.0533 |
| 0.2434 | 0.83 | 4600 | 0.2222 | 1.0819 |
| 0.2434 | 0.85 | 4700 | 0.2162 | 1.0238 |
| 0.2434 | 0.87 | 4800 | 0.2132 | 1.0457 |
| 0.2434 | 0.89 | 4900 | 0.2068 | 1.0611 |
| 0.2347 | 0.9 | 5000 | 0.2166 | 1.0332 |
| 0.2347 | 0.92 | 5100 | 0.2087 | 1.0433 |
| 0.2347 | 0.94 | 5200 | 0.2100 | 1.0292 |
| 0.2347 | 0.96 | 5300 | 0.2067 | 1.0734 |
| 0.2347 | 0.98 | 5400 | 0.2148 | 1.0279 |
| 0.2333 | 0.99 | 5500 | 0.2125 | 1.0277 |
| 0.2333 | 1.01 | 5600 | 0.2054 | 1.0453 |
| 0.2333 | 1.03 | 5700 | 0.2091 | 1.0557 |
| 0.2333 | 1.05 | 5800 | 0.2086 | 1.0239 |
| 0.2333 | 1.07 | 5900 | 0.2051 | 1.0645 |
| 0.2087 | 1.09 | 6000 | 0.2103 | 1.0240 |
| 0.2087 | 1.1 | 6100 | 0.2145 | 1.0197 |
| 0.2087 | 1.12 | 6200 | 0.2136 | 1.0248 |
| 0.2087 | 1.14 | 6300 | 0.2045 | 1.0443 |
| 0.2087 | 1.16 | 6400 | 0.2089 | 1.0397 |
| 0.2013 | 1.18 | 6500 | 0.2012 | 1.0654 |
| 0.2013 | 1.19 | 6600 | 0.2054 | 1.0414 |
| 0.2013 | 1.21 | 6700 | 0.2081 | 1.0632 |
| 0.2013 | 1.23 | 6800 | 0.2104 | 1.0190 |
| 0.2013 | 1.25 | 6900 | 0.2045 | 1.0813 |
| 0.2092 | 1.27 | 7000 | 0.2096 | 1.0751 |
| 0.2092 | 1.28 | 7100 | 0.2103 | 1.0328 |
| 0.2092 | 1.3 | 7200 | 0.2044 | 1.0011 |
| 0.2092 | 1.32 | 7300 | 0.2089 | 1.0260 |
| 0.2092 | 1.34 | 7400 | 0.2063 | 1.0551 |
| 0.2076 | 1.36 | 7500 | 0.2029 | 1.0075 |
| 0.2076 | 1.37 | 7600 | 0.2040 | 1.0528 |
| 0.2076 | 1.39 | 7700 | 0.2075 | 1.0398 |
| 0.2076 | 1.41 | 7800 | 0.2023 | 1.0231 |
| 0.2076 | 1.43 | 7900 | 0.2049 | 1.0318 |
| 0.2028 | 1.45 | 8000 | 0.2072 | 1.0763 |
| 0.2028 | 1.47 | 8100 | 0.2075 | 1.0762 |
| 0.2028 | 1.48 | 8200 | 0.2052 | 1.0838 |
| 0.2028 | 1.5 | 8300 | 0.2053 | 1.0407 |
| 0.2028 | 1.52 | 8400 | 0.2066 | 1.0266 |
| 0.2025 | 1.54 | 8500 | 0.2037 | 1.0628 |
| 0.2025 | 1.56 | 8600 | 0.2010 | 1.0351 |
| 0.2025 | 1.57 | 8700 | 0.1961 | 1.0812 |
| 0.2025 | 1.59 | 8800 | 0.1963 | 1.0868 |
| 0.2025 | 1.61 | 8900 | 0.2022 | 1.0710 |
| 0.1997 | 1.63 | 9000 | 0.2051 | 1.0764 |
| 0.1997 | 1.65 | 9100 | 0.1987 | 1.0581 |
| 0.1997 | 1.66 | 9200 | 0.2051 | 1.0611 |
| 0.1997 | 1.68 | 9300 | 0.1999 | 1.0808 |
| 0.1997 | 1.7 | 9400 | 0.1972 | 1.0703 |
| 0.1983 | 1.72 | 9500 | 0.1961 | 1.0584 |
| 0.1983 | 1.74 | 9600 | 0.2031 | 1.0938 |
| 0.1983 | 1.75 | 9700 | 0.2019 | 1.0891 |
| 0.1983 | 1.77 | 9800 | 0.2006 | 1.0542 |
| 0.1983 | 1.79 | 9900 | 0.1925 | 1.0627 |
| 0.1961 | 1.81 | 10000 | 0.1976 | 1.0751 |
| 0.1961 | 1.83 | 10100 | 0.2051 | 1.0611 |
| 0.1961 | 1.85 | 10200 | 0.2037 | 1.0656 |
| 0.1961 | 1.86 | 10300 | 0.2025 | 1.0291 |
| 0.1961 | 1.88 | 10400 | 0.1977 | 1.0525 |
| 0.2025 | 1.9 | 10500 | 0.2030 | 1.0670 |
| 0.2025 | 1.92 | 10600 | 0.1980 | 1.0765 |
| 0.2025 | 1.94 | 10700 | 0.1975 | 1.0254 |
| 0.2025 | 1.95 | 10800 | 0.1986 | 1.0636 |
| 0.2025 | 1.97 | 10900 | 0.1956 | 1.0352 |
| 0.2025 | 1.99 | 11000 | 0.1954 | 1.0265 |
| 0.2025 | 2.01 | 11100 | 0.1957 | 1.0752 |
| 0.2025 | 2.03 | 11200 | 0.1943 | 1.0784 |
| 0.2025 | 2.04 | 11300 | 0.1898 | 1.0341 |
| 0.2025 | 2.06 | 11400 | 0.1921 | 1.0301 |
| 0.1805 | 2.08 | 11500 | 0.1910 | 1.0230 |
| 0.1805 | 2.1 | 11600 | 0.1961 | 1.0203 |
| 0.1805 | 2.12 | 11700 | 0.1973 | 1.0776 |
| 0.1805 | 2.13 | 11800 | 0.1876 | 1.0788 |
| 0.1805 | 2.15 | 11900 | 0.1934 | 1.0251 |
| 0.177 | 2.17 | 12000 | 0.1967 | 1.0340 |
| 0.177 | 2.19 | 12100 | 0.1932 | 1.0131 |
| 0.177 | 2.21 | 12200 | 0.1926 | 1.0078 |
| 0.177 | 2.23 | 12300 | 0.1947 | 0.9991 |
| 0.177 | 2.24 | 12400 | 0.1914 | 1.0213 |
| 0.1782 | 2.26 | 12500 | 0.1962 | 0.9882 |
| 0.1782 | 2.28 | 12600 | 0.1960 | 1.0562 |
| 0.1782 | 2.3 | 12700 | 0.2006 | 1.0401 |
| 0.1782 | 2.32 | 12800 | 0.1950 | 1.0688 |
| 0.1782 | 2.33 | 12900 | 0.1920 | 1.0435 |
| 0.1796 | 2.35 | 13000 | 0.1926 | 1.0667 |
| 0.1796 | 2.37 | 13100 | 0.1949 | 1.0859 |
| 0.1796 | 2.39 | 13200 | 0.1932 | 1.0670 |
| 0.1796 | 2.41 | 13300 | 0.1882 | 1.0663 |
| 0.1796 | 2.42 | 13400 | 0.1877 | 1.0760 |
| 0.1775 | 2.44 | 13500 | 0.1893 | 1.0859 |
| 0.1775 | 2.46 | 13600 | 0.1936 | 1.0702 |
| 0.1775 | 2.48 | 13700 | 0.1871 | 1.0414 |
| 0.1775 | 2.5 | 13800 | 0.1917 | 1.0430 |
| 0.1775 | 2.51 | 13900 | 0.1922 | 1.0422 |
| 0.1778 | 2.53 | 14000 | 0.1875 | 1.0585 |
| 0.1778 | 2.55 | 14100 | 0.1876 | 1.0603 |
| 0.1778 | 2.57 | 14200 | 0.1888 | 1.0628 |
| 0.1778 | 2.59 | 14300 | 0.1948 | 1.0782 |
| 0.1778 | 2.6 | 14400 | 0.1942 | 1.0695 |
| 0.1784 | 2.62 | 14500 | 0.1842 | 1.0863 |
| 0.1784 | 2.64 | 14600 | 0.1850 | 1.0543 |
| 0.1784 | 2.66 | 14700 | 0.1824 | 1.0683 |
| 0.1784 | 2.68 | 14800 | 0.1888 | 1.0693 |
| 0.1784 | 2.7 | 14900 | 0.1871 | 1.0175 |
| 0.1753 | 2.71 | 15000 | 0.1889 | 1.0549 |
| 0.1753 | 2.73 | 15100 | 0.1865 | 1.0544 |
| 0.1753 | 2.75 | 15200 | 0.1918 | 1.0726 |
| 0.1753 | 2.77 | 15300 | 0.1964 | 1.0915 |
| 0.1753 | 2.79 | 15400 | 0.1900 | 1.0610 |
| 0.1768 | 2.8 | 15500 | 0.1894 | 1.0763 |
| 0.1768 | 2.82 | 15600 | 0.1882 | 1.0548 |
| 0.1768 | 2.84 | 15700 | 0.1861 | 1.0902 |
| 0.1768 | 2.86 | 15800 | 0.1860 | 1.0551 |
| 0.1768 | 2.88 | 15900 | 0.1879 | 1.0581 |
| 0.1761 | 2.89 | 16000 | 0.1899 | 1.0544 |
| 0.1761 | 2.91 | 16100 | 0.1860 | 1.0530 |
| 0.1761 | 2.93 | 16200 | 0.1894 | 1.0596 |
| 0.1761 | 2.95 | 16300 | 0.1835 | 1.0394 |
| 0.1761 | 2.97 | 16400 | 0.1852 | 1.0445 |
| 0.1754 | 2.98 | 16500 | 0.1847 | 1.0390 |
| 0.1754 | 3.0 | 16600 | 0.1828 | 1.0440 |
| 0.1754 | 3.02 | 16700 | 0.1869 | 1.0560 |
| 0.1754 | 3.04 | 16800 | 0.1882 | 1.0573 |
| 0.1754 | 3.06 | 16900 | 0.1912 | 1.0600 |
| 0.1592 | 3.08 | 17000 | 0.1921 | 1.0529 |
| 0.1592 | 3.09 | 17100 | 0.1881 | 1.0175 |
| 0.1592 | 3.11 | 17200 | 0.1891 | 1.0654 |
| 0.1592 | 3.13 | 17300 | 0.1889 | 1.0687 |
| 0.1592 | 3.15 | 17400 | 0.1916 | 1.0642 |
| 0.1556 | 3.17 | 17500 | 0.1850 | 1.0295 |
| 0.1556 | 3.18 | 17600 | 0.1875 | 1.0273 |
| 0.1556 | 3.2 | 17700 | 0.1894 | 1.0051 |
| 0.1556 | 3.22 | 17800 | 0.1870 | 1.0462 |
| 0.1556 | 3.24 | 17900 | 0.1831 | 1.0308 |
| 0.1557 | 3.26 | 18000 | 0.1878 | 1.0603 |
| 0.1557 | 3.27 | 18100 | 0.1850 | 1.0566 |
| 0.1557 | 3.29 | 18200 | 0.1843 | 1.0629 |
| 0.1557 | 3.31 | 18300 | 0.1886 | 1.0378 |
| 0.1557 | 3.33 | 18400 | 0.1892 | 1.0381 |
| 0.159 | 3.35 | 18500 | 0.1942 | 1.0519 |
| 0.159 | 3.36 | 18600 | 0.1829 | 1.0622 |
| 0.159 | 3.38 | 18700 | 0.1894 | 1.0557 |
| 0.159 | 3.4 | 18800 | 0.1895 | 1.0627 |
| 0.159 | 3.42 | 18900 | 0.1863 | 1.0362 |
| 0.1582 | 3.44 | 19000 | 0.1888 | 1.0491 |
| 0.1582 | 3.46 | 19100 | 0.1854 | 1.0483 |
| 0.1582 | 3.47 | 19200 | 0.1797 | 0.9787 |
| 0.1582 | 3.49 | 19300 | 0.1785 | 1.0086 |
| 0.1582 | 3.51 | 19400 | 0.1797 | 0.9915 |
| 0.1507 | 3.53 | 19500 | 0.1873 | 1.0266 |
| 0.1507 | 3.55 | 19600 | 0.1838 | 1.0299 |
| 0.1507 | 3.56 | 19700 | 0.1817 | 1.0355 |
| 0.1507 | 3.58 | 19800 | 0.1819 | 1.0271 |
| 0.1507 | 3.6 | 19900 | 0.1883 | 1.0248 |
| 0.1601 | 3.62 | 20000 | 0.1823 | 1.0406 |
| 0.1601 | 3.64 | 20100 | 0.1801 | 1.0261 |
| 0.1601 | 3.65 | 20200 | 0.1783 | 1.0329 |
| 0.1601 | 3.67 | 20300 | 0.1857 | 1.0162 |
| 0.1601 | 3.69 | 20400 | 0.1814 | 1.0212 |
| 0.1552 | 3.71 | 20500 | 0.1837 | 1.0232 |
| 0.1552 | 3.73 | 20600 | 0.1843 | 1.0314 |
| 0.1552 | 3.74 | 20700 | 0.1842 | 1.0258 |
| 0.1552 | 3.76 | 20800 | 0.1821 | 1.0479 |
| 0.1552 | 3.78 | 20900 | 0.1864 | 1.0459 |
| 0.1576 | 3.8 | 21000 | 0.1831 | 1.0364 |
| 0.1576 | 3.82 | 21100 | 0.1852 | 1.0271 |
| 0.1576 | 3.83 | 21200 | 0.1865 | 1.0204 |
| 0.1576 | 3.85 | 21300 | 0.1794 | 1.0324 |
| 0.1576 | 3.87 | 21400 | 0.1826 | 1.0315 |
| 0.1585 | 3.89 | 21500 | 0.1824 | 1.0327 |
| 0.1585 | 3.91 | 21600 | 0.1838 | 1.0208 |
| 0.1585 | 3.93 | 21700 | 0.1850 | 1.0199 |
| 0.1585 | 3.94 | 21800 | 0.1841 | 1.0050 |
| 0.1585 | 3.96 | 21900 | 0.1783 | 1.0003 |
| 0.1572 | 3.98 | 22000 | 0.1787 | 1.0115 |
| 0.1572 | 4.0 | 22100 | 0.1810 | 1.0235 |
| 0.1572 | 4.02 | 22200 | 0.1763 | 1.0191 |
| 0.1572 | 4.03 | 22300 | 0.1764 | 1.0332 |
| 0.1572 | 4.05 | 22400 | 0.1794 | 1.0429 |
| 0.1406 | 4.07 | 22500 | 0.1905 | 1.0288 |
| 0.1406 | 4.09 | 22600 | 0.1776 | 1.0244 |
| 0.1406 | 4.11 | 22700 | 0.1782 | 1.0451 |
| 0.1406 | 4.12 | 22800 | 0.1771 | 1.0387 |
| 0.1406 | 4.14 | 22900 | 0.1788 | 1.0435 |
| 0.14 | 4.16 | 23000 | 0.1792 | 1.0421 |
| 0.14 | 4.18 | 23100 | 0.1841 | 1.0241 |
| 0.14 | 4.2 | 23200 | 0.1769 | 1.0546 |
| 0.14 | 4.21 | 23300 | 0.1815 | 1.0602 |
| 0.14 | 4.23 | 23400 | 0.1784 | 1.0369 |
| 0.1394 | 4.25 | 23500 | 0.1809 | 1.0406 |
| 0.1394 | 4.27 | 23600 | 0.1744 | 1.0133 |
| 0.1394 | 4.29 | 23700 | 0.1771 | 1.0214 |
| 0.1394 | 4.31 | 23800 | 0.1765 | 1.0064 |
| 0.1394 | 4.32 | 23900 | 0.1793 | 1.0200 |
| 0.14 | 4.34 | 24000 | 0.1776 | 1.0352 |
| 0.14 | 4.36 | 24100 | 0.1775 | 1.0294 |
| 0.14 | 4.38 | 24200 | 0.1763 | 1.0213 |
| 0.14 | 4.4 | 24300 | 0.1697 | 1.0302 |
| 0.14 | 4.41 | 24400 | 0.1771 | 1.0259 |
| 0.1408 | 4.43 | 24500 | 0.1747 | 1.0409 |
| 0.1408 | 4.45 | 24600 | 0.1769 | 1.0278 |
| 0.1408 | 4.47 | 24700 | 0.1767 | 1.0190 |
| 0.1408 | 4.49 | 24800 | 0.1745 | 1.0281 |
| 0.1408 | 4.5 | 24900 | 0.1738 | 1.0356 |
| 0.1391 | 4.52 | 25000 | 0.1781 | 1.0429 |
| 0.1391 | 4.54 | 25100 | 0.1784 | 1.0076 |
| 0.1391 | 4.56 | 25200 | 0.1771 | 1.0157 |
| 0.1391 | 4.58 | 25300 | 0.1758 | 1.0337 |
| 0.1391 | 4.59 | 25400 | 0.1758 | 1.0466 |
| 0.1398 | 4.61 | 25500 | 0.1724 | 1.0403 |
| 0.1398 | 4.63 | 25600 | 0.1765 | 1.0481 |
| 0.1398 | 4.65 | 25700 | 0.1757 | 1.0320 |
| 0.1398 | 4.67 | 25800 | 0.1814 | 1.0479 |
| 0.1398 | 4.69 | 25900 | 0.1713 | 1.0251 |
| 0.1427 | 4.7 | 26000 | 0.1735 | 1.0340 |
| 0.1427 | 4.72 | 26100 | 0.1765 | 1.0358 |
| 0.1427 | 4.74 | 26200 | 0.1731 | 1.0220 |
| 0.1427 | 4.76 | 26300 | 0.1769 | 1.0261 |
| 0.1427 | 4.78 | 26400 | 0.1747 | 1.0139 |
| 0.1424 | 4.79 | 26500 | 0.1791 | 1.0406 |
| 0.1424 | 4.81 | 26600 | 0.1735 | 1.0497 |
| 0.1424 | 4.83 | 26700 | 0.1710 | 1.0433 |
| 0.1424 | 4.85 | 26800 | 0.1771 | 1.0002 |
| 0.1424 | 4.87 | 26900 | 0.1748 | 1.0046 |
| 0.1419 | 4.88 | 27000 | 0.1794 | 1.0332 |
| 0.1419 | 4.9 | 27100 | 0.1772 | 1.0558 |
| 0.1419 | 4.92 | 27200 | 0.1757 | 1.0477 |
| 0.1419 | 4.94 | 27300 | 0.1735 | 1.0324 |
| 0.1419 | 4.96 | 27400 | 0.1758 | 1.0260 |
| 0.1433 | 4.97 | 27500 | 0.1767 | 1.0422 |
| 0.1433 | 4.99 | 27600 | 0.1695 | 1.0386 |
| 0.1433 | 5.01 | 27700 | 0.1763 | 1.0571 |
| 0.1433 | 5.03 | 27800 | 0.1743 | 1.0367 |
| 0.1433 | 5.05 | 27900 | 0.1804 | 1.0255 |
| 0.1306 | 5.07 | 28000 | 0.1803 | 1.0377 |
| 0.1306 | 5.08 | 28100 | 0.1750 | 1.0552 |
| 0.1306 | 5.1 | 28200 | 0.1743 | 1.0512 |
| 0.1306 | 5.12 | 28300 | 0.1777 | 1.0584 |
| 0.1306 | 5.14 | 28400 | 0.1726 | 1.0374 |
| 0.123 | 5.16 | 28500 | 0.1776 | 1.0439 |
| 0.123 | 5.17 | 28600 | 0.1759 | 1.0682 |
| 0.123 | 5.19 | 28700 | 0.1724 | 1.0511 |
| 0.123 | 5.21 | 28800 | 0.1677 | 1.0560 |
| 0.123 | 5.23 | 28900 | 0.1699 | 1.0421 |
| 0.1217 | 5.25 | 29000 | 0.1803 | 1.0370 |
| 0.1217 | 5.26 | 29100 | 0.1770 | 1.0474 |
| 0.1217 | 5.28 | 29200 | 0.1733 | 1.0332 |
| 0.1217 | 5.3 | 29300 | 0.1746 | 1.0158 |
| 0.1217 | 5.32 | 29400 | 0.1763 | 1.0341 |
| 0.1246 | 5.34 | 29500 | 0.1775 | 1.0348 |
| 0.1246 | 5.35 | 29600 | 0.1730 | 1.0492 |
| 0.1246 | 5.37 | 29700 | 0.1730 | 1.0503 |
| 0.1246 | 5.39 | 29800 | 0.1727 | 1.0437 |
| 0.1246 | 5.41 | 29900 | 0.1744 | 1.0539 |
| 0.127 | 5.43 | 30000 | 0.1748 | 1.0463 |
| 0.127 | 5.44 | 30100 | 0.1746 | 1.0555 |
| 0.127 | 5.46 | 30200 | 0.1810 | 1.0558 |
| 0.127 | 5.48 | 30300 | 0.1773 | 1.0407 |
| 0.127 | 5.5 | 30400 | 0.1722 | 1.0489 |
| 0.1276 | 5.52 | 30500 | 0.1720 | 1.0520 |
| 0.1276 | 5.54 | 30600 | 0.1777 | 1.0347 |
| 0.1276 | 5.55 | 30700 | 0.1685 | 1.0347 |
| 0.1276 | 5.57 | 30800 | 0.1659 | 1.0338 |
| 0.1276 | 5.59 | 30900 | 0.1756 | 1.0228 |
| 0.1246 | 5.61 | 31000 | 0.1717 | 1.0409 |
| 0.1246 | 5.63 | 31100 | 0.1764 | 1.0202 |
| 0.1246 | 5.64 | 31200 | 0.1693 | 1.0314 |
| 0.1246 | 5.66 | 31300 | 0.1731 | 1.0319 |
| 0.1246 | 5.68 | 31400 | 0.1688 | 1.0380 |
| 0.1271 | 5.7 | 31500 | 0.1671 | 1.0350 |
| 0.1271 | 5.72 | 31600 | 0.1676 | 1.0430 |
| 0.1271 | 5.73 | 31700 | 0.1656 | 1.0441 |
| 0.1271 | 5.75 | 31800 | 0.1664 | 1.0403 |
| 0.1271 | 5.77 | 31900 | 0.1691 | 1.0152 |
| 0.1259 | 5.79 | 32000 | 0.1702 | 1.0018 |
| 0.1259 | 5.81 | 32100 | 0.1664 | 1.0246 |
| 0.1259 | 5.82 | 32200 | 0.1737 | 1.0340 |
| 0.1259 | 5.84 | 32300 | 0.1742 | 1.0449 |
| 0.1259 | 5.86 | 32400 | 0.1707 | 1.0279 |
| 0.1273 | 5.88 | 32500 | 0.1697 | 1.0471 |
| 0.1273 | 5.9 | 32600 | 0.1668 | 1.0322 |
| 0.1273 | 5.92 | 32700 | 0.1706 | 1.0378 |
| 0.1273 | 5.93 | 32800 | 0.1704 | 1.0350 |
| 0.1273 | 5.95 | 32900 | 0.1725 | 1.0244 |
| 0.123 | 5.97 | 33000 | 0.1678 | 1.0447 |
| 0.123 | 5.99 | 33100 | 0.1681 | 1.0438 |
| 0.123 | 6.01 | 33200 | 0.1689 | 1.0297 |
| 0.123 | 6.02 | 33300 | 0.1690 | 1.0333 |
| 0.123 | 6.04 | 33400 | 0.1734 | 1.0296 |
| 0.1163 | 6.06 | 33500 | 0.1748 | 1.0307 |
| 0.1163 | 6.08 | 33600 | 0.1715 | 1.0123 |
| 0.1163 | 6.1 | 33700 | 0.1668 | 1.0117 |
| 0.1163 | 6.11 | 33800 | 0.1690 | 1.0230 |
| 0.1163 | 6.13 | 33900 | 0.1693 | 1.0166 |
| 0.1101 | 6.15 | 34000 | 0.1728 | 1.0162 |
| 0.1101 | 6.17 | 34100 | 0.1683 | 1.0107 |
| 0.1101 | 6.19 | 34200 | 0.1703 | 0.9814 |
| 0.1101 | 6.2 | 34300 | 0.1692 | 1.0007 |
| 0.1101 | 6.22 | 34400 | 0.1690 | 1.0000 |
| 0.1118 | 6.24 | 34500 | 0.1734 | 0.9972 |
| 0.1118 | 6.26 | 34600 | 0.1739 | 1.0096 |
| 0.1118 | 6.28 | 34700 | 0.1749 | 1.0047 |
| 0.1118 | 6.3 | 34800 | 0.1709 | 1.0111 |
| 0.1118 | 6.31 | 34900 | 0.1717 | 1.0179 |
| 0.1153 | 6.33 | 35000 | 0.1690 | 1.0155 |
| 0.1153 | 6.35 | 35100 | 0.1710 | 1.0144 |
| 0.1153 | 6.37 | 35200 | 0.1719 | 1.0030 |
| 0.1153 | 6.39 | 35300 | 0.1690 | 1.0272 |
| 0.1153 | 6.4 | 35400 | 0.1673 | 1.0103 |
| 0.1106 | 6.42 | 35500 | 0.1710 | 1.0222 |
| 0.1106 | 6.44 | 35600 | 0.1747 | 1.0173 |
| 0.1106 | 6.46 | 35700 | 0.1721 | 0.9933 |
| 0.1106 | 6.48 | 35800 | 0.1670 | 1.0184 |
| 0.1106 | 6.49 | 35900 | 0.1714 | 1.0122 |
| 0.1116 | 6.51 | 36000 | 0.1717 | 1.0035 |
| 0.1116 | 6.53 | 36100 | 0.1685 | 1.0099 |
| 0.1116 | 6.55 | 36200 | 0.1687 | 1.0288 |
| 0.1116 | 6.57 | 36300 | 0.1664 | 1.0314 |
| 0.1116 | 6.58 | 36400 | 0.1665 | 1.0264 |
| 0.1128 | 6.6 | 36500 | 0.1681 | 1.0420 |
| 0.1128 | 6.62 | 36600 | 0.1682 | 1.0409 |
| 0.1128 | 6.64 | 36700 | 0.1717 | 1.0271 |
| 0.1128 | 6.66 | 36800 | 0.1717 | 1.0166 |
| 0.1128 | 6.68 | 36900 | 0.1755 | 1.0175 |
| 0.1134 | 6.69 | 37000 | 0.1623 | 1.0185 |
| 0.1134 | 6.71 | 37100 | 0.1674 | 1.0302 |
| 0.1134 | 6.73 | 37200 | 0.1633 | 1.0325 |
| 0.1134 | 6.75 | 37300 | 0.1628 | 1.0228 |
| 0.1134 | 6.77 | 37400 | 0.1636 | 1.0243 |
| 0.1102 | 6.78 | 37500 | 0.1667 | 1.0282 |
| 0.1102 | 6.8 | 37600 | 0.1623 | 1.0212 |
| 0.1102 | 6.82 | 37700 | 0.1639 | 1.0140 |
| 0.1102 | 6.84 | 37800 | 0.1587 | 1.0258 |
| 0.1102 | 6.86 | 37900 | 0.1610 | 1.0087 |
| 0.1113 | 6.87 | 38000 | 0.1647 | 1.0199 |
| 0.1113 | 6.89 | 38100 | 0.1609 | 1.0054 |
| 0.1113 | 6.91 | 38200 | 0.1602 | 1.0145 |
| 0.1113 | 6.93 | 38300 | 0.1602 | 1.0144 |
| 0.1113 | 6.95 | 38400 | 0.1602 | 1.0375 |
| 0.1071 | 6.96 | 38500 | 0.1592 | 1.0259 |
| 0.1071 | 6.98 | 38600 | 0.1612 | 1.0236 |
| 0.1071 | 7.0 | 38700 | 0.1621 | 1.0277 |
| 0.1071 | 7.02 | 38800 | 0.1669 | 1.0367 |
| 0.1071 | 7.04 | 38900 | 0.1742 | 1.0484 |
| 0.1062 | 7.05 | 39000 | 0.1752 | 1.0302 |
| 0.1062 | 7.07 | 39100 | 0.1676 | 1.0244 |
| 0.1062 | 7.09 | 39200 | 0.1723 | 1.0300 |
| 0.1062 | 7.11 | 39300 | 0.1727 | 1.0294 |
| 0.1062 | 7.13 | 39400 | 0.1711 | 1.0255 |
| 0.1021 | 7.15 | 39500 | 0.1699 | 1.0471 |
| 0.1021 | 7.16 | 39600 | 0.1682 | 1.0426 |
| 0.1021 | 7.18 | 39700 | 0.1713 | 1.0233 |
| 0.1021 | 7.2 | 39800 | 0.1682 | 1.0259 |
| 0.1021 | 7.22 | 39900 | 0.1710 | 1.0162 |
| 0.103 | 7.24 | 40000 | 0.1725 | 1.0283 |
| 0.103 | 7.25 | 40100 | 0.1729 | 1.0264 |
| 0.103 | 7.27 | 40200 | 0.1665 | 1.0451 |
| 0.103 | 7.29 | 40300 | 0.1671 | 1.0386 |
| 0.103 | 7.31 | 40400 | 0.1671 | 1.0316 |
| 0.0981 | 7.33 | 40500 | 0.1708 | 1.0257 |
| 0.0981 | 7.34 | 40600 | 0.1642 | 1.0152 |
| 0.0981 | 7.36 | 40700 | 0.1707 | 1.0110 |
| 0.0981 | 7.38 | 40800 | 0.1675 | 1.0186 |
| 0.0981 | 7.4 | 40900 | 0.1702 | 1.0123 |
| 0.1005 | 7.42 | 41000 | 0.1699 | 1.0159 |
| 0.1005 | 7.43 | 41100 | 0.1703 | 1.0219 |
| 0.1005 | 7.45 | 41200 | 0.1707 | 1.0194 |
| 0.1005 | 7.47 | 41300 | 0.1644 | 1.0016 |
| 0.1005 | 7.49 | 41400 | 0.1716 | 0.9941 |
| 0.1021 | 7.51 | 41500 | 0.1670 | 1.0159 |
| 0.1021 | 7.53 | 41600 | 0.1667 | 1.0033 |
| 0.1021 | 7.54 | 41700 | 0.1667 | 1.0176 |
| 0.1021 | 7.56 | 41800 | 0.1679 | 1.0194 |
| 0.1021 | 7.58 | 41900 | 0.1632 | 1.0418 |
| 0.0963 | 7.6 | 42000 | 0.1712 | 1.0152 |
| 0.0963 | 7.62 | 42100 | 0.1632 | 1.0364 |
| 0.0963 | 7.63 | 42200 | 0.1702 | 1.0229 |
| 0.0963 | 7.65 | 42300 | 0.1655 | 1.0179 |
| 0.0963 | 7.67 | 42400 | 0.1698 | 1.0329 |
| 0.1014 | 7.69 | 42500 | 0.1691 | 1.0398 |
| 0.1014 | 7.71 | 42600 | 0.1638 | 1.0487 |
| 0.1014 | 7.72 | 42700 | 0.1617 | 1.0210 |
| 0.1014 | 7.74 | 42800 | 0.1648 | 1.0124 |
| 0.1014 | 7.76 | 42900 | 0.1608 | 1.0202 |
| 0.1008 | 7.78 | 43000 | 0.1611 | 1.0353 |
| 0.1008 | 7.8 | 43100 | 0.1633 | 1.0319 |
| 0.1008 | 7.81 | 43200 | 0.1640 | 1.0032 |
| 0.1008 | 7.83 | 43300 | 0.1589 | 0.9985 |
| 0.1008 | 7.85 | 43400 | 0.1630 | 0.9975 |
| 0.0988 | 7.87 | 43500 | 0.1604 | 1.0053 |
| 0.0988 | 7.89 | 43600 | 0.1687 | 1.0063 |
| 0.0988 | 7.91 | 43700 | 0.1619 | 1.0096 |
| 0.0988 | 7.92 | 43800 | 0.1565 | 0.9901 |
| 0.0988 | 7.94 | 43900 | 0.1619 | 0.9742 |
| 0.102 | 7.96 | 44000 | 0.1598 | 0.9593 |
| 0.102 | 7.98 | 44100 | 0.1635 | 0.9718 |
| 0.102 | 8.0 | 44200 | 0.1624 | 0.9903 |
| 0.102 | 8.01 | 44300 | 0.1605 | 0.9882 |
| 0.102 | 8.03 | 44400 | 0.1657 | 1.0128 |
| 0.0961 | 8.05 | 44500 | 0.1651 | 1.0155 |
| 0.0961 | 8.07 | 44600 | 0.1680 | 1.0194 |
| 0.0961 | 8.09 | 44700 | 0.1694 | 1.0112 |
| 0.0961 | 8.1 | 44800 | 0.1665 | 1.0073 |
| 0.0961 | 8.12 | 44900 | 0.1612 | 1.0200 |
| 0.0894 | 8.14 | 45000 | 0.1652 | 1.0337 |
| 0.0894 | 8.16 | 45100 | 0.1626 | 1.0086 |
| 0.0894 | 8.18 | 45200 | 0.1639 | 1.0083 |
| 0.0894 | 8.19 | 45300 | 0.1634 | 1.0223 |
| 0.0894 | 8.21 | 45400 | 0.1631 | 1.0339 |
| 0.0887 | 8.23 | 45500 | 0.1640 | 1.0311 |
| 0.0887 | 8.25 | 45600 | 0.1661 | 1.0264 |
| 0.0887 | 8.27 | 45700 | 0.1650 | 1.0315 |
| 0.0887 | 8.29 | 45800 | 0.1624 | 1.0390 |
| 0.0887 | 8.3 | 45900 | 0.1624 | 1.0350 |
| 0.0884 | 8.32 | 46000 | 0.1615 | 1.0318 |
| 0.0884 | 8.34 | 46100 | 0.1628 | 1.0410 |
| 0.0884 | 8.36 | 46200 | 0.1627 | 1.0429 |
| 0.0884 | 8.38 | 46300 | 0.1644 | 1.0320 |
| 0.0884 | 8.39 | 46400 | 0.1633 | 1.0177 |
| 0.0893 | 8.41 | 46500 | 0.1654 | 1.0189 |
| 0.0893 | 8.43 | 46600 | 0.1598 | 1.0154 |
| 0.0893 | 8.45 | 46700 | 0.1618 | 1.0250 |
| 0.0893 | 8.47 | 46800 | 0.1639 | 1.0402 |
| 0.0893 | 8.48 | 46900 | 0.1616 | 1.0336 |
| 0.0869 | 8.5 | 47000 | 0.1613 | 1.0296 |
| 0.0869 | 8.52 | 47100 | 0.1648 | 1.0568 |
| 0.0869 | 8.54 | 47200 | 0.1625 | 1.0256 |
| 0.0869 | 8.56 | 47300 | 0.1609 | 1.0390 |
| 0.0869 | 8.57 | 47400 | 0.1606 | 1.0450 |
| 0.0894 | 8.59 | 47500 | 0.1605 | 1.0445 |
| 0.0894 | 8.61 | 47600 | 0.1660 | 1.0402 |
| 0.0894 | 8.63 | 47700 | 0.1618 | 1.0444 |
| 0.0894 | 8.65 | 47800 | 0.1669 | 1.0333 |
| 0.0894 | 8.66 | 47900 | 0.1627 | 1.0364 |
| 0.0885 | 8.68 | 48000 | 0.1616 | 1.0334 |
| 0.0885 | 8.7 | 48100 | 0.1626 | 1.0564 |
| 0.0885 | 8.72 | 48200 | 0.1624 | 1.0396 |
| 0.0885 | 8.74 | 48300 | 0.1623 | 1.0396 |
| 0.0885 | 8.76 | 48400 | 0.1612 | 1.0112 |
| 0.0888 | 8.77 | 48500 | 0.1638 | 1.0292 |
| 0.0888 | 8.79 | 48600 | 0.1639 | 0.9988 |
| 0.0888 | 8.81 | 48700 | 0.1618 | 1.0127 |
| 0.0888 | 8.83 | 48800 | 0.1584 | 1.0042 |
| 0.0888 | 8.85 | 48900 | 0.1615 | 1.0041 |
| 0.0887 | 8.86 | 49000 | 0.1637 | 1.0269 |
| 0.0887 | 8.88 | 49100 | 0.1627 | 0.9989 |
| 0.0887 | 8.9 | 49200 | 0.1583 | 1.0104 |
| 0.0887 | 8.92 | 49300 | 0.1600 | 1.0214 |
| 0.0887 | 8.94 | 49400 | 0.1599 | 1.0126 |
| 0.0893 | 8.95 | 49500 | 0.1595 | 1.0516 |
| 0.0893 | 8.97 | 49600 | 0.1625 | 1.0464 |
| 0.0893 | 8.99 | 49700 | 0.1595 | 1.0361 |
| 0.0893 | 9.01 | 49800 | 0.1614 | 1.0469 |
| 0.0893 | 9.03 | 49900 | 0.1612 | 1.0304 |
| 0.0834 | 9.04 | 50000 | 0.1643 | 1.0335 |
| 0.0834 | 9.06 | 50100 | 0.1640 | 1.0175 |
| 0.0834 | 9.08 | 50200 | 0.1655 | 1.0264 |
| 0.0834 | 9.1 | 50300 | 0.1678 | 1.0243 |
| 0.0834 | 9.12 | 50400 | 0.1659 | 1.0145 |
| 0.079 | 9.14 | 50500 | 0.1644 | 1.0316 |
| 0.079 | 9.15 | 50600 | 0.1630 | 1.0326 |
| 0.079 | 9.17 | 50700 | 0.1634 | 1.0154 |
| 0.079 | 9.19 | 50800 | 0.1697 | 1.0095 |
| 0.079 | 9.21 | 50900 | 0.1678 | 1.0050 |
| 0.078 | 9.23 | 51000 | 0.1626 | 1.0159 |
| 0.078 | 9.24 | 51100 | 0.1666 | 1.0238 |
| 0.078 | 9.26 | 51200 | 0.1644 | 1.0244 |
| 0.078 | 9.28 | 51300 | 0.1655 | 1.0345 |
| 0.078 | 9.3 | 51400 | 0.1615 | 1.0237 |
| 0.0776 | 9.32 | 51500 | 0.1664 | 1.0180 |
| 0.0776 | 9.33 | 51600 | 0.1603 | 1.0208 |
| 0.0776 | 9.35 | 51700 | 0.1594 | 1.0230 |
| 0.0776 | 9.37 | 51800 | 0.1622 | 1.0201 |
| 0.0776 | 9.39 | 51900 | 0.1596 | 1.0039 |
| 0.0782 | 9.41 | 52000 | 0.1645 | 1.0204 |
| 0.0782 | 9.42 | 52100 | 0.1640 | 1.0318 |
| 0.0782 | 9.44 | 52200 | 0.1621 | 1.0290 |
| 0.0782 | 9.46 | 52300 | 0.1638 | 1.0318 |
| 0.0782 | 9.48 | 52400 | 0.1613 | 1.0217 |
| 0.0782 | 9.5 | 52500 | 0.1609 | 1.0261 |
| 0.0782 | 9.52 | 52600 | 0.1625 | 1.0101 |
| 0.0782 | 9.53 | 52700 | 0.1613 | 1.0058 |
| 0.0782 | 9.55 | 52800 | 0.1599 | 1.0068 |
| 0.0782 | 9.57 | 52900 | 0.1600 | 1.0110 |
| 0.0797 | 9.59 | 53000 | 0.1594 | 1.0171 |
| 0.0797 | 9.61 | 53100 | 0.1583 | 1.0124 |
| 0.0797 | 9.62 | 53200 | 0.1646 | 1.0093 |
| 0.0797 | 9.64 | 53300 | 0.1580 | 1.0201 |
| 0.0797 | 9.66 | 53400 | 0.1599 | 1.0207 |
| 0.0783 | 9.68 | 53500 | 0.1577 | 1.0226 |
| 0.0783 | 9.7 | 53600 | 0.1593 | 1.0160 |
| 0.0783 | 9.71 | 53700 | 0.1570 | 1.0173 |
| 0.0783 | 9.73 | 53800 | 0.1614 | 1.0299 |
| 0.0783 | 9.75 | 53900 | 0.1610 | 1.0184 |
| 0.0779 | 9.77 | 54000 | 0.1606 | 1.0173 |
| 0.0779 | 9.79 | 54100 | 0.1577 | 1.0032 |
| 0.0779 | 9.8 | 54200 | 0.1590 | 1.0070 |
| 0.0779 | 9.82 | 54300 | 0.1580 | 1.0257 |
| 0.0779 | 9.84 | 54400 | 0.1592 | 1.0108 |
| 0.0778 | 9.86 | 54500 | 0.1617 | 0.9907 |
| 0.0778 | 9.88 | 54600 | 0.1605 | 1.0189 |
| 0.0778 | 9.89 | 54700 | 0.1605 | 1.0177 |
| 0.0778 | 9.91 | 54800 | 0.1536 | 1.0275 |
| 0.0778 | 9.93 | 54900 | 0.1658 | 1.0282 |
| 0.0777 | 9.95 | 55000 | 0.1543 | 1.0385 |
| 0.0777 | 9.97 | 55100 | 0.1559 | 1.0375 |
| 0.0777 | 9.99 | 55200 | 0.1590 | 1.0215 |
| 0.0777 | 10.0 | 55300 | 0.1624 | 1.0242 |
| 0.0777 | 10.02 | 55400 | 0.1635 | 1.0244 |
| 0.0712 | 10.04 | 55500 | 0.1629 | 1.0298 |
| 0.0712 | 10.06 | 55600 | 0.1601 | 1.0299 |
| 0.0712 | 10.08 | 55700 | 0.1625 | 1.0117 |
| 0.0712 | 10.09 | 55800 | 0.1650 | 1.0233 |
| 0.0712 | 10.11 | 55900 | 0.1631 | 1.0061 |
| 0.0667 | 10.13 | 56000 | 0.1637 | 1.0226 |
| 0.0667 | 10.15 | 56100 | 0.1607 | 1.0042 |
| 0.0667 | 10.17 | 56200 | 0.1599 | 1.0117 |
| 0.0667 | 10.18 | 56300 | 0.1623 | 1.0246 |
| 0.0667 | 10.2 | 56400 | 0.1639 | 1.0294 |
| 0.0695 | 10.22 | 56500 | 0.1650 | 1.0232 |
| 0.0695 | 10.24 | 56600 | 0.1620 | 1.0289 |
| 0.0695 | 10.26 | 56700 | 0.1667 | 1.0209 |
| 0.0695 | 10.27 | 56800 | 0.1580 | 1.0163 |
| 0.0695 | 10.29 | 56900 | 0.1646 | 1.0293 |
| 0.0686 | 10.31 | 57000 | 0.1636 | 1.0106 |
| 0.0686 | 10.33 | 57100 | 0.1586 | 1.0044 |
| 0.0686 | 10.35 | 57200 | 0.1582 | 1.0213 |
| 0.0686 | 10.37 | 57300 | 0.1627 | 1.0151 |
| 0.0686 | 10.38 | 57400 | 0.1619 | 1.0248 |
| 0.0686 | 10.4 | 57500 | 0.1596 | 1.0098 |
| 0.0686 | 10.42 | 57600 | 0.1606 | 1.0031 |
| 0.0686 | 10.44 | 57700 | 0.1620 | 1.0046 |
| 0.0686 | 10.46 | 57800 | 0.1592 | 1.0018 |
| 0.0686 | 10.47 | 57900 | 0.1592 | 1.0058 |
| 0.0669 | 10.49 | 58000 | 0.1605 | 0.9961 |
| 0.0669 | 10.51 | 58100 | 0.1632 | 1.0102 |
| 0.0669 | 10.53 | 58200 | 0.1593 | 1.0061 |
| 0.0669 | 10.55 | 58300 | 0.1586 | 1.0091 |
| 0.0669 | 10.56 | 58400 | 0.1603 | 1.0085 |
| 0.068 | 10.58 | 58500 | 0.1579 | 1.0031 |
| 0.068 | 10.6 | 58600 | 0.1591 | 1.0021 |
| 0.068 | 10.62 | 58700 | 0.1590 | 1.0163 |
| 0.068 | 10.64 | 58800 | 0.1584 | 1.0045 |
| 0.068 | 10.65 | 58900 | 0.1594 | 1.0158 |
| 0.0693 | 10.67 | 59000 | 0.1568 | 1.0052 |
| 0.0693 | 10.69 | 59100 | 0.1581 | 0.9955 |
| 0.0693 | 10.71 | 59200 | 0.1622 | 0.9917 |
| 0.0693 | 10.73 | 59300 | 0.1580 | 1.0018 |
| 0.0693 | 10.75 | 59400 | 0.1601 | 1.0077 |
| 0.0699 | 10.76 | 59500 | 0.1605 | 0.9997 |
| 0.0699 | 10.78 | 59600 | 0.1585 | 1.0009 |
| 0.0699 | 10.8 | 59700 | 0.1541 | 1.0058 |
| 0.0699 | 10.82 | 59800 | 0.1583 | 1.0026 |
| 0.0699 | 10.84 | 59900 | 0.1592 | 0.9992 |
| 0.0671 | 10.85 | 60000 | 0.1590 | 1.0004 |
| 0.0671 | 10.87 | 60100 | 0.1585 | 1.0060 |
| 0.0671 | 10.89 | 60200 | 0.1579 | 1.0063 |
| 0.0671 | 10.91 | 60300 | 0.1582 | 0.9949 |
| 0.0671 | 10.93 | 60400 | 0.1562 | 1.0004 |
| 0.0661 | 10.94 | 60500 | 0.1560 | 0.9950 |
| 0.0661 | 10.96 | 60600 | 0.1564 | 0.9990 |
| 0.0661 | 10.98 | 60700 | 0.1552 | 0.9982 |
| 0.0661 | 11.0 | 60800 | 0.1596 | 1.0018 |
| 0.0661 | 11.02 | 60900 | 0.1618 | 0.9905 |
| 0.0634 | 11.03 | 61000 | 0.1652 | 0.9890 |
| 0.0634 | 11.05 | 61100 | 0.1649 | 0.9886 |
| 0.0634 | 11.07 | 61200 | 0.1668 | 0.9870 |
| 0.0634 | 11.09 | 61300 | 0.1663 | 0.9921 |
| 0.0634 | 11.11 | 61400 | 0.1650 | 0.9919 |
| 0.0587 | 11.13 | 61500 | 0.1674 | 0.9831 |
| 0.0587 | 11.14 | 61600 | 0.1633 | 0.9793 |
| 0.0587 | 11.16 | 61700 | 0.1665 | 0.9781 |
| 0.0587 | 11.18 | 61800 | 0.1642 | 0.9821 |
| 0.0587 | 11.2 | 61900 | 0.1638 | 0.9797 |
| 0.0581 | 11.22 | 62000 | 0.1628 | 0.9727 |
| 0.0581 | 11.23 | 62100 | 0.1661 | 0.9796 |
| 0.0581 | 11.25 | 62200 | 0.1641 | 0.9830 |
| 0.0581 | 11.27 | 62300 | 0.1601 | 0.9867 |
| 0.0581 | 11.29 | 62400 | 0.1626 | 0.9757 |
| 0.0584 | 11.31 | 62500 | 0.1632 | 1.0014 |
| 0.0584 | 11.32 | 62600 | 0.1626 | 1.0052 |
| 0.0584 | 11.34 | 62700 | 0.1586 | 1.0098 |
| 0.0584 | 11.36 | 62800 | 0.1597 | 1.0151 |
| 0.0584 | 11.38 | 62900 | 0.1624 | 1.0054 |
| 0.0589 | 11.4 | 63000 | 0.1618 | 1.0018 |
| 0.0589 | 11.41 | 63100 | 0.1635 | 1.0032 |
| 0.0589 | 11.43 | 63200 | 0.1654 | 1.0142 |
| 0.0589 | 11.45 | 63300 | 0.1646 | 1.0031 |
| 0.0589 | 11.47 | 63400 | 0.1618 | 1.0118 |
| 0.0579 | 11.49 | 63500 | 0.1634 | 1.0218 |
| 0.0579 | 11.51 | 63600 | 0.1616 | 1.0179 |
| 0.0579 | 11.52 | 63700 | 0.1603 | 1.0036 |
| 0.0579 | 11.54 | 63800 | 0.1610 | 1.0150 |
| 0.0579 | 11.56 | 63900 | 0.1605 | 1.0285 |
| 0.0572 | 11.58 | 64000 | 0.1621 | 1.0261 |
| 0.0572 | 11.6 | 64100 | 0.1625 | 1.0252 |
| 0.0572 | 11.61 | 64200 | 0.1677 | 1.0257 |
| 0.0572 | 11.63 | 64300 | 0.1656 | 1.0243 |
| 0.0572 | 11.65 | 64400 | 0.1669 | 1.0270 |
| 0.0592 | 11.67 | 64500 | 0.1605 | 1.0305 |
| 0.0592 | 11.69 | 64600 | 0.1633 | 1.0277 |
| 0.0592 | 11.7 | 64700 | 0.1606 | 1.0176 |
| 0.0592 | 11.72 | 64800 | 0.1618 | 1.0249 |
| 0.0592 | 11.74 | 64900 | 0.1609 | 1.0113 |
| 0.0595 | 11.76 | 65000 | 0.1609 | 1.0254 |
| 0.0595 | 11.78 | 65100 | 0.1662 | 1.0275 |
| 0.0595 | 11.79 | 65200 | 0.1652 | 1.0164 |
| 0.0595 | 11.81 | 65300 | 0.1638 | 1.0266 |
| 0.0595 | 11.83 | 65400 | 0.1589 | 1.0274 |
| 0.0588 | 11.85 | 65500 | 0.1607 | 1.0136 |
| 0.0588 | 11.87 | 65600 | 0.1592 | 1.0136 |
| 0.0588 | 11.88 | 65700 | 0.1581 | 1.0183 |
| 0.0588 | 11.9 | 65800 | 0.1587 | 1.0133 |
| 0.0588 | 11.92 | 65900 | 0.1596 | 1.0170 |
| 0.0558 | 11.94 | 66000 | 0.1590 | 1.0161 |
| 0.0558 | 11.96 | 66100 | 0.1597 | 1.0193 |
| 0.0558 | 11.98 | 66200 | 0.1590 | 1.0193 |
| 0.0558 | 11.99 | 66300 | 0.1608 | 1.0242 |
| 0.0558 | 12.01 | 66400 | 0.1642 | 1.0231 |
| 0.0555 | 12.03 | 66500 | 0.1679 | 1.0168 |
| 0.0555 | 12.05 | 66600 | 0.1674 | 1.0083 |
| 0.0555 | 12.07 | 66700 | 0.1658 | 1.0069 |
| 0.0555 | 12.08 | 66800 | 0.1661 | 1.0134 |
| 0.0555 | 12.1 | 66900 | 0.1682 | 1.0274 |
| 0.0508 | 12.12 | 67000 | 0.1702 | 1.0219 |
| 0.0508 | 12.14 | 67100 | 0.1694 | 1.0219 |
| 0.0508 | 12.16 | 67200 | 0.1667 | 1.0236 |
| 0.0508 | 12.17 | 67300 | 0.1672 | 1.0253 |
| 0.0508 | 12.19 | 67400 | 0.1640 | 1.0215 |
| 0.0513 | 12.21 | 67500 | 0.1649 | 1.0242 |
| 0.0513 | 12.23 | 67600 | 0.1687 | 1.0262 |
| 0.0513 | 12.25 | 67700 | 0.1655 | 1.0231 |
| 0.0513 | 12.26 | 67800 | 0.1692 | 1.0176 |
| 0.0513 | 12.28 | 67900 | 0.1675 | 1.0202 |
| 0.0519 | 12.3 | 68000 | 0.1644 | 1.0241 |
| 0.0519 | 12.32 | 68100 | 0.1651 | 1.0297 |
| 0.0519 | 12.34 | 68200 | 0.1661 | 1.0287 |
| 0.0519 | 12.36 | 68300 | 0.1665 | 1.0257 |
| 0.0519 | 12.37 | 68400 | 0.1685 | 1.0233 |
| 0.0522 | 12.39 | 68500 | 0.1636 | 1.0177 |
| 0.0522 | 12.41 | 68600 | 0.1709 | 1.0200 |
| 0.0522 | 12.43 | 68700 | 0.1684 | 1.0164 |
| 0.0522 | 12.45 | 68800 | 0.1666 | 1.0119 |
| 0.0522 | 12.46 | 68900 | 0.1683 | 1.0136 |
| 0.05 | 12.48 | 69000 | 0.1696 | 1.0127 |
| 0.05 | 12.5 | 69100 | 0.1708 | 1.0184 |
| 0.05 | 12.52 | 69200 | 0.1654 | 1.0282 |
| 0.05 | 12.54 | 69300 | 0.1700 | 1.0235 |
| 0.05 | 12.55 | 69400 | 0.1688 | 1.0257 |
| 0.0513 | 12.57 | 69500 | 0.1646 | 1.0274 |
| 0.0513 | 12.59 | 69600 | 0.1660 | 1.0247 |
| 0.0513 | 12.61 | 69700 | 0.1657 | 1.0188 |
| 0.0513 | 12.63 | 69800 | 0.1654 | 1.0087 |
| 0.0513 | 12.64 | 69900 | 0.1681 | 1.0146 |
| 0.0512 | 12.66 | 70000 | 0.1660 | 1.0185 |
| 0.0512 | 12.68 | 70100 | 0.1690 | 1.0214 |
| 0.0512 | 12.7 | 70200 | 0.1683 | 1.0160 |
| 0.0512 | 12.72 | 70300 | 0.1695 | 1.0198 |
| 0.0512 | 12.74 | 70400 | 0.1666 | 1.0193 |
| 0.0484 | 12.75 | 70500 | 0.1654 | 1.0142 |
| 0.0484 | 12.77 | 70600 | 0.1598 | 1.0154 |
| 0.0484 | 12.79 | 70700 | 0.1623 | 1.0139 |
| 0.0484 | 12.81 | 70800 | 0.1662 | 1.0180 |
| 0.0484 | 12.83 | 70900 | 0.1659 | 1.0232 |
| 0.0501 | 12.84 | 71000 | 0.1662 | 1.0202 |
| 0.0501 | 12.86 | 71100 | 0.1639 | 1.0161 |
| 0.0501 | 12.88 | 71200 | 0.1666 | 1.0151 |
| 0.0501 | 12.9 | 71300 | 0.1644 | 1.0129 |
| 0.0501 | 12.92 | 71400 | 0.1642 | 1.0171 |
| 0.0482 | 12.93 | 71500 | 0.1635 | 1.0162 |
| 0.0482 | 12.95 | 71600 | 0.1637 | 1.0186 |
| 0.0482 | 12.97 | 71700 | 0.1639 | 1.0142 |
| 0.0482 | 12.99 | 71800 | 0.1643 | 1.0122 |
| 0.0482 | 13.01 | 71900 | 0.1679 | 1.0156 |
| 0.0483 | 13.02 | 72000 | 0.1717 | 1.0224 |
| 0.0483 | 13.04 | 72100 | 0.1742 | 1.0229 |
| 0.0483 | 13.06 | 72200 | 0.1718 | 1.0237 |
| 0.0483 | 13.08 | 72300 | 0.1742 | 1.0266 |
| 0.0483 | 13.1 | 72400 | 0.1736 | 1.0257 |
| 0.0443 | 13.12 | 72500 | 0.1741 | 1.0275 |
| 0.0443 | 13.13 | 72600 | 0.1745 | 1.0325 |
| 0.0443 | 13.15 | 72700 | 0.1737 | 1.0296 |
| 0.0443 | 13.17 | 72800 | 0.1722 | 1.0303 |
| 0.0443 | 13.19 | 72900 | 0.1702 | 1.0305 |
| 0.0424 | 13.21 | 73000 | 0.1733 | 1.0241 |
| 0.0424 | 13.22 | 73100 | 0.1748 | 1.0243 |
| 0.0424 | 13.24 | 73200 | 0.1760 | 1.0231 |
| 0.0424 | 13.26 | 73300 | 0.1745 | 1.0241 |
| 0.0424 | 13.28 | 73400 | 0.1772 | 1.0217 |
| 0.0424 | 13.3 | 73500 | 0.1755 | 1.0206 |
| 0.0424 | 13.31 | 73600 | 0.1743 | 1.0242 |
| 0.0424 | 13.33 | 73700 | 0.1738 | 1.0208 |
| 0.0424 | 13.35 | 73800 | 0.1736 | 1.0249 |
| 0.0424 | 13.37 | 73900 | 0.1747 | 1.0271 |
| 0.0437 | 13.39 | 74000 | 0.1707 | 1.0241 |
| 0.0437 | 13.4 | 74100 | 0.1731 | 1.0269 |
| 0.0437 | 13.42 | 74200 | 0.1743 | 1.0290 |
| 0.0437 | 13.44 | 74300 | 0.1739 | 1.0266 |
| 0.0437 | 13.46 | 74400 | 0.1763 | 1.0246 |
| 0.0443 | 13.48 | 74500 | 0.1724 | 1.0209 |
| 0.0443 | 13.49 | 74600 | 0.1744 | 1.0244 |
| 0.0443 | 13.51 | 74700 | 0.1717 | 1.0232 |
| 0.0443 | 13.53 | 74800 | 0.1754 | 1.0217 |
| 0.0443 | 13.55 | 74900 | 0.1721 | 1.0234 |
| 0.0435 | 13.57 | 75000 | 0.1751 | 1.0197 |
| 0.0435 | 13.59 | 75100 | 0.1727 | 1.0285 |
| 0.0435 | 13.6 | 75200 | 0.1715 | 1.0221 |
| 0.0435 | 13.62 | 75300 | 0.1746 | 1.0247 |
| 0.0435 | 13.64 | 75400 | 0.1712 | 1.0231 |
| 0.0436 | 13.66 | 75500 | 0.1719 | 1.0228 |
| 0.0436 | 13.68 | 75600 | 0.1727 | 1.0197 |
| 0.0436 | 13.69 | 75700 | 0.1750 | 1.0252 |
| 0.0436 | 13.71 | 75800 | 0.1702 | 1.0241 |
| 0.0436 | 13.73 | 75900 | 0.1720 | 1.0250 |
| 0.0433 | 13.75 | 76000 | 0.1744 | 1.0210 |
| 0.0433 | 13.77 | 76100 | 0.1735 | 1.0211 |
| 0.0433 | 13.78 | 76200 | 0.1727 | 1.0205 |
| 0.0433 | 13.8 | 76300 | 0.1706 | 1.0218 |
| 0.0433 | 13.82 | 76400 | 0.1709 | 1.0238 |
| 0.0431 | 13.84 | 76500 | 0.1705 | 1.0197 |
| 0.0431 | 13.86 | 76600 | 0.1734 | 1.0223 |
| 0.0431 | 13.87 | 76700 | 0.1695 | 1.0250 |
| 0.0431 | 13.89 | 76800 | 0.1734 | 1.0232 |
| 0.0431 | 13.91 | 76900 | 0.1724 | 1.0219 |
| 0.041 | 13.93 | 77000 | 0.1706 | 1.0236 |
| 0.041 | 13.95 | 77100 | 0.1689 | 1.0220 |
| 0.041 | 13.97 | 77200 | 0.1738 | 1.0230 |
| 0.041 | 13.98 | 77300 | 0.1727 | 1.0254 |
| 0.041 | 14.0 | 77400 | 0.1721 | 1.0261 |
| 0.041 | 14.02 | 77500 | 0.1760 | 1.0261 |
| 0.041 | 14.04 | 77600 | 0.1772 | 1.0202 |
| 0.041 | 14.06 | 77700 | 0.1782 | 1.0202 |
| 0.041 | 14.07 | 77800 | 0.1777 | 1.0222 |
| 0.041 | 14.09 | 77900 | 0.1787 | 1.0203 |
| 0.0383 | 14.11 | 78000 | 0.1790 | 1.0236 |
| 0.0383 | 14.13 | 78100 | 0.1812 | 1.0245 |
| 0.0383 | 14.15 | 78200 | 0.1778 | 1.0224 |
| 0.0383 | 14.16 | 78300 | 0.1771 | 1.0231 |
| 0.0383 | 14.18 | 78400 | 0.1782 | 1.0242 |
| 0.0391 | 14.2 | 78500 | 0.1785 | 1.0262 |
| 0.0391 | 14.22 | 78600 | 0.1791 | 1.0261 |
| 0.0391 | 14.24 | 78700 | 0.1770 | 1.0254 |
| 0.0391 | 14.25 | 78800 | 0.1810 | 1.0257 |
| 0.0391 | 14.27 | 78900 | 0.1794 | 1.0241 |
| 0.0387 | 14.29 | 79000 | 0.1774 | 1.0256 |
| 0.0387 | 14.31 | 79100 | 0.1774 | 1.0236 |
| 0.0387 | 14.33 | 79200 | 0.1759 | 1.0222 |
| 0.0387 | 14.35 | 79300 | 0.1787 | 1.0237 |
| 0.0387 | 14.36 | 79400 | 0.1788 | 1.0227 |
| 0.0372 | 14.38 | 79500 | 0.1789 | 1.0232 |
| 0.0372 | 14.4 | 79600 | 0.1771 | 1.0254 |
| 0.0372 | 14.42 | 79700 | 0.1777 | 1.0244 |
| 0.0372 | 14.44 | 79800 | 0.1791 | 1.0225 |
| 0.0372 | 14.45 | 79900 | 0.1786 | 1.0237 |
| 0.0385 | 14.47 | 80000 | 0.1782 | 1.0243 |
| 0.0385 | 14.49 | 80100 | 0.1770 | 1.0236 |
| 0.0385 | 14.51 | 80200 | 0.1782 | 1.0240 |
| 0.0385 | 14.53 | 80300 | 0.1764 | 1.0243 |
| 0.0385 | 14.54 | 80400 | 0.1748 | 1.0248 |
| 0.039 | 14.56 | 80500 | 0.1758 | 1.0232 |
| 0.039 | 14.58 | 80600 | 0.1763 | 1.0246 |
| 0.039 | 14.6 | 80700 | 0.1770 | 1.0220 |
| 0.039 | 14.62 | 80800 | 0.1788 | 1.0225 |
| 0.039 | 14.63 | 80900 | 0.1781 | 1.0230 |
| 0.039 | 14.65 | 81000 | 0.1779 | 1.0230 |
| 0.039 | 14.67 | 81100 | 0.1755 | 1.0212 |
| 0.039 | 14.69 | 81200 | 0.1765 | 1.0226 |
| 0.039 | 14.71 | 81300 | 0.1787 | 1.0241 |
| 0.039 | 14.72 | 81400 | 0.1782 | 1.0250 |
| 0.0368 | 14.74 | 81500 | 0.1780 | 1.0248 |
| 0.0368 | 14.76 | 81600 | 0.1782 | 1.0242 |
| 0.0368 | 14.78 | 81700 | 0.1782 | 1.0242 |
| 0.0368 | 14.8 | 81800 | 0.1792 | 1.0241 |
| 0.0368 | 14.82 | 81900 | 0.1796 | 1.0238 |
| 0.0378 | 14.83 | 82000 | 0.1795 | 1.0236 |
| 0.0378 | 14.85 | 82100 | 0.1796 | 1.0239 |
| 0.0378 | 14.87 | 82200 | 0.1792 | 1.0236 |
| 0.0378 | 14.89 | 82300 | 0.1789 | 1.0239 |
| 0.0378 | 14.91 | 82400 | 0.1788 | 1.0238 |
| 0.0386 | 14.92 | 82500 | 0.1787 | 1.0239 |
| 0.0386 | 14.94 | 82600 | 0.1786 | 1.0236 |
| 0.0386 | 14.96 | 82700 | 0.1786 | 1.0237 |
| 0.0386 | 14.98 | 82800 | 0.1787 | 1.0239 |
| 0.0386 | 15.0 | 82900 | 0.1788 | 1.0238 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"language": ["es"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-common_voice-es-demo", "results": []}]}
|
gabrieljg/wav2vec2-common_voice-es-demo
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"es",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #es #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-common\_voice-es-demo
==============================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the COMMON\_VOICE - ES dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1788
* Wer: 1.0239
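The card does not yet include a usage example; as a minimal, hedged sketch (an assumption, not an official snippet from the authors), the checkpoint should be loadable with the standard transformers ASR pipeline. The audio path below is a placeholder, and the reported WER suggests transcription quality may still be limited:

```python
# Hedged inference sketch: load this checkpoint with the transformers ASR pipeline.
# "sample_es.wav" is a placeholder for a 16 kHz Spanish recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="gabrieljg/wav2vec2-common_voice-es-demo",
)
print(asr("sample_es.wav")["text"])
```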
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 15.0
* mixed\_precision\_training: Native AMP
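As a hedged sketch (not the authors' actual training script), the hyperparameters above roughly correspond to the following transformers `TrainingArguments`; `output_dir` is a placeholder:

```python
# Approximate TrainingArguments matching the listed hyperparameters (a sketch only).
# Adam betas/epsilon are left at their library defaults, which match the values above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-common_voice-es-demo",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,                       # mixed_precision_training: Native AMP
)
```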
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #es #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 10% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
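As a minimal sketch (assuming the model id shown in this card), the checkpoint can be loaded with the standard transformers classes; see the base model card linked above for a full chat-loop example:

```python
# Hedged loading sketch for this GPT-2-based checkpoint, using this card's model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gabtan99/dialogpt-tagalog-medium-10")
model = AutoModelForCausalLM.from_pretrained("gabtan99/dialogpt-tagalog-medium-10")
```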
|
{"language": ["tl"], "tags": ["conversational", "tagalog", "filipino"]}
|
gabtan99/dialogpt-tagalog-medium-10
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"tagalog",
"filipino",
"tl",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tl"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #tagalog #filipino #tl #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (URL
This model is trained on 52K original conversations and 52K synthetic conversations, where 10% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
|
[
"# Tagalog DialoGPT\n\nThis is an extension of the base Tagalog DialoGPT model (URL \n\nThis model is trained on 52K original conversations and 52K synthetic conversations, where 10% of tokens in each utterance in the synthetic conversation are machine-generated tokens."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #tagalog #filipino #tl #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tagalog DialoGPT\n\nThis is an extension of the base Tagalog DialoGPT model (URL \n\nThis model is trained on 52K original conversations and 52K synthetic conversations, where 10% of tokens in each utterance in the synthetic conversation are machine-generated tokens."
] |
text-generation
|
transformers
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 20% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
|
{"language": ["tl"], "tags": ["conversational", "tagalog", "filipino"], "inference": false}
|
gabtan99/dialogpt-tagalog-medium-20
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"tagalog",
"filipino",
"tl",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tl"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #tagalog #filipino #tl #autotrain_compatible #text-generation-inference #region-us
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (URL
This model is trained on 52K original conversations and 52K synthetic conversations, where 20% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
|
[
"# Tagalog DialoGPT\n\nThis is an extension of the base Tagalog DialoGPT model (URL \n\nThis model is trained on 52K original conversations and 52K synthetic conversations, where 20% of tokens in each utterance in the synthetic conversation are machine-generated tokens."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #tagalog #filipino #tl #autotrain_compatible #text-generation-inference #region-us \n",
"# Tagalog DialoGPT\n\nThis is an extension of the base Tagalog DialoGPT model (URL \n\nThis model is trained on 52K original conversations and 52K synthetic conversations, where 20% of tokens in each utterance in the synthetic conversation are machine-generated tokens."
] |
text-generation
|
transformers
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 30% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
|
{"language": ["tl"], "tags": ["conversational", "tagalog", "filipino"], "inference": false}
|
gabtan99/dialogpt-tagalog-medium-30
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"tagalog",
"filipino",
"tl",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tl"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #tagalog #filipino #tl #autotrain_compatible #text-generation-inference #region-us
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (URL
This model is trained on 52K original conversations and 52K synthetic conversations, where 30% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
|
[
"# Tagalog DialoGPT\n\nThis is an extension of the base Tagalog DialoGPT model (URL \n\nThis model is trained on 52K original conversations and 52K synthetic conversations, where 30% of tokens in each utterance in the synthetic conversation are machine-generated tokens."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #tagalog #filipino #tl #autotrain_compatible #text-generation-inference #region-us \n",
"# Tagalog DialoGPT\n\nThis is an extension of the base Tagalog DialoGPT model (URL \n\nThis model is trained on 52K original conversations and 52K synthetic conversations, where 30% of tokens in each utterance in the synthetic conversation are machine-generated tokens."
] |
text-generation
|
transformers
|
# Tagalog DialoGPT
A DialoGPT-medium model fine-tuned on Tagalog conversational data scraped from the web. This model is an output of research on RoBERTa-based data augmentation for low-resource languages. This is the baseline model, which did not use any synthetic data in training.
# Latest release: July 25, 2021
* The model is currently only able to respond based on the history of the 3 previous utterances; beyond that, its responses become limited. This is a result of the scarce amount of Tagalog conversations in our dataset.
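Because context beyond the three most recent utterances is not used reliably, one practical workaround (a hedged sketch, not part of the original card; `trim_history` is a hypothetical helper) is to trim the running chat history before each generation call:

```python
# Hypothetical helper: keep only the last `max_utterances` eos-terminated utterances
# in the (1, seq_len) chat history tensor used in the usage example below.
import torch

def trim_history(chat_history_ids, eos_token_id, max_utterances=3):
    ids = chat_history_ids[0].tolist()
    eos_positions = [i for i, tok in enumerate(ids) if tok == eos_token_id]
    if len(eos_positions) <= max_utterances:
        return chat_history_ids
    # Cut right after the eos token that closes the oldest utterance being dropped.
    cut = eos_positions[-(max_utterances + 1)] + 1
    return torch.tensor([ids[cut:]], dtype=chat_history_ids.dtype)
```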
# Dataset
[PEx Conversations Dataset](https://huggingface.co/datasets/gabtan99/pex-conversations)
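The conversation data can presumably be pulled directly with the datasets library (a sketch, assuming the dataset id linked above):

```python
# Hedged sketch: load the PEx conversations dataset referenced above from the Hub.
from datasets import load_dataset

pex = load_dataset("gabtan99/pex-conversations")
print(pex)
```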
# Usage
Here is an example of using beam search for model inference.
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the tokenizer and model from the Hub
tokenizer = AutoTokenizer.from_pretrained("gabtan99/dialogpt-tagalog-medium")
model = AutoModelForCausalLM.from_pretrained("gabtan99/dialogpt-tagalog-medium")

chat_history_ids = None
for step in range(2):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # we limit the generation to 512 tokens; each utterance in training had a maximum of 128 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=512,
        pad_token_id=tokenizer.eos_token_id,
        num_beams=5,
        no_repeat_ngram_size=3
    )
    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
# Training Script
[Fine-tuning script adapted from Spanish DialoGPT](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb)
# Research by
* [tyadrianpaule](https://huggingface.co/tyadrianpaule)
* [schuylerng](https://huggingface.co/schuylerng)
* [dcl127](https://huggingface.co/dcl127)
|
{"language": ["tl"], "tags": ["conversational", "tagalog", "filipino"], "datasets": ["gabtan99/pex-conversations"], "inference": false}
|
gabtan99/dialogpt-tagalog-medium
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"tagalog",
"filipino",
"tl",
"dataset:gabtan99/pex-conversations",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tl"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #tagalog #filipino #tl #dataset-gabtan99/pex-conversations #autotrain_compatible #has_space #text-generation-inference #region-us
|
# Tagalog DialoGPT
A DialoGPT-medium model fine-tuned on Tagalog conversational data scraped from the web. This model is an output of research on RoBERTa-based data augmentation for low-resource languages. This is the baseline model, which did not use any synthetic data in training.
# Latest release: July 25, 2021
* The model is currently only able to respond based on the history of the 3 previous utterances; beyond that, its responses become limited. This is a result of the scarce amount of Tagalog conversations in our dataset.
# Dataset
PEx Conversations Dataset
# Usage
Here is an example of using beam search for model inference.
# Training Script
Fine-tuning script adapted from Spanish DialoGPT
# Research by
* tyadrianpaule
* schuylerng
* dcl127
|
[
"# Tagalog DialoGPT\nA DialoGPT-medium model fine-tuned on Tagalog conversational data scraped from the web. This model is an output of a research on RoBERTa-based data augmentation for low resource languages. This is the baseline model which did not use any synthetic data in training.",
"# Latest release: July 25, 2021\n* The model is currently only able to respond based on the history of 3 previous utterances before being limited. This is a result of the scarce amount of Tagalog conversations in our dataset.",
"# Dataset\nPEx Conversations Dataset",
"# Usage\nHere is an example of using beam search for model inference.",
"# Training Script\nFine-tuning script adapted from Spanish DialoGPT",
"# Research by\n* tyadrianpaule\n* schuylerng\n* dcl127"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #tagalog #filipino #tl #dataset-gabtan99/pex-conversations #autotrain_compatible #has_space #text-generation-inference #region-us \n",
"# Tagalog DialoGPT\nA DialoGPT-medium model fine-tuned on Tagalog conversational data scraped from the web. This model is an output of a research on RoBERTa-based data augmentation for low resource languages. This is the baseline model which did not use any synthetic data in training.",
"# Latest release: July 25, 2021\n* The model is currently only able to respond based on the history of 3 previous utterances before being limited. This is a result of the scarce amount of Tagalog conversations in our dataset.",
"# Dataset\nPEx Conversations Dataset",
"# Usage\nHere is an example of using beam search for model inference.",
"# Training Script\nFine-tuning script adapted from Spanish DialoGPT",
"# Research by\n* tyadrianpaule\n* schuylerng\n* dcl127"
] |
null | null |
I am adding my first README in order to test the interface. How good is it really?
|
{}
|
gael1130/gael_first_model
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
I am adding my first README in order to test the interface. How good is it really?
|
[] |
[
"TAGS\n#region-us \n"
] |