Dataset columns:

- pipeline_tag — string (48 distinct values)
- library_name — string (198 distinct values)
- text — string (1 to 900k characters)
- metadata — string (2 to 438k characters)
- id — string (5 to 122 characters)
- last_modified — null
- tags — list (1 to 1.84k items)
- sha — null
- created_at — string (25 characters)
- arxiv — list (0 to 201 items)
- languages — list (0 to 1.83k items)
- tags_str — string (17 to 9.34k characters)
- text_str — string (0 to 389k characters)
- text_lists — list (0 to 722 items)
- processed_texts — list (1 to 723 items)
text2text-generation
|
transformers
|
This model is an implementation of the paper [A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/pdf/2106.03830.pdf) from Google, which reports state-of-the-art results on the task of Grammatical Error Correction (GEC).
We implement the T5-small version, which achieves the F_0.5 score reported in the paper (60.70).
To use the "Hosted inference API" effectively, prefix your input with "gec: ", i.e. write "gec: [YOUR SENTENCE HERE]".
To use the model in code, see the following snippet:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("Unbabel/gec-t5_small")
tokenizer = T5Tokenizer.from_pretrained('t5-small')
sentence = "I like to swimming"
tokenized_sentence = tokenizer('gec: ' + sentence, max_length=128, truncation=True, padding='max_length', return_tensors='pt')
corrected_sentence = tokenizer.decode(
    model.generate(
        input_ids=tokenized_sentence.input_ids,
        attention_mask=tokenized_sentence.attention_mask,
        max_length=128,
        num_beams=5,
        early_stopping=True,
    )[0],
    skip_special_tokens=True,
    clean_up_tokenization_spaces=True,
)
print(corrected_sentence) # -> I like swimming.
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["grammatical error correction", "text2text", "t5"], "datasets": ["clang-8", "conll-14", "conll-13"], "metrics": ["f0.5"]}
|
Unbabel/gec-t5_small
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"grammatical error correction",
"text2text",
"en",
"dataset:clang-8",
"dataset:conll-14",
"dataset:conll-13",
"arxiv:2106.03830",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.03830"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #grammatical error correction #text2text #en #dataset-clang-8 #dataset-conll-14 #dataset-conll-13 #arxiv-2106.03830 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
This model is an implementation of the paper A Simple Recipe for Multilingual Grammatical Error Correction from Google, which reports state-of-the-art results on the task of Grammatical Error Correction (GEC).
We implement the T5-small version, which achieves the F_0.5 score reported in the paper (60.70).
To use the "Hosted inference API" effectively, prefix your input with "gec: ", i.e. write "gec: [YOUR SENTENCE HERE]".
To use the model, see the following snippet:
|
[] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #grammatical error correction #text2text #en #dataset-clang-8 #dataset-conll-14 #dataset-conll-13 #arxiv-2106.03830 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
feature-extraction
|
transformers
|
# Model
mMiniLM-L12xH384 XLM-R model proposed in [MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers](https://arxiv.org/abs/2012.15828), which we fine-tuned using the direct assessment annotations collected at the Workshop on Statistical Machine Translation (WMT) from 2015 to 2020.
This model is much more lightweight than the traditional XLM-RoBERTa base and large models.
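A minimal loading sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` auto classes for feature extraction and that the repository ships a compatible tokenizer:

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Hypothetical usage: extract a sentence-level embedding from the encoder.
tokenizer = AutoTokenizer.from_pretrained("Unbabel/xlm-roberta-comet-small")
model = AutoModel.from_pretrained("Unbabel/xlm-roberta-comet-small")

inputs = tokenizer("Uma frase de exemplo para codificar.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single feature vector.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```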
|
{}
|
Unbabel/xlm-roberta-comet-small
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"arxiv:2012.15828",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2012.15828"
] |
[] |
TAGS
#transformers #pytorch #xlm-roberta #feature-extraction #arxiv-2012.15828 #endpoints_compatible #region-us
|
# Model
mMiniLM-L12xH384 XLM-R model proposed in MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers, which we fine-tuned using the direct assessment annotations collected at the Workshop on Statistical Machine Translation (WMT) from 2015 to 2020.
This model is much more lightweight than the traditional XLM-RoBERTa base and large models.
|
[
"# Model\n\nmMiniLM-L12xH384 XLM-R model proposed in MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers that we fine-tune using the direct assessment annotations collected in the Workshop on Statistical Machine Translation (WMT) 2015 to 2020.\n\nThis model is much more light weight than the traditional XLM-RoBERTa base and large."
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #feature-extraction #arxiv-2012.15828 #endpoints_compatible #region-us \n",
"# Model\n\nmMiniLM-L12xH384 XLM-R model proposed in MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers that we fine-tune using the direct assessment annotations collected in the Workshop on Statistical Machine Translation (WMT) 2015 to 2020.\n\nThis model is much more light weight than the traditional XLM-RoBERTa base and large."
] |
text-generation
|
transformers
|
# Mourinhio
|
{"tags": ["conversational"]}
|
Username1/Mourinhio-medium
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mourinhio
|
[
"# Mourinhio"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mourinhio"
] |
text-generation
|
transformers
|
# Mourinhio
|
{"tags": ["conversational"]}
|
Username1/Mourinho
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mourinhio
|
[
"# Mourinhio"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mourinhio"
] |
text-generation
|
transformers
|
# Wenger
|
{"tags": ["conversational"]}
|
Username1/Wenger
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Wenger
|
[
"# Wenger"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Wenger"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8107
- Matthews Correlation: 0.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
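The hyperparameters above map roughly onto `transformers.TrainingArguments` as sketched below; this is an illustration, not the exact script that produced the model (`output_dir` and `evaluation_strategy` are assumptions).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",   # paired with the default AdamW betas=(0.9, 0.999), eps=1e-8
    evaluation_strategy="epoch",  # the results table reports one validation row per epoch
)
```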
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5509 | 0.3827 |
| 0.3498 | 2.0 | 1070 | 0.4936 | 0.5295 |
| 0.2369 | 3.0 | 1605 | 0.6505 | 0.5248 |
| 0.1637 | 4.0 | 2140 | 0.8107 | 0.5396 |
| 0.1299 | 5.0 | 2675 | 0.8738 | 0.5387 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5396261051709696, "name": "Matthews Correlation"}]}]}]}
|
V3RX2000/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8107
* Matthews Correlation: 0.5396
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9272
- Recall: 0.9376
- F1: 0.9324
- Accuracy: 0.9842
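A minimal inference sketch (not part of the original card), assuming the standard `transformers` token-classification pipeline works with this checkpoint:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="V3RX2000/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```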
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2495 | 1.0 | 878 | 0.0701 | 0.9191 | 0.9229 | 0.9210 | 0.9815 |
| 0.0526 | 2.0 | 1756 | 0.0613 | 0.9216 | 0.9350 | 0.9283 | 0.9832 |
| 0.0312 | 3.0 | 2634 | 0.0612 | 0.9272 | 0.9376 | 0.9324 | 0.9842 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9272043367629162, "name": "Precision"}, {"type": "recall", "value": 0.9375769101689228, "name": "Recall"}, {"type": "f1", "value": 0.932361775503393, "name": "F1"}, {"type": "accuracy", "value": 0.984193051297123, "name": "Accuracy"}]}]}]}
|
V3RX2000/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0612
* Precision: 0.9272
* Recall: 0.9376
* F1: 0.9324
* Accuracy: 0.9842
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1580
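A minimal inference sketch (not part of the original card), assuming the standard `transformers` question-answering pipeline works with this checkpoint:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="V3RX2000/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```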
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2246 | 1.0 | 5533 | 1.1484 |
| 0.9433 | 2.0 | 11066 | 1.1294 |
| 0.7625 | 3.0 | 16599 | 1.1580 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
V3RX2000/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1580
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# GGODMODEL
|
{"tags": ["conversational"]}
|
VLRevolution/DialogGPT-small-GGODMODEL
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GGODMODEL
|
[
"# GGODMODEL"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GGODMODEL"
] |
text-generation
|
transformers
|
# Dumb bot
|
{"tags": ["conversational"]}
|
VMET/DialoGPT-small-dumbassbot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Dumb bot
|
[
"# Dumb bot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Dumb bot"
] |
text-generation
|
transformers
|
# Rick Sanchez DialoGPT Model
|
{"tags": ["conversational"]}
|
VaguelyCynical/DialoGPT-small-RickSanchez
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
feature-extraction
|
transformers
|
# 中文预训练Longformer模型 | Longformer_ZH with PyTorch
相比于Transformer的O(n^2)复杂度,Longformer提供了一种以线性复杂度处理最长4K字符级别文档序列的方法。Longformer Attention包括了标准的自注意力与全局注意力机制,方便模型更好地学习超长序列的信息。
Compared with the O(n^2) complexity of the standard Transformer, Longformer provides an efficient way to process document-level sequences of up to 4K characters with linear complexity. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task-motivated global attention.
我们注意到关于中文Longformer或超长序列任务的资源较少,因此在此开源了我们预训练的中文Longformer模型参数, 并提供了相应的加载方法,以及预训练脚本。
There are few resources for Chinese Longformer or long-sequence-level Chinese tasks, so we open-source our pretrained Longformer weights to help researchers.
## 加载模型 | Load the model
您可以使用谷歌云盘或百度网盘下载我们的模型
You could get Longformer_zh from Google Drive or Baidu Yun.
- Google Drive: https://drive.google.com/file/d/1IDJ4aVTfSFUQLIqCYBtoRpnfbgHPoxB4/view?usp=sharing
- 百度云: 链接:https://pan.baidu.com/s/1HaVDENx52I7ryPFpnQmq1w 提取码:y601
我们同样提供了Huggingface的自动下载
We also support automatic loading via HuggingFace Transformers.
```python
from Longformer_zh import LongformerZhForMaksedLM
LongformerZhForMaksedLM.from_pretrained('ValkyriaLenneth/longformer_zh')
```
## 注意事项 | Notice
- 直接使用 `transformers.LongformerModel.from_pretrained` 加载模型
- Please use `transformers.LongformerModel.from_pretrained` to load the model directly (see the loading sketch after this list)
- 以下内容已经被弃用
- The following notes are deprecated; please ignore them.
- 区别于英文原版Longformer, 中文Longformer的基础是Roberta_zh模型,其本质上属于 `Transformers.BertModel` 而非 `RobertaModel`, 因此无法使用原版代码直接加载。
- Unlike the original English Longformer, Longformer_zh is based on the Roberta_zh model, which is essentially a `Transformers.BertModel` rather than a `RobertaModel`, so it cannot be loaded directly with the original code.
- 我们提供了修改后的中文Longformer文件,您可以使用其加载参数。
- We provide a modified Longformer_zh class that you can use to load the parameters.
- 如果您想将此参数用于更多任务,请参考`Longformer_zh.py`替换Attention Layer.
- If you want to use these weights on more downstream tasks, please refer to `Longformer_zh.py` and replace the attention layer with the Longformer attention layer.
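Following the notice above, a minimal loading sketch using the standard `transformers` API; the tokenizer class is an assumption (the weights originate from the BERT-style Roberta_zh vocabulary), so substitute whatever tokenizer you actually use if the repository does not ship one:

```python
from transformers import BertTokenizer, LongformerModel
import torch

# Recommended (non-deprecated) loading path from the notice above.
tokenizer = BertTokenizer.from_pretrained("ValkyriaLenneth/longformer_zh")  # tokenizer class is an assumption
model = LongformerModel.from_pretrained("ValkyriaLenneth/longformer_zh")

inputs = tokenizer("这是一个用于超长中文文档的简单示例。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```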
## 关于预训练 | About Pretraining
- 我们的预训练语料来自 https://github.com/brightmart/nlp_chinese_corpus, 根据Longformer原文的设置,采用了多种语料混合的预训练数据。
- The pretraining corpus is from https://github.com/brightmart/nlp_chinese_corpus. Following the settings of the original Longformer paper, we use a mixture of four different Chinese corpora for pretraining.
- 我们的模型是基于Roberta_zh_mid (https://github.com/brightmart/roberta_zh),训练脚本参考了https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb
- Our model is based on Roberta_zh_mid (https://github.com/brightmart/roberta_zh); the pretraining script is adapted from https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb.
- 同时我们在原版基础上,引入了 `Whole-Word-Masking` 机制,以便更好地适应中文特性。
- On top of the original recipe, we introduce `Whole-Word-Masking` to better fit the characteristics of Chinese (a toy sketch follows this list).
- `Whole-Word-Masking`代码改写自TensorFlow版本的Roberta_zh,据我们所知是第一个开源的Pytorch版本WWM.
- Our WWM code is refactored from the TensorFlow version of Roberta_zh; as far as we know, it is the first open-source whole-word-masking implementation in PyTorch.
- 模型 `max_seq_length = 4096`, 在 4 * Titan RTX 上预训练3K steps 大概用时4天。
- The maximum sequence length is 4096, and pretraining for 3K steps took about 4 days on 4 * Titan RTX.
- 我们使用了 `Nvidia.Apex` 引入了混合精度训练,以加速预训练。
- We use mixed-precision training via `Nvidia.Apex` to speed up pretraining.
- 关于数据预处理, 我们采用 `Jieba` 分词与`JIONLP`进行数据清洗。
- For data preprocessing, we use `Jieba` for Chinese word segmentation and `JIONLP` for data cleaning.
- 更多细节可以参考我们的预训练脚本
- For more details, please check our pretraining scripts.
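To make the `Whole-Word-Masking` idea concrete, here is a toy sketch (not the actual pretraining code; `Jieba` segmentation and a 15% masking rate are assumptions for illustration): once a word is selected, all of its characters are masked together rather than independently.

```python
import random
import jieba

def whole_word_mask(text: str, mask_token: str = "[MASK]", mask_prob: float = 0.15) -> str:
    """Mask whole Jieba-segmented words: every character of a chosen word is replaced."""
    out = []
    for word in jieba.cut(text):
        if random.random() < mask_prob:
            out.append(mask_token * len(word))  # one mask per character of the word
        else:
            out.append(word)
    return "".join(out)

print(whole_word_mask("我们开源了中文预训练Longformer模型。"))
```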
## 效果测试 | Evaluation
### CCF Sentiment Analysis
- 由于中文超长文本级别任务稀缺,我们采用了CCF-Sentiment-Analysis任务进行测试
- Since open-source long-sequence-level Chinese NLP tasks are hard to come by, we use CCF-Sentiment-Analysis for evaluation.
|Model|Dev F|
|----|----|
|Bert|80.3|
|Bert-wwm-ext| 80.5|
|Roberta-mid|80.5|
|Roberta-large|81.25|
|Longformer_SC|79.37|
|Longformer_ZH|80.51|
### Pretraining BPC
- 我们提供了预训练BPC(bits-per-character), BPC越小,代表语言模型性能更优。可视作PPL.
- We also report pretraining BPC (bits-per-character); the lower the BPC, the better the language model. It can be read like perplexity (PPL).
|Model|BPC|
|---|---|
|Longformer before training| 14.78|
|Longformer after training| 3.10|
### CMRC(Chinese Machine Reading Comprehension)
|Model|F1|EM|
|---|---|---|
|Bert|85.87|64.90|
|Roberta|86.45|66.57|
|Longformer_zh|86.15|66.84|
### Chinese Coreference Resolution
|Model|Conll-F1|Precision|Recall|
|---|---|---|---|
|Bert|66.82|70.30|63.67|
|Roberta|67.77|69.28|66.32|
|Longformer_zh|67.81|70.13|65.64|
## 致谢
感谢东京工业大学 奥村·船越研究室 提供算力。
Thanks to the Okumura·Funakoshi Lab at Tokyo Institute of Technology, which provided the compute and the opportunity to finish this project.
|
{}
|
ValkyriaLenneth/longformer_zh
| null |
[
"transformers",
"pytorch",
"longformer",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #longformer #feature-extraction #endpoints_compatible #region-us
|
中文预训练Longformer模型 | Longformer\_ZH with PyTorch
===============================================
相比于Transformer的O(n^2)复杂度,Longformer提供了一种以线性复杂度处理最长4K字符级别文档序列的方法。Longformer Attention包括了标准的自注意力与全局注意力机制,方便模型更好地学习超长序列的信息。
Compared with the O(n^2) complexity of the standard Transformer, Longformer provides an efficient way to process document-level sequences of up to 4K characters with linear complexity. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task-motivated global attention.
我们注意到关于中文Longformer或超长序列任务的资源较少,因此在此开源了我们预训练的中文Longformer模型参数, 并提供了相应的加载方法,以及预训练脚本。
There are few resources for Chinese Longformer or long-sequence-level Chinese tasks, so we open-source our pretrained Longformer weights to help researchers.
加载模型 | Load the model
---------------------
您可以使用谷歌云盘或百度网盘下载我们的模型
You could get Longformer\_zh from Google Drive or Baidu Yun.
* Google Drive: URL
* 百度云: 链接:URL 提取码:y601
我们同样提供了Huggingface的自动下载
We also provide auto load with HuggingFace.Transformers.
注意事项 | Notice
-------------
* 直接使用 'transformers.LongformerModel.from\_pretrained' 加载模型
* Please use 'transformers.LongformerModel.from\_pretrained' to load the model directly
* 以下内容已经被弃用
* The following notes are deprecated; please ignore them.
* 区别于英文原版Longformer, 中文Longformer的基础是Roberta\_zh模型,其本质上属于 'Transformers.BertModel' 而非 'RobertaModel', 因此无法使用原版代码直接加载。
* Unlike the original English Longformer, Longformer\_zh is based on the Roberta\_zh model, which is essentially a 'Transformers.BertModel' rather than a 'RobertaModel', so it cannot be loaded directly with the original code.
* 我们提供了修改后的中文Longformer文件,您可以使用其加载参数。
* We provide modified Longformer\_zh class, you can use it directly to load the model.
* 如果您想将此参数用于更多任务,请参考'Longformer\_zh.py'替换Attention Layer.
* If you want to use our model on more down-stream tasks, please refer to 'Longformer\_zh.py' and replace Attention layer with Longformer Attention layer.
关于预训练 | About Pretraining
-------------------------
* 我们的预训练语料来自 URL, 根据Longformer原文的设置,采用了多种语料混合的预训练数据。
* The pretraining corpus is from URL. Following the settings of the original Longformer paper, we use a mixture of four different Chinese corpora for pretraining.
* 我们的模型是基于Roberta\_zh\_mid (URL,训练脚本参考了https://URL
* Our model is based on Roberta\_zh\_mid (URL); the pretraining script is adapted from URL.
* 同时我们在原版基础上,引入了 'Whole-Word-Masking' 机制,以便更好地适应中文特性。
* We introduce 'Whole-Word-Masking' method into pretraining for better fitting Chinese language.
* 'Whole-Word-Masking'代码改写自TensorFlow版本的Roberta\_zh,据我们所知是第一个开源的Pytorch版本WWM.
* Our WWM code is refactored from the TensorFlow version of Roberta\_zh; as far as we know, it is the first open-source whole-word-masking implementation in PyTorch.
* 模型 'max\_seq\_length = 4096', 在 4 \* Titan RTX 上预训练3K steps 大概用时4天。
* The maximum sequence length is 4096, and pretraining for 3K steps took about 4 days on 4 \* Titan RTX.
* 我们使用了 'Nvidia.Apex' 引入了混合精度训练,以加速预训练。
* We use 'Nvidia.Apex' to accelerate pretraining.
* 关于数据预处理, 我们采用 'Jieba' 分词与'JIONLP'进行数据清洗。
* We use the 'Jieba' Chinese tokenizer and 'JIONLP' for data cleaning.
* 更多细节可以参考我们的预训练脚本
* For more details, please check our pretraining scripts.
效果测试 | Evaluation
-----------------
### CCF Sentiment Analysis
* 由于中文超长文本级别任务稀缺,我们采用了CCF-Sentiment-Analysis任务进行测试
* Since open-source long-sequence-level Chinese NLP tasks are hard to come by, we use CCF-Sentiment-Analysis for evaluation.
### Pretraining BPC
* 我们提供了预训练BPC(bits-per-character), BPC越小,代表语言模型性能更优。可视作PPL.
* We also report pretraining BPC (bits-per-character); the lower the BPC, the better the language model. It can be read like perplexity (PPL).
### CMRC(Chinese Machine Reading Comprehension)
Model: Bert, F1: 85.87, EM: 64.90
Model: Roberta, F1: 86.45, EM: 66.57
Model: Longformer\_zh, F1: 86.15, EM: 66.84
### Chinese Coreference Resolution
致谢
--
感谢东京工业大学 奥村·船越研究室 提供算力。
Thanks to the Okumura·Funakoshi Lab at Tokyo Institute of Technology, which provided the compute and the opportunity to finish this project.
|
[
"### CCF Sentiment Analysis\n\n\n* 由于中文超长文本级别任务稀缺,我们采用了CCF-Sentiment-Analysis任务进行测试\n* Since it is hard to acquire open-sourced long sequence level chinese NLP task, we use CCF-Sentiment-Analysis for evaluation.",
"### Pretraining BPC\n\n\n* 我们提供了预训练BPC(bits-per-character), BPC越小,代表语言模型性能更优。可视作PPL.\n* We also provide BPC scores of pretraining, the lower BPC score, the better performance Langugage Model has. You can also treat it as PPL.",
"### CMRC(Chinese Machine Reading Comprehension)\n\n\nModel: Bert, F1: 85.87, EM: 64.90\nModel: Roberta, F1: 86.45, EM: 66.57\nModel: Longformer\\_zh, F1: 86.15, EM: 66.84",
"### Chinese Coreference Resolution\n\n\n\n致谢\n--\n\n\n感谢东京工业大学 奥村·船越研究室 提供算力。\n\n\nThanks Okumula·Funakoshi Lab from Tokyo Institute of Technology who provides the devices and oppotunity for me to finish this project."
] |
[
"TAGS\n#transformers #pytorch #longformer #feature-extraction #endpoints_compatible #region-us \n",
"### CCF Sentiment Analysis\n\n\n* 由于中文超长文本级别任务稀缺,我们采用了CCF-Sentiment-Analysis任务进行测试\n* Since it is hard to acquire open-sourced long sequence level chinese NLP task, we use CCF-Sentiment-Analysis for evaluation.",
"### Pretraining BPC\n\n\n* 我们提供了预训练BPC(bits-per-character), BPC越小,代表语言模型性能更优。可视作PPL.\n* We also provide BPC scores of pretraining, the lower BPC score, the better performance Langugage Model has. You can also treat it as PPL.",
"### CMRC(Chinese Machine Reading Comprehension)\n\n\nModel: Bert, F1: 85.87, EM: 64.90\nModel: Roberta, F1: 86.45, EM: 66.57\nModel: Longformer\\_zh, F1: 86.15, EM: 66.84",
"### Chinese Coreference Resolution\n\n\n\n致谢\n--\n\n\n感谢东京工业大学 奥村·船越研究室 提供算力。\n\n\nThanks Okumula·Funakoshi Lab from Tokyo Institute of Technology who provides the devices and oppotunity for me to finish this project."
] |
text-generation
|
transformers
|
# Dante (DMC V) DialoGPT Model
|
{"tags": ["conversational"]}
|
Vampiro/DialoGPT-small-dante_b
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Dante (DMC V) DialoGPT Model
|
[
"# Dante (DMC V) DialogGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Dante (DMC V) DialogGPT Model"
] |
text-generation
|
transformers
|
# Dante - Devil May Cry V DialoGPT Model
|
{"tags": ["conversational"]}
|
Vampiro/DialoGPT-small-dante_c
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Dante - Devil May Cry V DialoGPT Model
|
[
"# Dante - Devi May Cry V DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Dante - Devi May Cry V DialoGPT Model"
] |
text-generation
|
transformers
|
# Paraphrase-Generation
## Model description
T5 model for generating paraphrases of English sentences. Trained on the [Google PAWS](https://github.com/google-research-datasets/paws) dataset.
## How to use
Requires sentencepiece (`!pip install sentencepiece`). Both PyTorch and TF models are available.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Vamsi/T5_Paraphrase_Paws")
model = AutoModelForSeq2SeqLM.from_pretrained("Vamsi/T5_Paraphrase_Paws").to('cuda')
sentence = "This is something which i cannot understand at all"
text = "paraphrase: " + sentence + " </s>"
encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    max_length=256,
    do_sample=True,
    top_k=120,
    top_p=0.95,
    early_stopping=True,
    num_return_sequences=5,
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
For more reference on training your own T5 model or using this model, do check out [Paraphrase Generation](https://github.com/Vamsi995/Paraphrase-Generator).
|
{"language": "en", "tags": ["paraphrase-generation", "text-generation", "Conditional Generation"], "inference": false}
|
Vamsi/T5_Paraphrase_Paws
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"paraphrase-generation",
"text-generation",
"Conditional Generation",
"en",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #paraphrase-generation #text-generation #Conditional Generation #en #autotrain_compatible #has_space #text-generation-inference #region-us
|
# Paraphrase-Generation
## Model description
T5 Model for generating paraphrases of english sentences. Trained on the Google PAWS dataset.
## How to use
## Requires sentencepiece: # !pip install sentencepiece
PyTorch and TF models available
For more reference on training your own T5 model or using this model, do check out Paraphrase Generation.
|
[
"# Paraphrase-Generation\n",
"## Model description\n\nT5 Model for generating paraphrases of english sentences. Trained on the Google PAWS dataset.\n",
"## How to use\n## Requires sentencepiece: # !pip install sentencepiece\nPyTorch and TF models available\n\n\n\nFor more reference on training your own T5 model or using this model, do check out Paraphrase Generation."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #paraphrase-generation #text-generation #Conditional Generation #en #autotrain_compatible #has_space #text-generation-inference #region-us \n",
"# Paraphrase-Generation\n",
"## Model description\n\nT5 Model for generating paraphrases of english sentences. Trained on the Google PAWS dataset.\n",
"## How to use\n## Requires sentencepiece: # !pip install sentencepiece\nPyTorch and TF models available\n\n\n\nFor more reference on training your own T5 model or using this model, do check out Paraphrase Generation."
] |
question-answering
|
transformers
|
"hello"
|
{}
|
Vasanth/bert-base-uncased-qa-squad2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #endpoints_compatible #has_space #region-us
|
"hello"
|
[] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #endpoints_compatible #has_space #region-us \n"
] |
sentence-similarity
|
sentence-transformers
|
# Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever')
model = AutoModel.from_pretrained('Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8144 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
    "epochs": 3,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 2443,
    "weight_decay": 0.01
}
```
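Putting these pieces together, a minimal training sketch with the sentence-transformers `fit()` API; the base checkpoint name and the toy question-answer pairs are assumptions, and the batch size is reduced so the tiny example can actually fill a batch (the card used 16):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Hypothetical training pairs; the actual SQuAD2-derived data is not shown in this card.
train_examples = [
    InputExample(texts=["What is the capital of France?", "Paris is the capital of France."]),
    InputExample(texts=["Who wrote Hamlet?", "Hamlet was written by William Shakespeare."]),
]

model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")  # assumed base checkpoint
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    scheduler="WarmupLinear",
    warmup_steps=2443,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```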
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever
| null |
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever
This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8144 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8144 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8144 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamil-sentiment-distilbert
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tamilmixsentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0230
- Accuracy: 0.665
## Dataset Information
- text: Tamil-English code-mixed comment.
- label: list of the possible sentiments
- LABEL_0: "Positive",
- LABEL_1: "Negative",
- LABEL_2: "Mixed_feelings",
- LABEL_3: "unknown_state",
- LABEL_4: "not-Tamil"
## Intended uses & limitations
This model was created just for the classification task on the tamilmixsentiment dataset.
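A minimal inference sketch (not part of the original card), assuming the standard `transformers` text-classification pipeline works with this checkpoint; the label mapping follows the Dataset Information section above and the example comment is hypothetical:

```python
from transformers import pipeline

# Map the raw pipeline labels to the sentiment names listed above.
id2label = {
    "LABEL_0": "Positive",
    "LABEL_1": "Negative",
    "LABEL_2": "Mixed_feelings",
    "LABEL_3": "unknown_state",
    "LABEL_4": "not-Tamil",
}

classifier = pipeline("text-classification", model="Vasanth/tamil-sentiment-distilbert")
prediction = classifier("Padam vera level bro!")[0]  # hypothetical code-mixed comment
print(id2label.get(prediction["label"], prediction["label"]), prediction["score"])
```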
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0442 | 1.0 | 250 | 0.9883 | 0.674 |
| 0.9227 | 2.0 | 500 | 0.9782 | 0.673 |
| 0.7591 | 3.0 | 750 | 1.0230 | 0.665 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tamilmixsentiment"], "metrics": ["accuracy"], "model_index": [{"name": "tamil-sentiment-distilbert", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "tamilmixsentiment", "type": "tamilmixsentiment", "args": "default"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.665}}]}]}
|
Vasanth/tamil-sentiment-distilbert
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tamilmixsentiment",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-tamilmixsentiment #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
tamil-sentiment-distilbert
==========================
This model is a fine-tuned version of distilbert-base-cased on the tamilmixsentiment dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0230
* Accuracy: 0.665
Dataset Information
-------------------
* text: Tamil-English code-mixed comment.
* label: list of the possible sentiments
+ LABEL\_0: "Positive",
+ LABEL\_1: "Negative",
+ LABEL\_2: "Mixed\_feelings",
+ LABEL\_3: "unknown\_state",
+ LABEL\_4: "not-Tamil"
Intended uses & limitations
---------------------------
This model was created just for the classification task on the tamilmixsentiment dataset.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 0
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 0\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-tamilmixsentiment #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 0\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1628
- Accuracy: 0.9345
- F1: 0.9348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1674 | 1.0 | 250 | 0.1718 | 0.9265 | 0.9266 |
| 0.1091 | 2.0 | 500 | 0.1628 | 0.9345 | 0.9348 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": []}]}
|
Vassilis/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1628
* Accuracy: 0.9345
* F1: 0.9348
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Peter from Your Boyfriend Game.
|
{"tags": ["conversational"]}
|
Verge/Peterbot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Peter from Your Boyfriend Game.
|
[
"# Peter from Your Boyfriend Game."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Peter from Your Boyfriend Game."
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0626
- Precision: 0.9193
- Recall: 0.9311
- F1: 0.9251
- Accuracy: 0.9824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2393 | 1.0 | 878 | 0.0732 | 0.9052 | 0.9207 | 0.9129 | 0.9801 |
| 0.0569 | 2.0 | 1756 | 0.0626 | 0.9193 | 0.9311 | 0.9251 | 0.9824 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
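For a quick check of the checkpoint, a minimal inference sketch (assuming it loads with the standard `transformers` token-classification pipeline; the example sentence is arbitrary):

```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned checkpoint through the token-classification pipeline
ner = pipeline(
    "token-classification",
    model="Vibharkchauhan/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```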
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9192622045504749, "name": "Precision"}, {"type": "recall", "value": 0.9310884886452623, "name": "Recall"}, {"type": "f1", "value": 0.9251375534930251, "name": "F1"}, {"type": "accuracy", "value": 0.9823820039080496, "name": "Accuracy"}]}]}]}
|
Vibharkchauhan/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0626
* Precision: 0.9193
* Recall: 0.9311
* F1: 0.9251
* Accuracy: 0.9824
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# RoBERTa-base-finetuned-yelp-polarity
This is a [RoBERTa-base](https://huggingface.co/roberta-base) checkpoint fine-tuned on binary sentiment classification from [Yelp polarity](https://huggingface.co/nlp/viewer/?dataset=yelp_polarity).
It gets **98.08%** accuracy on the test set.
## Hyper-parameters
We used the following hyper-parameters to train the model on one GPU:
```python
num_train_epochs = 2.0
learning_rate = 1e-05
weight_decay = 0.0
adam_epsilon = 1e-08
max_grad_norm = 1.0
per_device_train_batch_size = 32
gradient_accumulation_steps = 1
warmup_steps = 3500
seed = 42
```
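A minimal inference sketch (assuming the checkpoint loads with the standard `transformers` text-classification pipeline; the review text is arbitrary):

```python
from transformers import pipeline

# Minimal sketch: binary Yelp-polarity sentiment with the fine-tuned checkpoint
classifier = pipeline(
    "text-classification",
    model="VictorSanh/roberta-base-finetuned-yelp-polarity",
)
print(classifier("The food was amazing and the staff were super friendly!"))
```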
|
{"language": "en", "datasets": ["yelp_polarity"]}
|
VictorSanh/roberta-base-finetuned-yelp-polarity
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:yelp_polarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #safetensors #roberta #text-classification #en #dataset-yelp_polarity #autotrain_compatible #endpoints_compatible #region-us
|
# RoBERTa-base-finetuned-yelp-polarity
This is a RoBERTa-base checkpoint fine-tuned on binary sentiment classification from Yelp polarity.
It gets 98.08% accuracy on the test set.
## Hyper-parameters
We used the following hyper-parameters to train the model on one GPU:
|
[
"# RoBERTa-base-finetuned-yelp-polarity\n\nThis is a RoBERTa-base checkpoint fine-tuned on binary sentiment classifcation from Yelp polarity.\nIt gets 98.08% accuracy on the test set.",
"## Hyper-parameters\n\nWe used the following hyper-parameters to train the model on one GPU:"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #roberta #text-classification #en #dataset-yelp_polarity #autotrain_compatible #endpoints_compatible #region-us \n",
"# RoBERTa-base-finetuned-yelp-polarity\n\nThis is a RoBERTa-base checkpoint fine-tuned on binary sentiment classifcation from Yelp polarity.\nIt gets 98.08% accuracy on the test set.",
"## Hyper-parameters\n\nWe used the following hyper-parameters to train the model on one GPU:"
] |
text-generation
|
transformers
|
# GPT-J 6B on Vietnamese News
Details will be available soon.
For more information, please contact anhduongng.1001@gmail.com (Dương) / imthanhlv@gmail.com (Thành) / nguyenvulebinh@gmail.com (Bình).
### How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("VietAI/gpt-j-6B-vietnamese-news")
model = AutoModelForCausalLM.from_pretrained("VietAI/gpt-j-6B-vietnamese-news", low_cpu_mem_usage=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
prompt = "Tiềm năng của trí tuệ nhân tạo" # your input sentence
input_ids = tokenizer(prompt, return_tensors="pt")['input_ids'].to(device)
gen_tokens = model.generate(
input_ids,
    max_length=100,  # maximum length (in tokens) of the generated text; adjust as needed
do_sample=True,
temperature=0.9,
top_k=20,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
|
{"language": ["vi"], "tags": ["pytorch", "causal-lm", "text-generation"]}
|
VietAI/gpt-j-6B-vietnamese-news
| null |
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"vi"
] |
TAGS
#transformers #pytorch #gptj #text-generation #causal-lm #vi #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# GPT-J 6B on Vietnamese News
Details will be available soon.
For more information, please contact anhduongng.1001@URL (Dương) / imthanhlv@URL (Thành) / nguyenvulebinh@URL (Bình).
### How to use
|
[
"# GPT-J 6B on Vietnamese News\n\nDetails will be available soon.\n\nFor more information, please contact anhduongng.1001@URL (Dương) / imthanhlv@URL (Thành) / nguyenvulebinh@URL (Bình).",
"### How to use"
] |
[
"TAGS\n#transformers #pytorch #gptj #text-generation #causal-lm #vi #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# GPT-J 6B on Vietnamese News\n\nDetails will be available soon.\n\nFor more information, please contact anhduongng.1001@URL (Dương) / imthanhlv@URL (Thành) / nguyenvulebinh@URL (Bình).",
"### How to use"
] |
text-generation
|
transformers
|
# GPT-Neo 1.3B on Vietnamese News
Details will be available soon.
For more information, please contact anhduongng.1001@gmail.com (Dương) / imthanhlv@gmail.com (Thành) / nguyenvulebinh@gmail.com (Bình).
### How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("VietAI/gpt-neo-1.3B-vietnamese-news")
model = AutoModelForCausalLM.from_pretrained("VietAI/gpt-neo-1.3B-vietnamese-news", low_cpu_mem_usage=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
prompt = "Tiềm năng của trí tuệ nhân tạo" # your input sentence
input_ids = tokenizer(prompt, return_tensors="pt")['input_ids'].to(device)
gen_tokens = model.generate(
input_ids,
    max_length=100,  # maximum length (in tokens) of the generated text; adjust as needed
do_sample=True,
temperature=0.9,
top_k=20,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
|
{"language": ["vi"], "tags": ["pytorch", "causal-lm", "gpt"]}
|
VietAI/gpt-neo-1.3B-vietnamese-news
| null |
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"causal-lm",
"gpt",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"vi"
] |
TAGS
#transformers #pytorch #gpt_neo #text-generation #causal-lm #gpt #vi #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# GPT-Neo 1.3B on Vietnamese News
Details will be available soon.
For more information, please contact anhduongng.1001@URL (Dương) / imthanhlv@URL (Thành) / nguyenvulebinh@URL (Bình).
### How to use
|
[
"# GPT-Neo 1.3B on Vietnamese News\n\nDetails will be available soon.\n\nFor more information, please contact anhduongng.1001@URL (Dương) / imthanhlv@URL (Thành) / nguyenvulebinh@URL (Bình).",
"### How to use"
] |
[
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #causal-lm #gpt #vi #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# GPT-Neo 1.3B on Vietnamese News\n\nDetails will be available soon.\n\nFor more information, please contact anhduongng.1001@URL (Dương) / imthanhlv@URL (Thành) / nguyenvulebinh@URL (Bình).",
"### How to use"
] |
null |
transformers
|
# Norwegian Electra

Trained on OSCAR + Wikipedia + OpenSubtitles + some other data I had, with the awesome power of TPUs (v3-8)
Use with caution. I have no downstream tasks in Norwegian to test on so I have no idea of its performance yet.
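In the meantime, a minimal feature-extraction sketch (assuming the discriminator checkpoint loads with the standard `transformers` auto classes; the Norwegian sentence is arbitrary):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Minimal sketch: use the discriminator as a feature extractor for Norwegian text
name = "ViktorAlm/electra-base-norwegian-uncased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Dette er en test.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(hidden_states.shape)
```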
# Model
## Electra: Pre-training Text Encoders as Discriminators Rather Than Generators
Kevin Clark and Minh-Thang Luong and Quoc V. Le and Christopher D. Manning
- https://openreview.net/pdf?id=r1xMH1BtvB
- https://github.com/google-research/electra
# Acknowledgments
### TensorFlow Research Cloud
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
- https://www.tensorflow.org/tfrc
#### OSCAR corpus
- https://oscar-corpus.com/
#### OPUS
- http://opus.nlpl.eu/
- http://www.opensubtitles.org/
|
{"language": false, "thumbnail": "https://i.imgur.com/QqSEC5I.png"}
|
ViktorAlm/electra-base-norwegian-uncased-discriminator
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"no",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"no"
] |
TAGS
#transformers #pytorch #tf #electra #pretraining #no #endpoints_compatible #region-us
|
# Norwegian Electra
!Image of norwegian electra
Trained on Oscar + wikipedia + opensubtitles + some other data I had with the awesome power of TPUs(V3-8)
Use with caution. I have no downstream tasks in Norwegian to test on so I have no idea of its performance yet.
# Model
## Electra: Pre-training Text Encoders as Discriminators Rather Than Generators
Kevin Clark and Minh-Thang Luong and Quoc V. Le and Christopher D. Manning
- URL
- URL
# Acknowledgments
### TensorFlow Research Cloud
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ️
- URL
#### OSCAR corpus
- URL
#### OPUS
- URL
- URL
|
[
"# Norwegian Electra\n!Image of norwegian electra\n\nTrained on Oscar + wikipedia + opensubtitles + some other data I had with the awesome power of TPUs(V3-8)\n\nUse with caution. I have no downstream tasks in Norwegian to test on so I have no idea of its performance yet.",
"# Model",
"## Electra: Pre-training Text Encoders as Discriminators Rather Than Generators\nKevin Clark and Minh-Thang Luong and Quoc V. Le and Christopher D. Manning\n- URL\n- URL",
"# Acknowledgments",
"### TensorFlow Research Cloud\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ️\n- URL",
"#### OSCAR corpus\n- URL",
"#### OPUS\n- URL\n- URL"
] |
[
"TAGS\n#transformers #pytorch #tf #electra #pretraining #no #endpoints_compatible #region-us \n",
"# Norwegian Electra\n!Image of norwegian electra\n\nTrained on Oscar + wikipedia + opensubtitles + some other data I had with the awesome power of TPUs(V3-8)\n\nUse with caution. I have no downstream tasks in Norwegian to test on so I have no idea of its performance yet.",
"# Model",
"## Electra: Pre-training Text Encoders as Discriminators Rather Than Generators\nKevin Clark and Minh-Thang Luong and Quoc V. Le and Christopher D. Manning\n- URL\n- URL",
"# Acknowledgments",
"### TensorFlow Research Cloud\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ️\n- URL",
"#### OSCAR corpus\n- URL",
"#### OPUS\n- URL\n- URL"
] |
fill-mask
|
transformers
|
# Albumin-15s
## Model description
This is a version of [Albert-base-v2](https://huggingface.co/albert-base-v2) fine-tuned to compare 15-unit-long (15s) aptamers and determine which one is more affine to the target protein Albumin.
The Albert model was pretrained on English; language shares many structural similarities with proteins and aptamers, which is why we fine-tuned it so the model learns embedded positioning for aptamers and can distinguish sequences better.
More information can be found in our [github]() and our iGEMs [wiki]().
## Intended uses & limitations
You can use the fine-tuned model for masked aptamer-pair sequence classification (predicting which of the two aptamers is more affine to the target protein Albumin), but it is mostly intended to be fine-tuned again on aptamers of a different length or on expanded datasets.
#### How to use
This model can be used to predict compared affinity together with a dataset preprocessing function that encodes data of the form (Sequence1, Sequence2, Label), where Label is a binary indicator of whether Sequence1 is more affine to the target protein Albumin.
```python
from transformers import AutoTokenizer, BertModel
mname = "Vilnius-Lithuania-iGEM/Albumin"
tokenizer = AutoTokenizer.from_pretrained(mname)  # tokenizer used to encode aptamer sequences
model = BertModel.from_pretrained(mname)
```
To predict batches of sequences you have to employ custom functions shown in [git/prediction.ipynb]()
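Since the notebook itself is not reproduced here, the snippet below is only a hypothetical sketch of how a batch of (Sequence1, Sequence2) pairs could be encoded and embedded; the real preprocessing and the classification head live in the linked notebook, and the aptamer strings are made up:

```python
import torch
from transformers import AutoTokenizer, BertModel

# Hypothetical sketch only: encode (Sequence1, Sequence2) pairs jointly and take the
# pooled embeddings, which a downstream head could turn into the binary Label.
mname = "Vilnius-Lithuania-iGEM/Albumin"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = BertModel.from_pretrained(mname)

pairs = [
    ("AGCTTGACGTAGCTA", "TTGACCGTAGGCTAA"),  # made-up 15s aptamer pair
    ("CCGTAAGCTTAGCAT", "GATCCGTTAAGCTAG"),  # made-up 15s aptamer pair
]
inputs = tokenizer([a for a, _ in pairs], [b for _, b in pairs],
                   padding=True, return_tensors="pt")
with torch.no_grad():
    pooled = model(**inputs).pooler_output  # shape: (batch, hidden_size)
print(pooled.shape)
```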
#### Limitations and bias
It seems that the fine-tuned Albert model is limited to about 90% accuracy at predicting which aptamer is more suitable for a target protein; Albert-large or an immense dataset of 15s aptamers could add a few percent of accuracy. However, the extrapolation case has not been studied, and we cannot confirm this model is state-of-the-art when one of the aptamers is extremely good (has almost maximum entropy to the Albumin).
## Eval results
accuracy : 0.8601
precision: 0.8515
recall : 0.8725
f1 : 0.8618
roc_auc : 0.9388
The score was calculated using sklearn.metrics.
|
{}
|
Vilnius-Lithuania-iGEM/Albumin
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
# Albumin-15s
## Model description
This is a version of Albert-base-v2 for 15's long aptamers comparison to determine which one is more affine to target protein Albumin.
The Albert model was pretrained in the English language, it has many similarities with language or proteins and aptamers which is why we had to fine-tune it to help the model learn embedded positioning for aptamers to be able to distinguish better sequences.
More information can be found in our [github]() and our iGEMs [wiki]().
## Intended uses & limitations
You can use the fine-tuned model for either masked aptamer pair sequence classification, which one is more affine for target protein Albumin, prediction, but it's mostly intended to be fine-tuned again on a different length aptamer or simply expanded datasets.
#### How to use
This model can be used to predict compared affinity with dataset preprocessing function which encodes the specific type of data (Sequence1, Sequence2, Label) where Label indicates binary if Sequence1 is more affine to target protein Albumin.
To predict batches of sequences you have to employ custom functions shown in [git/URL]()
#### Limitations and bias
It seems that fine-tuned Albert model for this kind of task has limition of 90 % accuracy predicting which aptamer is more suitable for a target protein, also Albert-large or immense dataset of 15s aptamer could increase accuracy few %, however extrapolation case is not studied and we cannot confirm this model is state-of-The-art when one of aptamers is SUPER good (has almost maximum entropy to the Albumin).
## Eval results
accuracy : 0.8601
precision: 0.8515
recall : 0.8725
f1 : 0.8618
roc_auc : 0.9388
The score was calculated using sklearn.metrics.
|
[
"# Albumin-15s",
"## Model description\n\nThis is a version of Albert-base-v2 for 15's long aptamers comparison to determine which one is more affine to target protein Albumin.\n\nThe Albert model was pretrained in the English language, it has many similarities with language or proteins and aptamers which is why we had to fine-tune it to help the model learn embedded positioning for aptamers to be able to distinguish better sequences.\n\nMore information can be found in our [github]() and our iGEMs [wiki]().",
"## Intended uses & limitations\n\nYou can use the fine-tuned model for either masked aptamer pair sequence classification, which one is more affine for target protein Albumin, prediction, but it's mostly intended to be fine-tuned again on a different length aptamer or simply expanded datasets.",
"#### How to use\n\nThis model can be used to predict compared affinity with dataset preprocessing function which encodes the specific type of data (Sequence1, Sequence2, Label) where Label indicates binary if Sequence1 is more affine to target protein Albumin.\n\n\n\nTo predict batches of sequences you have to employ custom functions shown in [git/URL]()",
"#### Limitations and bias\n\nIt seems that fine-tuned Albert model for this kind of task has limition of 90 % accuracy predicting which aptamer is more suitable for a target protein, also Albert-large or immense dataset of 15s aptamer could increase accuracy few %, however extrapolation case is not studied and we cannot confirm this model is state-of-The-art when one of aptamers is SUPER good (has almost maximum entropy to the Albumin).",
"## Eval results\n\naccuracy : 0.8601\n\nprecision: 0.8515\n\nrecall : 0.8725\n\nf1 : 0.8618\n\nroc_auc : 0.9388\n\nThe score was calculated using sklearn.metrics."
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# Albumin-15s",
"## Model description\n\nThis is a version of Albert-base-v2 for 15's long aptamers comparison to determine which one is more affine to target protein Albumin.\n\nThe Albert model was pretrained in the English language, it has many similarities with language or proteins and aptamers which is why we had to fine-tune it to help the model learn embedded positioning for aptamers to be able to distinguish better sequences.\n\nMore information can be found in our [github]() and our iGEMs [wiki]().",
"## Intended uses & limitations\n\nYou can use the fine-tuned model for either masked aptamer pair sequence classification, which one is more affine for target protein Albumin, prediction, but it's mostly intended to be fine-tuned again on a different length aptamer or simply expanded datasets.",
"#### How to use\n\nThis model can be used to predict compared affinity with dataset preprocessing function which encodes the specific type of data (Sequence1, Sequence2, Label) where Label indicates binary if Sequence1 is more affine to target protein Albumin.\n\n\n\nTo predict batches of sequences you have to employ custom functions shown in [git/URL]()",
"#### Limitations and bias\n\nIt seems that fine-tuned Albert model for this kind of task has limition of 90 % accuracy predicting which aptamer is more suitable for a target protein, also Albert-large or immense dataset of 15s aptamer could increase accuracy few %, however extrapolation case is not studied and we cannot confirm this model is state-of-The-art when one of aptamers is SUPER good (has almost maximum entropy to the Albumin).",
"## Eval results\n\naccuracy : 0.8601\n\nprecision: 0.8515\n\nrecall : 0.8725\n\nf1 : 0.8618\n\nroc_auc : 0.9388\n\nThe score was calculated using sklearn.metrics."
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
VincentButterfield/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
null |
pytorch
|
This model was developed for KARA.
This model is:
- A sentiment analysis tool for comments from HR surveys
- Trained to be used in ENGLISH (comments must be translated)
- Specialized for comments between 10 and 512 characters
This model is not:
- Usable to detect hate speech or a suicide note
Labels:
- Label_0 = Negative
- Label_1 = Positive
version 1.1.0
Performance on the HRM dataset: 91.5% accuracy
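A minimal inference sketch (assuming the checkpoint loads with the standard `transformers` text-classification pipeline, as the hosted widget examples suggest; the example comment is adapted from the widget):

```python
from transformers import pipeline

# Minimal sketch: Label_1 = positive, Label_0 = negative (see the label list above)
classifier = pipeline(
    "text-classification",
    model="VincentC12/sentiment_analysis_kara",
)
print(classifier("Thank you for listening to the recommendations of the telephone team for teleworking."))
```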
|
{"language": ["en"], "library_name": "pytorch", "tags": ["sentiment-analysis"], "metrics": ["negative", "positive"], "widget": [{"text": "Thank you for listening to the recommendations of the telephone team for teleworking. we have a strong expertise in this field and accurate listening to Our management!!!!", "example_title": "Exemple positif"}, {"text": "working conditions and wages are less than average more part of the time it is not a hierarchical system Our opinion counts", "example_title": "Exemple n\u00e9gatif"}]}
|
VincentC12/sentiment_analysis_kara
| null |
[
"pytorch",
"distilbert",
"sentiment-analysis",
"en",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#pytorch #distilbert #sentiment-analysis #en #region-us
|
This model was developed for KARA.
This model is:
- A sentiment analysis tool for comments from HR surveys
- Trained to be used in ENGLISH (comments must be translated)
- Specialized for comments between 10 and 512 characters
This model is not:
- Usable to detect hate speech or a suicide note
Labels:
- Label_0 = Negative
- Label_1 = Positive
version 1.1.0
Performance on the HRM dataset: 91.5% accuracy
|
[] |
[
"TAGS\n#pytorch #distilbert #sentiment-analysis #en #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7809
- Matthews Correlation: 0.5286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5299 | 1.0 | 535 | 0.5040 | 0.4383 |
| 0.3472 | 2.0 | 1070 | 0.5284 | 0.4911 |
| 0.2333 | 3.0 | 1605 | 0.6633 | 0.5091 |
| 0.1733 | 4.0 | 2140 | 0.7809 | 0.5286 |
| 0.1255 | 5.0 | 2675 | 0.8894 | 0.5282 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
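A minimal inference sketch (assuming the checkpoint loads with the standard `transformers` text-classification pipeline; unless an `id2label` mapping was saved, the outputs will use the generic `LABEL_0`/`LABEL_1` names):

```python
from transformers import pipeline

# Minimal sketch: score grammatical acceptability (CoLA) with the fine-tuned checkpoint
classifier = pipeline(
    "text-classification",
    model="VirenS13117/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was read by the whole class."))
```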
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5286324175580216, "name": "Matthews Correlation"}]}]}]}
|
VirenS13117/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7809
* Matthews Correlation: 0.5286
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
VishalArun/DialoGPT-medium-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
image-classification
| null |
# VAN-Base
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
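As a rough, unofficial sketch of the idea (the actual implementation is in the linked repository), an LKA-style block can be written in PyTorch as a depth-wise convolution, a dilated depth-wise convolution and a point-wise convolution whose output gates the input element-wise; the kernel sizes below are illustrative choices based on the paper:

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Sketch of an LKA-style block: decompose a large kernel into depth-wise,
    dilated depth-wise and point-wise convolutions, then use the result as an
    attention map over the input."""
    def __init__(self, dim: int):
        super().__init__()
        self.conv_dw = nn.Conv2d(dim, dim, kernel_size=5, padding=2, groups=dim)
        self.conv_dw_dilated = nn.Conv2d(dim, dim, kernel_size=7, padding=9,
                                         dilation=3, groups=dim)
        self.conv_pw = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.conv_pw(self.conv_dw_dilated(self.conv_dw(x)))
        return attn * x  # element-wise gating: spatial and channel adaptability

x = torch.randn(1, 64, 56, 56)
print(LargeKernelAttention(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```

Every convolution here is depth-wise or 1x1, so the block approximates a very large receptive field at a fraction of the cost of global self-attention.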
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base  | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
|
Visual-Attention-Network/VAN-Base-original
| null |
[
"image-classification",
"dataset:imagenet",
"arxiv:2202.09741",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.09741"
] |
[] |
TAGS
#image-classification #dataset-imagenet #arxiv-2202.09741 #license-apache-2.0 #region-us
|
VAN-Base
========
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Visual Attention Network and first released in here.
Description
-----------
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
Evaluation Results
------------------
### BibTeX entry and citation info
|
[
"### BibTeX entry and citation info"
] |
[
"TAGS\n#image-classification #dataset-imagenet #arxiv-2202.09741 #license-apache-2.0 #region-us \n",
"### BibTeX entry and citation info"
] |
image-classification
| null |
# VAN-Large
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base  | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
|
Visual-Attention-Network/VAN-Large-original
| null |
[
"image-classification",
"dataset:imagenet",
"arxiv:2202.09741",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.09741"
] |
[] |
TAGS
#image-classification #dataset-imagenet #arxiv-2202.09741 #license-apache-2.0 #region-us
|
VAN-Large
=========
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Visual Attention Network and first released in here.
Description
-----------
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
Evaluation Results
------------------
### BibTeX entry and citation info
|
[
"### BibTeX entry and citation info"
] |
[
"TAGS\n#image-classification #dataset-imagenet #arxiv-2202.09741 #license-apache-2.0 #region-us \n",
"### BibTeX entry and citation info"
] |
image-classification
| null |
# VAN-Small
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base  | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
|
Visual-Attention-Network/VAN-Small-original
| null |
[
"image-classification",
"dataset:imagenet",
"arxiv:2202.09741",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.09741"
] |
[] |
TAGS
#image-classification #dataset-imagenet #arxiv-2202.09741 #license-apache-2.0 #region-us
|
VAN-Small
=========
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Visual Attention Network and first released in here.
Description
-----------
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
Evaluation Results
------------------
### BibTeX entry and citation info
|
[
"### BibTeX entry and citation info"
] |
[
"TAGS\n#image-classification #dataset-imagenet #arxiv-2202.09741 #license-apache-2.0 #region-us \n",
"### BibTeX entry and citation info"
] |
image-classification
| null |
# VAN-Tiny
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base  | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
|
Visual-Attention-Network/VAN-Tiny-original
| null |
[
"image-classification",
"dataset:imagenet",
"arxiv:2202.09741",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.09741"
] |
[] |
TAGS
#image-classification #dataset-imagenet #arxiv-2202.09741 #license-apache-2.0 #region-us
|
VAN-Tiny
========
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Visual Attention Network and first released in here.
Description
-----------
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
Evaluation Results
------------------
### BibTeX entry and citation info
|
[
"### BibTeX entry and citation info"
] |
[
"TAGS\n#image-classification #dataset-imagenet #arxiv-2202.09741 #license-apache-2.0 #region-us \n",
"### BibTeX entry and citation info"
] |
text-generation
|
transformers
|
# Rick Sanchez DialoGPT Model
|
{"tags": ["conversational"]}
|
Vitafeu/DialoGPT-medium-ricksanchez
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model
|
[
"# Rick Sanchez DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] |
null | null |
This is to test the common sense reasoning of a GPT-2 model.To assess how intelligent or it is adapted to this datasets which requires not only big models but also a little common sense also.
|
{}
|
Vivek/flax-gpt2-common-sense-reasoning
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
This is to test the common sense reasoning of a GPT-2 model.To assess how intelligent or it is adapted to this datasets which requires not only big models but also a little common sense also.
|
[] |
[
"TAGS\n#region-us \n"
] |
null |
transformers
|
This is to test the common sense reasoning of a GPT-2 model.To assess how intelligent or it is adapted to this datasets which requires not only big models but also a little common sense also.
|
{}
|
Vivek/gpt2-common-sense-reasoning
| null |
[
"transformers",
"jax",
"tensorboard",
"gpt2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #gpt2 #endpoints_compatible #text-generation-inference #region-us
|
This is to test the common sense reasoning of a GPT-2 model.To assess how intelligent or it is adapted to this datasets which requires not only big models but also a little common sense also.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #gpt2 #endpoints_compatible #text-generation-inference #region-us \n"
] |
sentence-similarity
|
transformers
|
#### Table of contents
1. [Introduction](#introduction)
2. [Pretrain model](#models)
3. [Using SimeCSE_Vietnamese with `sentence-transformers`](#sentences-transformers)
- [Installation](#install1)
- [Example usage](#usage1)
4. [Using SimeCSE_Vietnamese with `transformers`](#transformers)
- [Installation](#install2)
- [Example usage](#usage2)
# <a name="introduction"></a> SimeCSE_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese
Pre-trained SimeCSE_Vietnamese models are the state of the art for sentence embeddings in Vietnamese:
- SimeCSE_Vietnamese pre-training approach is based on [SimCSE](https://arxiv.org/abs/2104.08821) which optimizes the SimeCSE_Vietnamese pre-training procedure for more robust performance.
- SimeCSE_Vietnamese encode input sentences using a pre-trained language model such as [PhoBert](https://www.aclweb.org/anthology/2020.findings-emnlp.92/)
- SimeCSE_Vietnamese works with both unlabeled and labeled data.
## Pre-trained models <a name="models"></a>
Model | #params | Arch.
---|---|---
[`VoVanPhuc/sup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) | 135M | base
[`VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base) | 135M | base
## <a name="sentences-transformers"></a> Using SimeCSE_Vietnamese with `sentence-transformers`
### Installation <a name="install1"></a>
- Install `sentence-transformers`:
- `pip install -U sentence-transformers`
- Install `pyvi` to word segment:
- `pip install pyvi`
### Example usage <a name="usage1"></a>
```python
from sentence_transformers import SentenceTransformer
from pyvi.ViTokenizer import tokenize
model = SentenceTransformer('VoVanPhuc/sup-SimCSE-VietNamese-phobert-base')
sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.',
'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.',
'Bắc Giang tăng khả năng điều trị và xét nghiệm.',
'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.',
'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.',
'20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.',
'Thái Lan thua giao hữu trước vòng loại World Cup.',
'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam',
'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.',
'Bắn chết người trong cuộc rượt đuổi trên sông.'
]
sentences = [tokenize(sentence) for sentence in sentences]
embeddings = model.encode(sentences)
```
## <a name="transformers"></a> Using SimeCSE_Vietnamese with `transformers`
### Installation <a name="install2"></a>
- Install `transformers`:
- `pip install -U transformers`
- Install `pyvi` to word segment:
- `pip install pyvi`
### Example usage <a name="usage2"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
from pyvi.ViTokenizer import tokenize
PhobertTokenizer = AutoTokenizer.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")
model = AutoModel.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")
sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.',
'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.',
'Bắc Giang tăng khả năng điều trị và xét nghiệm.',
'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.',
'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.',
'20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.',
'Thái Lan thua giao hữu trước vòng loại World Cup.',
'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam',
'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.',
'Bắn chết người trong cuộc rượt đuổi trên sông.'
]
sentences = [tokenize(sentence) for sentence in sentences]
inputs = PhobertTokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output
```
## Quick Start
[Open In Colab](https://colab.research.google.com/drive/12__EXJoQYHe9nhi4aXLTf9idtXT8yr7H?usp=sharing)
## Citation
@article{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
journal={arXiv preprint arXiv:2104.08821},
year={2021}
}
@inproceedings{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
year = {2020},
pages = {1037--1042}
}
|
{"language": ["vi"], "pipeline_tag": "sentence-similarity"}
|
VoVanPhuc/sup-SimCSE-VietNamese-phobert-base
| null |
[
"transformers",
"pytorch",
"roberta",
"sentence-similarity",
"vi",
"arxiv:2104.08821",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08821"
] |
[
"vi"
] |
TAGS
#transformers #pytorch #roberta #sentence-similarity #vi #arxiv-2104.08821 #endpoints_compatible #has_space #region-us
|
#### Table of contents
1. Introduction
2. Pretrain model
3. Using SimeCSE\_Vietnamese with 'sentences-transformers'
* Installation
* Example usage
4. Using SimeCSE\_Vietnamese with 'transformers'
* Installation
* Example usage
SimeCSE\_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese
========================================================================================
Pre-trained SimeCSE\_Vietnamese models are the state-of-the-art of Sentence Embeddings with Vietnamese :
* SimeCSE\_Vietnamese pre-training approach is based on SimCSE which optimizes the SimeCSE\_Vietnamese pre-training procedure for more robust performance.
* SimeCSE\_Vietnamese encode input sentences using a pre-trained language model such as PhoBert
* SimeCSE\_Vietnamese works with both unlabeled and labeled data.
Pre-trained models
------------------
Model: 'VoVanPhuc/sup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base
Model: 'VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base
Using SimeCSE\_Vietnamese with 'sentences-transformers'
--------------------------------------------------------
### Installation
* Install 'sentence-transformers':
+ 'pip install -U sentence-transformers'
* Install 'pyvi' to word segment:
+ 'pip install pyvi'
### Example usage
Using SimeCSE\_Vietnamese with 'transformers'
----------------------------------------------
### Installation
* Install 'transformers':
+ 'pip install -U transformers'
* Install 'pyvi' to word segment:
+ 'pip install pyvi'
### Example usage
Quick Start
-----------
Open In Colab
@article{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
journal={arXiv preprint arXiv:2104.08821},
year={2021}
}
```
@inproceedings{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
year = {2020},
pages = {1037--1042}
}
```
|
[
"#### Table of contents\n\n\n1. Introduction\n2. Pretrain model\n3. Using SimeCSE\\_Vietnamese with 'sentences-transformers'\n\t* Installation\n\t* Example usage\n4. Using SimeCSE\\_Vietnamese with 'transformers'\n\t* Installation\n\t* Example usage\n\n\n SimeCSE\\_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese\n========================================================================================\n\n\nPre-trained SimeCSE\\_Vietnamese models are the state-of-the-art of Sentence Embeddings with Vietnamese :\n\n\n* SimeCSE\\_Vietnamese pre-training approach is based on SimCSE which optimizes the SimeCSE\\_Vietnamese pre-training procedure for more robust performance.\n* SimeCSE\\_Vietnamese encode input sentences using a pre-trained language model such as PhoBert\n* SimeCSE\\_Vietnamese works with both unlabeled and labeled data.\n\n\nPre-trained models\n------------------\n\n\nModel: 'VoVanPhuc/sup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base\nModel: 'VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base\n\n\n Using SimeCSE\\_Vietnamese with 'sentences-transformers'\n--------------------------------------------------------",
"### Installation\n\n\n* Install 'sentence-transformers':\n\n\n\t+ 'pip install -U sentence-transformers'\n* Install 'pyvi' to word segment:\n\n\n\t+ 'pip install pyvi'",
"### Example usage\n\n\n Using SimeCSE\\_Vietnamese with 'transformers'\n----------------------------------------------",
"### Installation\n\n\n* Install 'transformers':\n\n\n\t+ 'pip install -U transformers'\n* Install 'pyvi' to word segment:\n\n\n\t+ 'pip install pyvi'",
"### Example usage\n\n\nQuick Start\n-----------\n\n\nOpen In Colab\n\n\n@article{gao2021simcse,\ntitle={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},\nauthor={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},\njournal={arXiv preprint arXiv:2104.08821},\nyear={2021}\n}\n\n\n\n```\n@inproceedings{phobert,\ntitle = {{PhoBERT: Pre-trained language models for Vietnamese}},\nauthor = {Dat Quoc Nguyen and Anh Tuan Nguyen},\nbooktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},\nyear = {2020},\npages = {1037--1042}\n}\n\n```"
] |
[
"TAGS\n#transformers #pytorch #roberta #sentence-similarity #vi #arxiv-2104.08821 #endpoints_compatible #has_space #region-us \n",
"#### Table of contents\n\n\n1. Introduction\n2. Pretrain model\n3. Using SimeCSE\\_Vietnamese with 'sentences-transformers'\n\t* Installation\n\t* Example usage\n4. Using SimeCSE\\_Vietnamese with 'transformers'\n\t* Installation\n\t* Example usage\n\n\n SimeCSE\\_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese\n========================================================================================\n\n\nPre-trained SimeCSE\\_Vietnamese models are the state-of-the-art of Sentence Embeddings with Vietnamese :\n\n\n* SimeCSE\\_Vietnamese pre-training approach is based on SimCSE which optimizes the SimeCSE\\_Vietnamese pre-training procedure for more robust performance.\n* SimeCSE\\_Vietnamese encode input sentences using a pre-trained language model such as PhoBert\n* SimeCSE\\_Vietnamese works with both unlabeled and labeled data.\n\n\nPre-trained models\n------------------\n\n\nModel: 'VoVanPhuc/sup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base\nModel: 'VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base\n\n\n Using SimeCSE\\_Vietnamese with 'sentences-transformers'\n--------------------------------------------------------",
"### Installation\n\n\n* Install 'sentence-transformers':\n\n\n\t+ 'pip install -U sentence-transformers'\n* Install 'pyvi' to word segment:\n\n\n\t+ 'pip install pyvi'",
"### Example usage\n\n\n Using SimeCSE\\_Vietnamese with 'transformers'\n----------------------------------------------",
"### Installation\n\n\n* Install 'transformers':\n\n\n\t+ 'pip install -U transformers'\n* Install 'pyvi' to word segment:\n\n\n\t+ 'pip install pyvi'",
"### Example usage\n\n\nQuick Start\n-----------\n\n\nOpen In Colab\n\n\n@article{gao2021simcse,\ntitle={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},\nauthor={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},\njournal={arXiv preprint arXiv:2104.08821},\nyear={2021}\n}\n\n\n\n```\n@inproceedings{phobert,\ntitle = {{PhoBERT: Pre-trained language models for Vietnamese}},\nauthor = {Dat Quoc Nguyen and Anh Tuan Nguyen},\nbooktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},\nyear = {2020},\npages = {1037--1042}\n}\n\n```"
] |
null |
transformers
|
#### Table of contents
1. [Introduction](#introduction)
2. [Pretrain model](#models)
3. [Using SimeCSE_Vietnamese with `sentences-transformers`](#sentences-transformers)
- [Installation](#install1)
- [Example usage](#usage1)
4. [Using SimeCSE_Vietnamese with `transformers`](#transformers)
- [Installation](#install2)
- [Example usage](#usage2)
# <a name="introduction"></a> SimeCSE_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese
Pre-trained SimeCSE_Vietnamese models are the state of the art for sentence embeddings in Vietnamese:
- The SimeCSE_Vietnamese pre-training approach is based on [SimCSE](https://arxiv.org/abs/2104.08821), which optimizes the SimeCSE_Vietnamese pre-training procedure for more robust performance.
- SimeCSE_Vietnamese encodes input sentences using a pre-trained language model such as [PhoBert](https://www.aclweb.org/anthology/2020.findings-emnlp.92/)
- SimeCSE_Vietnamese works with both unlabeled and labeled data.
## Pre-trained models <a name="models"></a>
Model | #params | Arch.
---|---|---
[`VoVanPhuc/sup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) | 135M | base
[`VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base) | 135M | base
## <a name="sentences-transformers"></a> Using SimeCSE_Vietnamese with `sentences-transformers`
### Installation <a name="install1"></a>
- Install `sentence-transformers`:
- `pip install -U sentence-transformers`
- Install `pyvi` to word segment:
- `pip install pyvi`
### Example usage <a name="usage1"></a>
```python
from sentence_transformers import SentenceTransformer
from pyvi.ViTokenizer import tokenize
model = SentenceTransformer('VoVanPhuc/sup-SimCSE-VietNamese-phobert-base')
sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.',
'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.',
'Bắc Giang tăng khả năng điều trị và xét nghiệm.',
'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.',
'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.',
'20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.',
'Thái Lan thua giao hữu trước vòng loại World Cup.',
'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam',
'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.',
'Bắn chết người trong cuộc rượt đuổi trên sông.'
]
sentences = [tokenize(sentence) for sentence in sentences]
embeddings = model.encode(sentences)
```
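As a hedged follow-up sketch (not part of the original card), the resulting embeddings can be compared with cosine similarity using `util.cos_sim` from recent versions of `sentence-transformers`:
```python
from sentence_transformers import util

# e.g. compare the charity headline (index 1) with the transparency statement (index 4)
score = util.cos_sim(embeddings[1], embeddings[4])
print(score)
```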
## <a name="sentences-transformers"></a> Using SimeCSE_Vietnamese with `transformers`
### Installation <a name="install2"></a>
- Install `transformers`:
- `pip install -U transformers`
- Install `pyvi` to word segment:
- `pip install pyvi`
### Example usage <a name="usage2"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
from pyvi.ViTokenizer import tokenize
PhobertTokenizer = AutoTokenizer.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")
model = AutoModel.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")
sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.',
'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.',
'Bắc Giang tăng khả năng điều trị và xét nghiệm.',
'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.',
'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.',
'20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.',
'Thái Lan thua giao hữu trước vòng loại World Cup.',
'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam',
'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.',
'Bắn chết người trong cuộc rượt đuổi trên sông.'
]
sentences = [tokenize(sentence) for sentence in sentences]
inputs = PhobertTokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output
```
## Quick Start
[Open In Colab](https://colab.research.google.com/drive/12__EXJoQYHe9nhi4aXLTf9idtXT8yr7H?usp=sharing)
## Citation
@article{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
journal={arXiv preprint arXiv:2104.08821},
year={2021}
}
@inproceedings{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
year = {2020},
pages = {1037--1042}
}
|
{}
|
VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base
| null |
[
"transformers",
"pytorch",
"roberta",
"arxiv:2104.08821",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08821"
] |
[] |
TAGS
#transformers #pytorch #roberta #arxiv-2104.08821 #endpoints_compatible #region-us
|
#### Table of contents
1. Introduction
2. Pretrain model
3. Using SimeCSE\_Vietnamese with 'sentences-transformers'
* Installation
* Example usage
4. Using SimeCSE\_Vietnamese with 'transformers'
* Installation
* Example usage
SimeCSE\_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese
========================================================================================
Pre-trained SimeCSE\_Vietnamese models are the state-of-the-art of Sentence Embeddings with Vietnamese :
* SimeCSE\_Vietnamese pre-training approach is based on SimCSE which optimizes the SimeCSE\_Vietnamese pre-training procedure for more robust performance.
* SimeCSE\_Vietnamese encode input sentences using a pre-trained language model such as PhoBert
* SimeCSE\_Vietnamese works with both unlabeled and labeled data.
Pre-trained models
------------------
Model: 'VoVanPhuc/sup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base
Model: 'VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base
Using SimeCSE\_Vietnamese with 'sentences-transformers'
--------------------------------------------------------
### Installation
* Install 'sentence-transformers':
+ 'pip install -U sentence-transformers'
* Install 'pyvi' to word segment:
+ 'pip install pyvi'
### Example usage
Using SimeCSE\_Vietnamese with 'transformers'
----------------------------------------------
### Installation
* Install 'transformers':
+ 'pip install -U transformers'
* Install 'pyvi' to word segment:
+ 'pip install pyvi'
### Example usage
Quick Start
-----------
Open In Colab
@article{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
journal={arXiv preprint arXiv:2104.08821},
year={2021}
}
```
@inproceedings{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
year = {2020},
pages = {1037--1042}
}
```
|
[
"#### Table of contents\n\n\n1. Introduction\n2. Pretrain model\n3. Using SimeCSE\\_Vietnamese with 'sentences-transformers'\n\t* Installation\n\t* Example usage\n4. Using SimeCSE\\_Vietnamese with 'transformers'\n\t* Installation\n\t* Example usage\n\n\n SimeCSE\\_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese\n========================================================================================\n\n\nPre-trained SimeCSE\\_Vietnamese models are the state-of-the-art of Sentence Embeddings with Vietnamese :\n\n\n* SimeCSE\\_Vietnamese pre-training approach is based on SimCSE which optimizes the SimeCSE\\_Vietnamese pre-training procedure for more robust performance.\n* SimeCSE\\_Vietnamese encode input sentences using a pre-trained language model such as PhoBert\n* SimeCSE\\_Vietnamese works with both unlabeled and labeled data.\n\n\nPre-trained models\n------------------\n\n\nModel: 'VoVanPhuc/sup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base\nModel: 'VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base\n\n\n Using SimeCSE\\_Vietnamese with 'sentences-transformers'\n--------------------------------------------------------",
"### Installation\n\n\n* Install 'sentence-transformers':\n\n\n\t+ 'pip install -U sentence-transformers'\n* Install 'pyvi' to word segment:\n\n\n\t+ 'pip install pyvi'",
"### Example usage\n\n\n Using SimeCSE\\_Vietnamese with 'transformers'\n----------------------------------------------",
"### Installation\n\n\n* Install 'transformers':\n\n\n\t+ 'pip install -U transformers'\n* Install 'pyvi' to word segment:\n\n\n\t+ 'pip install pyvi'",
"### Example usage\n\n\nQuick Start\n-----------\n\n\nOpen In Colab\n\n\n@article{gao2021simcse,\ntitle={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},\nauthor={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},\njournal={arXiv preprint arXiv:2104.08821},\nyear={2021}\n}\n\n\n\n```\n@inproceedings{phobert,\ntitle = {{PhoBERT: Pre-trained language models for Vietnamese}},\nauthor = {Dat Quoc Nguyen and Anh Tuan Nguyen},\nbooktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},\nyear = {2020},\npages = {1037--1042}\n}\n\n```"
] |
[
"TAGS\n#transformers #pytorch #roberta #arxiv-2104.08821 #endpoints_compatible #region-us \n",
"#### Table of contents\n\n\n1. Introduction\n2. Pretrain model\n3. Using SimeCSE\\_Vietnamese with 'sentences-transformers'\n\t* Installation\n\t* Example usage\n4. Using SimeCSE\\_Vietnamese with 'transformers'\n\t* Installation\n\t* Example usage\n\n\n SimeCSE\\_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese\n========================================================================================\n\n\nPre-trained SimeCSE\\_Vietnamese models are the state-of-the-art of Sentence Embeddings with Vietnamese :\n\n\n* SimeCSE\\_Vietnamese pre-training approach is based on SimCSE which optimizes the SimeCSE\\_Vietnamese pre-training procedure for more robust performance.\n* SimeCSE\\_Vietnamese encode input sentences using a pre-trained language model such as PhoBert\n* SimeCSE\\_Vietnamese works with both unlabeled and labeled data.\n\n\nPre-trained models\n------------------\n\n\nModel: 'VoVanPhuc/sup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base\nModel: 'VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base', #params: 135M, Arch.: base\n\n\n Using SimeCSE\\_Vietnamese with 'sentences-transformers'\n--------------------------------------------------------",
"### Installation\n\n\n* Install 'sentence-transformers':\n\n\n\t+ 'pip install -U sentence-transformers'\n* Install 'pyvi' to word segment:\n\n\n\t+ 'pip install pyvi'",
"### Example usage\n\n\n Using SimeCSE\\_Vietnamese with 'transformers'\n----------------------------------------------",
"### Installation\n\n\n* Install 'transformers':\n\n\n\t+ 'pip install -U transformers'\n* Install 'pyvi' to word segment:\n\n\n\t+ 'pip install pyvi'",
"### Example usage\n\n\nQuick Start\n-----------\n\n\nOpen In Colab\n\n\n@article{gao2021simcse,\ntitle={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},\nauthor={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},\njournal={arXiv preprint arXiv:2104.08821},\nyear={2021}\n}\n\n\n\n```\n@inproceedings{phobert,\ntitle = {{PhoBERT: Pre-trained language models for Vietnamese}},\nauthor = {Dat Quoc Nguyen and Anh Tuan Nguyen},\nbooktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},\nyear = {2020},\npages = {1037--1042}\n}\n\n```"
] |
text-generation
|
transformers
|
# Cortana DialoGPT Model
|
{"tags": ["conversational"]}
|
VulcanBin/DialoGPT-small-cortana
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Cortana DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null |
transformers
|
# Deberta-Chinese
This project pre-trains Microsoft's open-source DeBERTa model on Chinese-domain data. We release the model to give others more choices of pre-trained language models.
The model was pre-trained on the WuDaoCorpora corpus. WuDaoCorpora is a large-scale, high-quality dataset built by the Beijing Academy of Artificial Intelligence (BAAI) to support research on the "WuDao" large-model project.
Pre-training uses whole-word masking (WWM), n-gram MLM, and related pre-training methods.
| Pretrained model | Learning rate | Batch size | Hardware | Corpus | Time | Optimizer |
| --------------------- | ------ | --------- | ------ | ------ | ---- | ------ |
| Deberta-Chinese-Large | 1e-5 | 512 | 2*3090 | 200G | 14 days | AdamW |
### Loading and usage
Built on huggingface-transformers:
```
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")
```
#### Note: please use BertTokenizer to load the Chinese vocabulary
|
{}
|
WENGSYX/Deberta-Chinese-Large
| null |
[
"transformers",
"pytorch",
"deberta",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #deberta #endpoints_compatible #region-us
|
Deberta-Chinese
===============
本项目,基于微软开源的Deberta模型,在中文领域进行预训练。开源本模型,旨在为其他人提供更多预训练语言模型选择。
本预训练模型,基于WuDaoCorpora语料库预训练而成。WuDaoCorpora是北京智源人工智能研究院(智源研究院)构建的大规模、高质量数据集,用于支撑“悟道”大模型项目研究。
使用WWM与n-gramMLM 等预训练方法进行预训练。
### 加载与使用
依托于huggingface-transformers
#### 注意,请使用BertTokenizer加载中文词表
|
[
"### 加载与使用\n\n\n依托于huggingface-transformers",
"#### 注意,请使用BertTokenizer加载中文词表"
] |
[
"TAGS\n#transformers #pytorch #deberta #endpoints_compatible #region-us \n",
"### 加载与使用\n\n\n依托于huggingface-transformers",
"#### 注意,请使用BertTokenizer加载中文词表"
] |
feature-extraction
|
transformers
|
# Multilingual SimCSE
#### A contrastive learning model using parallel language pair training
##### Parallel sentence pairs in different languages are mapped into the same vector space, with pre-training similar to SimCSE
##### The model loads pre-trained parameters from [mDeBERTa](https://huggingface.co/microsoft/mdeberta-v3-base) and is then further pre-trained on the [CCMatrix](https://github.com/facebookresearch/LASER/tree/main/tasks/CCMatrix) dataset.
##### Training data: 100 million parallel pairs
##### Training equipment: 4 * 3090
## Pipeline Code
```
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('WENGSYX/Multilingual_SimCSE')
tokenizer = AutoTokenizer.from_pretrained('WENGSYX/Multilingual_SimCSE')
word1 = tokenizer('Hello,world.',return_tensors='pt')
word2 = tokenizer('你好,世界',return_tensors='pt')
out1 = model(**word1).last_hidden_state.mean(1)
out2 = model(**word2).last_hidden_state.mean(1)
print(F.cosine_similarity(out1,out2))
----------------------------------------------------
tensor([0.8758], grad_fn=<DivBackward0>)
```
## Train Code
```
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer, AdamW

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = AutoModel.from_pretrained('WENGSYX/Multilingual_SimCSE').to(device)
tokenizer = AutoTokenizer.from_pretrained('WENGSYX/Multilingual_SimCSE')
optimizer = AdamW(model.parameters(), lr=1e-5)
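# compute_loss implements the in-batch SimCSE objective: rows 2i and 2i+1 of y_pred
# hold a parallel sentence pair, so each row's positive target is its neighbouring row
# and every other row in the batch serves as an in-batch negative (softmax temperature t).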
def compute_loss(y_pred, t=0.05, device="cuda"):
idxs = torch.arange(0, y_pred.shape[0], device=device)
y_true = idxs + 1 - idxs % 2 * 2
similarities = F.cosine_similarity(y_pred.unsqueeze(1), y_pred.unsqueeze(0), dim=2)
similarities = similarities - torch.eye(y_pred.shape[0], device=device) * 1e12
similarities = similarities / t
loss = F.cross_entropy(similarities, y_true)
return torch.mean(loss)
wordlist = [['Hello,world','你好,世界'],['Pensa che il bianco rappresenti la purezza.','Он думает, что белые символизируют чистоту.']]
input_ids, attention_mask, token_type_ids = [], [], []
for x in wordlist:
text1 = tokenizer(x[0], padding='max_length', truncation=True, max_length=512)
input_ids.append(text1['input_ids'])
attention_mask.append(text1['attention_mask'])
text2 = tokenizer(x[1], padding='max_length', truncation=True, max_length=512)
input_ids.append(text2['input_ids'])
attention_mask.append(text2['attention_mask'])
input_ids = torch.tensor(input_ids,device=device)
attention_mask = torch.tensor(attention_mask,device=device)
output = model(input_ids=input_ids,attention_mask=attention_mask)
output = output.last_hidden_state.mean(1)
loss = compute_loss(output, device=device)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```
|
{}
|
WENGSYX/Multilingual_SimCSE
| null |
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #deberta-v2 #feature-extraction #endpoints_compatible #region-us
|
# Multilingual SimCSE
#### A contrastive learning model using parallel language pair training
##### By using parallel sentence pairs in different languages, the text is mapped to the same vector space for pre-training similar to Simcse
##### Firstly, the mDeBERTa model is used to load the pre-training parameters, and then the pre-training is carried out based on the CCMatrix data set.
##### Training data: 100 million parallel pairs
##### Taining equipment: 4 * 3090
## Pipline Code
## Train Code
|
[
"# Multilingual SimCSE",
"#### A contrastive learning model using parallel language pair training",
"##### By using parallel sentence pairs in different languages, the text is mapped to the same vector space for pre-training similar to Simcse",
"##### Firstly, the mDeBERTa model is used to load the pre-training parameters, and then the pre-training is carried out based on the CCMatrix data set.",
"##### Training data: 100 million parallel pairs",
"##### Taining equipment: 4 * 3090",
"## Pipline Code",
"## Train Code"
] |
[
"TAGS\n#transformers #pytorch #safetensors #deberta-v2 #feature-extraction #endpoints_compatible #region-us \n",
"# Multilingual SimCSE",
"#### A contrastive learning model using parallel language pair training",
"##### By using parallel sentence pairs in different languages, the text is mapped to the same vector space for pre-training similar to Simcse",
"##### Firstly, the mDeBERTa model is used to load the pre-training parameters, and then the pre-training is carried out based on the CCMatrix data set.",
"##### Training data: 100 million parallel pairs",
"##### Taining equipment: 4 * 3090",
"## Pipline Code",
"## Train Code"
] |
automatic-speech-recognition
|
transformers
|
"Hello"
|
{}
|
WSS/wav2vec2-large-xlsr-53-vietnamese
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
|
"Hello"
|
[] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] |
null |
transformers
|
https://github.com/zejunwang1/bert4vec
|
{}
|
WangZeJun/roformer-sim-base-chinese
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
URL
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
null |
transformers
|
https://github.com/zejunwang1/bert4vec
|
{}
|
WangZeJun/roformer-sim-small-chinese
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
URL
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
null |
transformers
|
https://github.com/zejunwang1/bert4vec
|
{}
|
WangZeJun/simbert-base-chinese
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #has_space #region-us
|
URL
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #has_space #region-us \n"
] |
text-generation
|
transformers
|
# Rick Sanchez DialoGPT Model
|
{"tags": ["conversational"]}
|
WarrenK-Design/DialoGPT-small-Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model
|
[
"# Rick Sanchez DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] |
null | null |
Testing a new model
|
{}
|
WayScriptDerrick/SampleModel
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
Testing a new model
|
[] |
[
"TAGS\n#region-us \n"
] |
text-classification
|
transformers
|
# WellcomeBertMesh
WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([MeSH](https://www.nlm.nih.gov/mesh/meshhome.html)). Although it was developed with research grants in mind, it should be applicable to any biomedical text close to the domain it was trained on, which is abstracts from biomedical publications.
# Model description
The model is inspired by [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/), which is trained on the full text of biomedical publications and uses BioBert as its pretrained model.
WellcomeBertMesh uses the latest state-of-the-art model in the biomedical domain, [PubMedBert](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which essentially allows the model to pay attention to different tokens per label when deciding whether each label applies.
We train the model using data from the [BioASQ](http://bioasq.org) competition, which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing, which gives us ~2.5M publications to train on and 220K to test on, out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs.
The model achieves 63% micro f1 with a 0.5 threshold for all labels.
The code for developing the model is open source and can be found in https://github.com/wellcometrust/grants_tagger
# How to use
⚠️ You need transformers 4.17+ for the example to work due to its recent support for custom models.
You can use the model straight from the hub, but because it contains a custom forward function (due to the multilabel attention head) you have to pass `trust_remote_code=True`. You can get access to the probabilities for all labels by omitting `return_labels=True`.
```
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Wellcome/WellcomeBertMesh"
)
model = AutoModel.from_pretrained(
"Wellcome/WellcomeBertMesh",
trust_remote_code=True
)
text = "This grant is about malaria and not about HIV."
inputs = tokenizer([text], padding="max_length")
labels = model(**inputs, return_labels=True)
print(labels)
```
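A hedged follow-up sketch, reusing `model`, `tokenizer` and `inputs` from above: as noted earlier, omitting `return_labels=True` should return the per-label probabilities instead of label strings.
```
# Per-label probabilities rather than decoded MeSH labels (behaviour described above).
label_probs = model(**inputs)
print(label_probs)
```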
You can inspect the model code if you navigate to the files and see `model.py`.
|
{"license": "apache-2.0", "pipeline_tag": "text-classification"}
|
Wellcome/WellcomeBertMesh
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"text-classification",
"custom_code",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #text-classification #custom_code #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# WellcomeBertMesh
WellcomeBertMesh is build from the data science team at the WellcomeTrust to tag biomedical grants with Medical Subject Headings (Mesh). Even though developed with the intention to be used towards research grants, it should be applicable to any type of biomedical text close to the domain it was trained which is abstracts from biomedical publications.
# Model description
The model is inspired from BertMesh which is trained on the full text of biomedical publications and uses BioBert as its pretrained model.
WellcomeBertMesh is utilising the latest state of the art model in the biomedical domain which is PubMedBert from Microsoft and attach a Multilabel attention head which essentially allows the model to pay attention to different tokens per label to decide whether it applies.
We train the model using data from the BioASQ competition which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing which gives us ~2.5M publications to train and 220K to test. This is out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs.
The model achieves 63% micro f1 with a 0.5 threshold for all labels.
The code for developing the model is open source and can be found in URL
# How to use
️ You need transformers 4.17+ for the example to work due to its recent support for custom models.
You can use the model straight from the hub but because it contains a custom forward function due to the multilabel attention head you have to pass 'trust_remote_code=True'. You can get access to the probabilities for all labels by omitting 'return_labels=True'.
You can inspect the model code if you navigate to the files and see 'URL'.
|
[
"# WellcomeBertMesh\n\nWellcomeBertMesh is build from the data science team at the WellcomeTrust to tag biomedical grants with Medical Subject Headings (Mesh). Even though developed with the intention to be used towards research grants, it should be applicable to any type of biomedical text close to the domain it was trained which is abstracts from biomedical publications.",
"# Model description\n\nThe model is inspired from BertMesh which is trained on the full text of biomedical publications and uses BioBert as its pretrained model.\n\nWellcomeBertMesh is utilising the latest state of the art model in the biomedical domain which is PubMedBert from Microsoft and attach a Multilabel attention head which essentially allows the model to pay attention to different tokens per label to decide whether it applies.\n\nWe train the model using data from the BioASQ competition which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing which gives us ~2.5M publications to train and 220K to test. This is out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs. \n\nThe model achieves 63% micro f1 with a 0.5 threshold for all labels.\n\nThe code for developing the model is open source and can be found in URL",
"# How to use\n\n️ You need transformers 4.17+ for the example to work due to its recent support for custom models.\n\nYou can use the model straight from the hub but because it contains a custom forward function due to the multilabel attention head you have to pass 'trust_remote_code=True'. You can get access to the probabilities for all labels by omitting 'return_labels=True'.\n\n\n\nYou can inspect the model code if you navigate to the files and see 'URL'."
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #text-classification #custom_code #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# WellcomeBertMesh\n\nWellcomeBertMesh is build from the data science team at the WellcomeTrust to tag biomedical grants with Medical Subject Headings (Mesh). Even though developed with the intention to be used towards research grants, it should be applicable to any type of biomedical text close to the domain it was trained which is abstracts from biomedical publications.",
"# Model description\n\nThe model is inspired from BertMesh which is trained on the full text of biomedical publications and uses BioBert as its pretrained model.\n\nWellcomeBertMesh is utilising the latest state of the art model in the biomedical domain which is PubMedBert from Microsoft and attach a Multilabel attention head which essentially allows the model to pay attention to different tokens per label to decide whether it applies.\n\nWe train the model using data from the BioASQ competition which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing which gives us ~2.5M publications to train and 220K to test. This is out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs. \n\nThe model achieves 63% micro f1 with a 0.5 threshold for all labels.\n\nThe code for developing the model is open source and can be found in URL",
"# How to use\n\n️ You need transformers 4.17+ for the example to work due to its recent support for custom models.\n\nYou can use the model straight from the hub but because it contains a custom forward function due to the multilabel attention head you have to pass 'trust_remote_code=True'. You can get access to the probabilities for all labels by omitting 'return_labels=True'.\n\n\n\nYou can inspect the model code if you navigate to the files and see 'URL'."
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0584
- Precision: 0.9286
- Recall: 0.9475
- F1: 0.9379
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2183 | 1.0 | 878 | 0.0753 | 0.9087 | 0.9291 | 0.9188 | 0.9800 |
| 0.0462 | 2.0 | 1756 | 0.0614 | 0.9329 | 0.9470 | 0.9399 | 0.9858 |
| 0.0244 | 3.0 | 2634 | 0.0584 | 0.9286 | 0.9475 | 0.9379 | 0.9859 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.2+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner1", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9285832096321953, "name": "Precision"}, {"type": "recall", "value": 0.9474924267923258, "name": "Recall"}, {"type": "f1", "value": 0.9379425239483548, "name": "F1"}, {"type": "accuracy", "value": 0.9859009831047272, "name": "Accuracy"}]}]}]}
|
Wende/bert-finetuned-ner1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-finetuned-ner1
===================
This model is a fine-tuned version of bert-base-cased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0584
* Precision: 0.9286
* Recall: 0.9475
* F1: 0.9379
* Accuracy: 0.9859
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.8.2+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.8.2+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.8.2+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
Wessel/DiabloGPT-medium-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Harry Potter DaibloGPT Model
|
[
"# Harry Potter DaibloGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Harry Potter DaibloGPT Model"
] |
text-generation
|
transformers
|
# White's Bot
|
{"tags": ["conversational"]}
|
White/white-bot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# White's Bot
|
[
"# White's Bot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# White's Bot"
] |
text-generation
|
transformers
|
# Twety DialoGPT Model
|
{"tags": ["conversational"]}
|
Whitez/DialoGPT-small-twety
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Twety DialoGPT Model
|
[
"# Twety DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Twety DialoGPT Model"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-arabic-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
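For illustration only, a hedged sketch of how the hyperparameters above might be expressed as `transformers.TrainingArguments`; the output directory and the rest of the training script are assumptions, not part of this card:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-arabic-demo-colab",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```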
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-arabic-demo-colab", "results": []}]}
|
Wiam/wav2vec2-large-xlsr-arabic-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xlsr-arabic-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-large-xlsr-arabic-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu102\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xlsr-arabic-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu102\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
feature-extraction
|
transformers
|
# IndoConvBERT Base Model
IndoConvBERT is a ConvBERT model pretrained on Indo4B.
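A minimal feature-extraction sketch (the Indonesian example sentence is hypothetical; this assumes the checkpoint loads through the standard `transformers` Auto classes):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Wikidepia/IndoConvBERT-base")
model = AutoModel.from_pretrained("Wikidepia/IndoConvBERT-base")

inputs = tokenizer("Saya sedang belajar pemrosesan bahasa alami.", return_tensors="pt")
with torch.no_grad():
    token_features = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
```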
## Pretraining details
We follow a different training procedure: instead of using a two-phase approach that pre-trains the model for 90% of steps with sequence length 128 and 10% with sequence length 512, we pre-train the model with sequence length 512 for 1M steps on a v3-8 TPU.
The current version of the model is trained on Indo4B and small Twitter dump.
## Acknowledgement
Big thanks to TFRC (TensorFlow Research Cloud) for providing free TPU.
|
{"language": "id", "inference": false}
|
Wikidepia/IndoConvBERT-base
| null |
[
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"id",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #convbert #feature-extraction #id #region-us
|
# IndoConvBERT Base Model
IndoConvBERT is a ConvBERT model pretrained on Indo4B.
## Pretraining details
We follow a different training procedure: instead of using a two-phase approach, that pre-trains the model for 90% with 128 sequence length and 10% with 512 sequence length, we pre-train the model with 512 sequence length for 1M steps on a v3-8 TPU.
The current version of the model is trained on Indo4B and small Twitter dump.
## Acknowledgement
Big thanks to TFRC (TensorFlow Research Cloud) for providing free TPU.
|
[
"# IndoConvBERT Base Model\n\nIndoConvBERT is a ConvBERT model pretrained on Indo4B.",
"## Pretraining details\n\nWe follow a different training procedure: instead of using a two-phase approach, that pre-trains the model for 90% with 128 sequence length and 10% with 512 sequence length, we pre-train the model with 512 sequence length for 1M steps on a v3-8 TPU.\n\nThe current version of the model is trained on Indo4B and small Twitter dump.",
"## Acknowledgement\n\nBig thanks to TFRC (TensorFlow Research Cloud) for providing free TPU."
] |
[
"TAGS\n#transformers #pytorch #tf #convbert #feature-extraction #id #region-us \n",
"# IndoConvBERT Base Model\n\nIndoConvBERT is a ConvBERT model pretrained on Indo4B.",
"## Pretraining details\n\nWe follow a different training procedure: instead of using a two-phase approach, that pre-trains the model for 90% with 128 sequence length and 10% with 512 sequence length, we pre-train the model with 512 sequence length for 1M steps on a v3-8 TPU.\n\nThe current version of the model is trained on Indo4B and small Twitter dump.",
"## Acknowledgement\n\nBig thanks to TFRC (TensorFlow Research Cloud) for providing free TPU."
] |
text2text-generation
|
transformers
|
# Paraphrase Generation with IndoT5 Base
IndoT5-base trained on translated PAWS.
## Model in action
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Wikidepia/IndoT5-base-paraphrase")
model = AutoModelForSeq2SeqLM.from_pretrained("Wikidepia/IndoT5-base-paraphrase")
sentence = "Anak anak melakukan piket kelas agar kebersihan kelas terjaga"
text = "paraphrase: " + sentence + " </s>"
encoding = tokenizer(text, padding='longest', return_tensors="pt")
outputs = model.generate(
input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"],
max_length=512,
do_sample=True,
top_k=200,
top_p=0.95,
early_stopping=True,
num_return_sequences=5
)
```
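A hedged follow-up (not in the original snippet): decode the sampled sequences back into text with the tokenizer's standard `batch_decode`:
```python
paraphrases = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for p in paraphrases:
    print(p)
```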
## Limitations
Sometimes the paraphrase contains a date that does not exist in the original text :/
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
{"language": ["id"]}
|
Wikidepia/IndoT5-base-paraphrase
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #tensorboard #t5 #text2text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Paraphrase Generation with IndoT5 Base
IndoT5-base trained on translated PAWS.
## Model in action
## Limitations
Sometimes paraphrase contain date which doesnt exists in the original text :/
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
[
"# Paraphrase Generation with IndoT5 Base\n\nIndoT5-base trained on translated PAWS.",
"## Model in action",
"## Limitations\n\nSometimes paraphrase contain date which doesnt exists in the original text :/",
"## Acknowledgement\n\nThanks to Tensorflow Research Cloud for providing TPU v3-8s."
] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #t5 #text2text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Paraphrase Generation with IndoT5 Base\n\nIndoT5-base trained on translated PAWS.",
"## Model in action",
"## Limitations\n\nSometimes paraphrase contain date which doesnt exists in the original text :/",
"## Acknowledgement\n\nThanks to Tensorflow Research Cloud for providing TPU v3-8s."
] |
text2text-generation
|
transformers
|
# Indonesian T5 Base
T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.
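A minimal loading sketch (an assumption about typical usage, not from the original card); the checkpoint is loaded only as a starting point for task-specific fine-tuning:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Wikidepia/IndoT5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Wikidepia/IndoT5-base")
# Fine-tune `model` on a downstream task before using it for inference.
```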
## Pretraining Details
Trained for 1M steps following [`google/t5-v1_1-base`](https://huggingface.co/google/t5-v1_1-base).
## Model Performance
TBD
## Limitations and bias
Like other language models trained on a large-scale corpus, this model may produce biased (unethical, harmful) output because of biases in the content of the training data. Please keep this risk in mind and use the model only for applications that do not cause harm.
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
{"language": ["id"], "datasets": ["allenai/c4"]}
|
Wikidepia/IndoT5-base
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"id",
"dataset:allenai/c4",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #id #dataset-allenai/c4 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Indonesian T5 Base
T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.
## Pretraining Details
Trained for 1M steps following 'google/t5-v1_1-base'.
## Model Performance
TBD
## Limitations and bias
This model also has the problem of biased (unethical, harmful, biased) output results due to the bias of the content of the training data, which is associated with the language model using a large-scale corpus. There is potential. Assuming that this problem may occur, please be careful to use it only for applications that do not cause damage.
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
[
"# Indonesian T5 Base\n\n\nT5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.",
"## Pretraining Details\n\nTrained for 1M steps following 'google/t5-v1_1-base'.",
"## Model Performance\n\nTBD",
"## Limitations and bias\n\nThis model also has the problem of biased (unethical, harmful, biased) output results due to the bias of the content of the training data, which is associated with the language model using a large-scale corpus. There is potential. Assuming that this problem may occur, please be careful to use it only for applications that do not cause damage.",
"## Acknowledgement\n\nThanks to Tensorflow Research Cloud for providing TPU v3-8s."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #id #dataset-allenai/c4 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Indonesian T5 Base\n\n\nT5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.",
"## Pretraining Details\n\nTrained for 1M steps following 'google/t5-v1_1-base'.",
"## Model Performance\n\nTBD",
"## Limitations and bias\n\nThis model also has the problem of biased (unethical, harmful, biased) output results due to the bias of the content of the training data, which is associated with the language model using a large-scale corpus. There is potential. Assuming that this problem may occur, please be careful to use it only for applications that do not cause damage.",
"## Acknowledgement\n\nThanks to Tensorflow Research Cloud for providing TPU v3-8s."
] |
text2text-generation
|
transformers
|
**NOTE**: This model might be broken :/
# Indonesian T5 Large
T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.
## Pretraining Details
Trained for 500K steps following [`google/t5-v1_1-large`](https://huggingface.co/google/t5-v1_1-large).
## Model Performance
TBD
## Limitations and bias
Like other language models trained on a large-scale corpus, this model may produce biased (unethical, harmful) output because of biases in the content of the training data. Please keep this risk in mind and use the model only for applications that do not cause harm.
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
{"language": ["id"], "datasets": ["allenai/c4"]}
|
Wikidepia/IndoT5-large
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"id",
"dataset:allenai/c4",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #id #dataset-allenai/c4 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
NOTE : This model might be broken :/
# Indonesian T5 Large
T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.
## Pretraining Details
Trained for 500K steps following 'google/t5-v1_1-large'.
## Model Performance
TBD
## Limitations and bias
This model also has the problem of biased (unethical, harmful, biased) output results due to the bias of the content of the training data, which is associated with the language model using a large-scale corpus. There is potential. Assuming that this problem may occur, please be careful to use it only for applications that do not cause damage.
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
[
"# Indonesian T5 Large\n\nT5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.",
"## Pretraining Details\n\nTrained for 500K steps following 'google/t5-v1_1-large'.",
"## Model Performance\n\nTBD",
"## Limitations and bias\n\nThis model also has the problem of biased (unethical, harmful, biased) output results due to the bias of the content of the training data, which is associated with the language model using a large-scale corpus. There is potential. Assuming that this problem may occur, please be careful to use it only for applications that do not cause damage.",
"## Acknowledgement\n\nThanks to Tensorflow Research Cloud for providing TPU v3-8s."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #id #dataset-allenai/c4 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Indonesian T5 Large\n\nT5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.",
"## Pretraining Details\n\nTrained for 500K steps following 'google/t5-v1_1-large'.",
"## Model Performance\n\nTBD",
"## Limitations and bias\n\nThis model also has the problem of biased (unethical, harmful, biased) output results due to the bias of the content of the training data, which is associated with the language model using a large-scale corpus. There is potential. Assuming that this problem may occur, please be careful to use it only for applications that do not cause damage.",
"## Acknowledgement\n\nThanks to Tensorflow Research Cloud for providing TPU v3-8s."
] |
text2text-generation
|
transformers
|
# Indonesian T5 Small
T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.
## Pretraining Details
Trained for 1M steps following [`google/t5-v1_1-small`](https://huggingface.co/google/t5-v1_1-small).
## Model Performance
TBD
## Limitations and bias
Like other language models trained on a large-scale corpus, this model may produce biased (unethical, harmful) output because of biases in the content of the training data. Please keep this risk in mind and use the model only for applications that do not cause harm.
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
{"language": ["id"], "datasets": ["allenai/c4"]}
|
Wikidepia/IndoT5-small
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"id",
"dataset:allenai/c4",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #id #dataset-allenai/c4 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Indonesian T5 Small
T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.
## Pretraining Details
Trained for 1M steps following 'google/t5-v1_1-small'.
## Model Performance
TBD
## Limitations and bias
Like other language models pretrained on a large-scale corpus, this model can produce biased (unethical, harmful, or prejudiced) output that reflects biases in its training data. Please keep this risk in mind and restrict its use to applications where such output cannot cause harm.
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
[
"# Indonesian T5 Small\n\n\nT5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.",
"## Pretraining Details\n\nTrained for 1M steps following 'google/t5-v1_1-small'.",
"## Model Performance\n\nTBD",
"## Limitations and bias\n\nThis model also has the problem of biased (unethical, harmful, biased) output results due to the bias of the content of the training data, which is associated with the language model using a large-scale corpus. There is potential. Assuming that this problem may occur, please be careful to use it only for applications that do not cause damage.",
"## Acknowledgement\n\nThanks to Tensorflow Research Cloud for providing TPU v3-8s."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #id #dataset-allenai/c4 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Indonesian T5 Small\n\n\nT5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with extra filtering. This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.",
"## Pretraining Details\n\nTrained for 1M steps following 'google/t5-v1_1-small'.",
"## Model Performance\n\nTBD",
"## Limitations and bias\n\nThis model also has the problem of biased (unethical, harmful, biased) output results due to the bias of the content of the training data, which is associated with the language model using a large-scale corpus. There is potential. Assuming that this problem may occur, please be careful to use it only for applications that do not cause damage.",
"## Acknowledgement\n\nThanks to Tensorflow Research Cloud for providing TPU v3-8s."
] |
token-classification
|
flair
|
# SponsorBlock Auto Segment
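A minimal usage sketch, assuming the standard flair `SequenceTagger` loading path works for this repository (the example transcript line is purely illustrative):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# flair can resolve Hugging Face Hub ids directly.
tagger = SequenceTagger.load("Wikidepia/SB-AutoSegment")

sentence = Sentence("don't forget to check out today's sponsor in the description below")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```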
|
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"]}
|
Wikidepia/SB-AutoSegment
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #en #region-us
|
# SponsorBlock Auto Segment
|
[
"# SponsorBlock Auto Segment"
] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #en #region-us \n",
"# SponsorBlock Auto Segment"
] |
question-answering
|
transformers
|
# SQuAD IndoBERT-Lite Base Model
Fine-tuned IndoBERT-Lite from IndoBenchmark using Translated SQuAD datasets.
## How to use
### Using pipeline
```python
from transformers import BertTokenizerFast, pipeline
tokenizer = BertTokenizerFast.from_pretrained(
'Wikidepia/albert-bahasa-uncased-squad'
)
nlp = pipeline('question-answering', model="Wikidepia/albert-bahasa-uncased-squad", tokenizer=tokenizer)
QA_input = {
'question': 'Kapan orang Normandia berada di Normandia?',
'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) adalah orang-orang yang pada abad ke-10 dan ke-11 memberikan nama mereka ke Normandia, sebuah wilayah di Prancis. Mereka adalah keturunan dari Norse (\ "Norman \" berasal dari \ "Norseman \") perampok dan perompak dari Denmark, Islandia dan Norwegia yang, di bawah pemimpin mereka Rollo, setuju untuk bersumpah setia kepada Raja Charles III dari Francia Barat. Melalui generasi asimilasi dan pencampuran dengan penduduk asli Franka dan Romawi-Gaul, keturunan mereka secara bertahap akan bergabung dengan budaya Francia Barat yang berbasis di Karoling. Identitas budaya dan etnis orang Normandia yang berbeda awalnya muncul pada paruh pertama abad ke-10, dan terus berkembang selama abad-abad berikutnya.'
}
res = nlp(QA_input)
print(res)
```
|
{"language": "id", "inference": false}
|
Wikidepia/albert-bahasa-uncased-squad
| null |
[
"transformers",
"pytorch",
"albert",
"question-answering",
"id",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #albert #question-answering #id #region-us
|
# SQuAD IndoBERT-Lite Base Model
Fine-tuned IndoBERT-Lite from IndoBenchmark using Translated SQuAD datasets.
## How to use
### Using pipeline
|
[
"# SQuAD IndoBERT-Lite Base Model\n\nFine-tuned IndoBERT-Lite from IndoBenchmark using Translated SQuAD datasets.",
"## How to use",
"### Using pipeline"
] |
[
"TAGS\n#transformers #pytorch #albert #question-answering #id #region-us \n",
"# SQuAD IndoBERT-Lite Base Model\n\nFine-tuned IndoBERT-Lite from IndoBenchmark using Translated SQuAD datasets.",
"## How to use",
"### Using pipeline"
] |
question-answering
|
transformers
|
# IndoBERT-Lite base fine-tuned on Translated SQuAD v2
[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/SQuAD) for **Q&A** downstream task.
## Model in action
Fast usage with **pipelines**:
```python
from transformers import BertTokenizerFast, pipeline
tokenizer = BertTokenizerFast.from_pretrained(
'Wikidepia/indobert-lite-squad'
)
qa_pipeline = pipeline(
"question-answering",
model="Wikidepia/indobert-lite-squad",
tokenizer=tokenizer
)
qa_pipeline({
'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```
# Output:
```json
{
"score":0.9799205660820007,
"start":147,
"end":151,
"answer":"1896"
}
```
README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
|
{"language": "id", "widget": [{"text": "Kapan Einstein melepas kewarganegaraan Jerman?", "context": "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgen\u00f6ssische Technische Hochschule, ETH) di Z\u00fcrich pada tahun 1900."}]}
|
Wikidepia/indobert-lite-squad
| null |
[
"transformers",
"pytorch",
"albert",
"question-answering",
"id",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #albert #question-answering #id #endpoints_compatible #region-us
|
# IndoBERT-Lite base fine-tuned on Translated SQuAD v2
IndoBERT-Lite trained by Indo Benchmark and fine-tuned on Translated SQuAD 2.0 for Q&A downstream task.
## Model in action
Fast usage with pipelines:
# Output:
README copied from mrm8488's repository
|
[
"# IndoBERT-Lite base fine-tuned on Translated SQuAD v2\n\nIndoBERT-Lite trained by Indo Benchmark and fine-tuned on Translated SQuAD 2.0 for Q&A downstream task.",
"## Model in action\n\nFast usage with pipelines:",
"# Output:\n\n\n\nREADME copied from mrm8488's repository"
] |
[
"TAGS\n#transformers #pytorch #albert #question-answering #id #endpoints_compatible #region-us \n",
"# IndoBERT-Lite base fine-tuned on Translated SQuAD v2\n\nIndoBERT-Lite trained by Indo Benchmark and fine-tuned on Translated SQuAD 2.0 for Q&A downstream task.",
"## Model in action\n\nFast usage with pipelines:",
"# Output:\n\n\n\nREADME copied from mrm8488's repository"
] |
question-answering
|
transformers
|
# IndoBERT-Lite-SQuAD base fine-tuned on Full Translated SQuAD v2
[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/squad) for **Q&A** downstream task.
## Model in action
Fast usage with **pipelines**:
```python
from transformers import BertTokenizerFast, pipeline
tokenizer = BertTokenizerFast.from_pretrained(
'Wikidepia/indobert-lite-squad'
)
qa_pipeline = pipeline(
"question-answering",
model="Wikidepia/indobert-lite-squad",
tokenizer=tokenizer
)
qa_pipeline({
'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```
# Output:
```json
{
"score": 0.9169162511825562,
"start": 147,
"end": 151,
"answer": "1896"
}
```
README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
|
{"language": "id", "widget": [{"text": "Kapan Einstein melepas kewarganegaraan Jerman?", "context": "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgen\u00f6ssische Technische Hochschule, ETH) di Z\u00fcrich pada tahun 1900."}]}
|
Wikidepia/indobert-lite-squadx
| null |
[
"transformers",
"pytorch",
"albert",
"question-answering",
"id",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #albert #question-answering #id #endpoints_compatible #region-us
|
# IndoBERT-Lite-SQuAD base fine-tuned on Full Translated SQuAD v2
IndoBERT-Lite trained by Indo Benchmark and fine-tuned on Translated SQuAD 2.0 for Q&A downstream task.
## Model in action
Fast usage with pipelines:
# Output:
README copied from mrm8488's repository
|
[
"# IndoBERT-Lite-SQuAD base fine-tuned on Full Translated SQuAD v2\n\nIndoBERT-Lite trained by Indo Benchmark and fine-tuned on Translated SQuAD 2.0 for Q&A downstream task.",
"## Model in action\n\nFast usage with pipelines:",
"# Output:\n\n\n\nREADME copied from mrm8488's repository"
] |
[
"TAGS\n#transformers #pytorch #albert #question-answering #id #endpoints_compatible #region-us \n",
"# IndoBERT-Lite-SQuAD base fine-tuned on Full Translated SQuAD v2\n\nIndoBERT-Lite trained by Indo Benchmark and fine-tuned on Translated SQuAD 2.0 for Q&A downstream task.",
"## Model in action\n\nFast usage with pipelines:",
"# Output:\n\n\n\nREADME copied from mrm8488's repository"
] |
text2text-generation
|
transformers
|
# NMT Model for English-Indonesian
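A minimal usage sketch, assuming the checkpoint follows the standard Marian seq2seq layout (the example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Wikidepia/marian-nmt-enid"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["I am learning Indonesian."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```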
|
{}
|
Wikidepia/marian-nmt-enid
| null |
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
|
# NMT Model for English-Indonesian
|
[
"# NMT Model for English-Indonesian"
] |
[
"TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"# NMT Model for English-Indonesian"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2 XLS-R-300M - Indonesian
This model is a fine-tuned version of `facebook/wav2vec2-xls-r-300m` on the `mozilla-foundation/common_voice_8_0` and [MagicHub Indonesian Conversational Speech Corpus](https://magichub.com/datasets/indonesian-conversational-speech-corpus/).
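A minimal transcription sketch (the audio path is a placeholder; 16 kHz mono input is assumed):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Wikidepia/wav2vec2-xls-r-300m-indonesian",
)
# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```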
|
{"language": ["id"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "id", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLS-R-300M - Indonesian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "id"}, "metrics": [{"type": "wer", "value": 5.046, "name": "Test WER"}, {"type": "cer", "value": 1.699, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "id"}, "metrics": [{"type": "wer", "value": 41.31, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "id"}, "metrics": [{"type": "wer", "value": 52.23, "name": "Test WER"}]}]}]}
|
Wikidepia/wav2vec2-xls-r-300m-indonesian
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"id",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #id #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2 XLS-R-300M - Indonesian
This model is a fine-tuned version of 'facebook/wav2vec2-xls-r-300m' on the 'mozilla-foundation/common_voice_8_0' and MagicHub Indonesian Conversational Speech Corpus.
|
[
"# Wav2Vec2 XLS-R-300M - Indonesian\n\nThis model is a fine-tuned version of 'facebook/wav2vec2-xls-r-300m' on the 'mozilla-foundation/common_voice_8_0' and MagicHub Indonesian Conversational Speech Corpus."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #id #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2 XLS-R-300M - Indonesian\n\nThis model is a fine-tuned version of 'facebook/wav2vec2-xls-r-300m' on the 'mozilla-foundation/common_voice_8_0' and MagicHub Indonesian Conversational Speech Corpus."
] |
image-classification
|
transformers
|
Google didn't publish vit-tiny and vit-small model checkpoints in Hugging Face. I converted the weights from the [timm repository](https://github.com/rwightman/pytorch-image-models). This model is used in the same way as [ViT-base](https://huggingface.co/google/vit-base-patch16-224).
Note that the safetensors weights require a torch 2.0 environment.
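A minimal classification sketch mirroring the ViT-base usage (the sample image URL is the standard COCO example from the transformers docs; `AutoImageProcessor` assumes a reasonably recent transformers release):

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ViTForImageClassification

model_id = "WinKawaks/vit-small-patch16-224"
processor = AutoImageProcessor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```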
|
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
|
WinKawaks/vit-small-patch16-224
| null |
[
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"vision",
"dataset:imagenet",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #vit #image-classification #vision #dataset-imagenet #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Google didn't publish vit-tiny and vit-small model checkpoints in Hugging Face. I converted the weights from the timm repository. This model is used in the same way as ViT-base.
Note that the safetensors weights require a torch 2.0 environment.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #vit #image-classification #vision #dataset-imagenet #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
image-classification
|
transformers
|
Google didn't publish vit-tiny and vit-small model checkpoints in Hugging Face. I converted the weights from the [timm repository](https://github.com/rwightman/pytorch-image-models). This model is used in the same way as [ViT-base](https://huggingface.co/google/vit-base-patch16-224).
Note that the safetensors weights require a torch 2.0 environment.
|
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
|
WinKawaks/vit-tiny-patch16-224
| null |
[
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"vision",
"dataset:imagenet",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #vit #image-classification #vision #dataset-imagenet #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Google didn't publish vit-tiny and vit-small model checkpoints in Hugging Face. I converted the weights from the timm repository. This model is used in the same way as ViT-base.
Note that the safetensors weights require a torch 2.0 environment.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #vit #image-classification #vision #dataset-imagenet #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# JC DialogGPT Model
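A minimal chat sketch, assuming the checkpoint follows the usual DialoGPT conventions (single-turn exchange; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Wise/DialogGPT-small-JC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode one user turn, ending with the EOS token as DialoGPT expects.
user_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(user_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, user_ids.shape[-1]:][0], skip_special_tokens=True))
```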
|
{"tags": ["conversational"]}
|
Wise/DialogGPT-small-JC
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# JC DialogGPT Model
|
[
"# JC DialogGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# JC DialogGPT Model"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Accuracy: 0.9225
- F1: 0.9227
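A minimal inference sketch for this checkpoint (whether the output shows emotion names or generic `LABEL_*` ids depends on the saved config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Worldman/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
# Depending on the saved config, labels may appear as LABEL_0..LABEL_5
# rather than the emotion dataset's class names
# (sadness, joy, love, anger, fear, surprise).
```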
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
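The same settings expressed as a `TrainingArguments` sketch (optimizer betas and epsilon are the Adam defaults; `output_dir` is a placeholder, not taken from the original run):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```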
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8437 | 1.0 | 250 | 0.3153 | 0.903 | 0.9005 |
| 0.2467 | 2.0 | 500 | 0.2162 | 0.9225 | 0.9227 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cpu
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9225, "name": "Accuracy"}, {"type": "f1", "value": 0.9227046184638882, "name": "F1"}]}]}]}
|
Worldman/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2162
* Accuracy: 0.9225
* F1: 0.9227
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2+cpu
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cpu\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cpu\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-generation
|
transformers
|
# waaaa
|
{"tags": ["conversational"]}
|
WoutN2001/james3
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# waaaa
|
[
"# waaaa"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# waaaa"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-fakenews-discriminator
The dataset: Fake and real news dataset https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset
I used the article titles and labels to train the classifier.
label_0: Fake news
label_1: Real news
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0910
- Accuracy: 0.9758
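A minimal inference sketch that maps the default label ids to the meanings given above (assuming the checkpoint exposes generic `LABEL_0`/`LABEL_1` names; the headline is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="XSY/albert-base-v2-fakenews-discriminator",
)
# Map the default label ids to the meanings documented above.
label_names = {"LABEL_0": "fake news", "LABEL_1": "real news"}
pred = classifier("Government announces new infrastructure spending plan")[0]
print(label_names.get(pred["label"], pred["label"]), round(pred["score"], 3))
```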
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0452 | 1.0 | 1768 | 0.0910 | 0.9758 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-fakenews-discriminator", "results": []}]}
|
XSY/albert-base-v2-fakenews-discriminator
| null |
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #albert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
albert-base-v2-fakenews-discriminator
=====================================
The dataset: Fake and real news dataset URL
I used the article titles and labels to train the classifier.
label\_0: Fake news
label\_1: Real news
This model is a fine-tuned version of albert-base-v2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0910
* Accuracy: 0.9758
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-imdb-calssification
label_0: negative
label_1: positive
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1983
- Accuracy: 0.9361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.26 | 1.0 | 1563 | 0.1983 | 0.9361 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-imdb-calssification", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93612, "name": "Accuracy"}]}]}]}
|
XSY/albert-base-v2-imdb-calssification
| null |
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #albert #text-classification #generated_from_trainer #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
albert-base-v2-imdb-calssification
==================================
label\_0: negative
label\_1: positive
This model is a fine-tuned version of albert-base-v2 on the imdb dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1983
* Accuracy: 0.9361
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-scarcasm-discriminator
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2379
- Accuracy: 0.8996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2111 | 1.0 | 2179 | 0.2379 | 0.8996 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-scarcasm-discriminator", "results": []}]}
|
XSY/albert-base-v2-scarcasm-discriminator
| null |
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #albert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
albert-base-v2-scarcasm-discriminator
=====================================
This model is a fine-tuned version of albert-base-v2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2379
* Accuracy: 0.8996
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.0+cu111
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-scarcasm-discriminator
roberta-base
label0: unsarcastic
label1: sarcastic
The fine-tuning method is in my GitHub repository: https://github.com/yangyangxusheng/Fine-tune-use-transformers
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1844
- Accuracy: 0.9698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.144 | 1.0 | 2179 | 0.2522 | 0.9215 |
| 0.116 | 2.0 | 4358 | 0.2105 | 0.9530 |
| 0.0689 | 3.0 | 6537 | 0.2015 | 0.9610 |
| 0.028 | 4.0 | 8716 | 0.1844 | 0.9698 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-scarcasm-discriminator", "results": []}]}
|
XSY/roberta-scarcasm-discriminator
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #text-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
roberta-scarcasm-discriminator
==============================
roberta-base
label0: unsarcastic
label1: sarcastic
The fine-tuning method is in my GitHub repository: URL
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1844
* Accuracy: 0.9698
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
This model was built by following the Hugging Face summarization notebook step by step; if you want to fine-tune your own model, please refer to https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.6901
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4500
- Rouge1: 28.6901
- Rouge2: 8.0102
- Rougel: 22.6087
- Rougelsum: 22.6105
- Gen Len: 18.824
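A minimal summarization sketch (the article snippet is a standard XSum-style example, not taken from this card):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="XSY/t5-small-finetuned-xsum")
article = (
    "The full cost of damage in Newton Stewart, one of the areas worst affected, "
    "is still being assessed. Repair work is ongoing in Hawick and many roads in "
    "Peeblesshire remain badly affected by standing water."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```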
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6799 | 1.0 | 25506 | 2.4500 | 28.6901 | 8.0102 | 22.6087 | 22.6105 | 18.824 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{}
|
XSY/t5-small-finetuned-xsum
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This model was built by following the Hugging Face summarization notebook step by step; if you want to fine-tune your own model, please refer to URL
---
license: apache-2.0
tags:
* generated\_from\_trainer
datasets:
* xsum
metrics:
* rouge
model-index:
* name: t5-small-finetuned-xsum
results:
+ task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.6901
---
t5-small-finetuned-xsum
=======================
This model is a fine-tuned version of t5-small on the xsum dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4500
* Rouge1: 28.6901
* Rouge2: 8.0102
* Rougel: 22.6087
* Rougelsum: 22.6105
* Gen Len: 18.824
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 478412765
- CO2 Emissions (in grams): 69.86520391863117
## Validation Metrics
- Loss: 0.186362624168396
- Accuracy: 0.9539955699437723
- Precision: 0.9527454242928453
- Recall: 0.9572049481778669
- AUC: 0.9903929997079495
- F1: 0.9549699799866577
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/XYHY/autonlp-123-478412765
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("XYHY/autonlp-123-478412765", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("XYHY/autonlp-123-478412765", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["XYHY/autonlp-data-123"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 69.86520391863117}
|
XYHY/autonlp-123-478412765
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:XYHY/autonlp-data-123",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #roberta #text-classification #autonlp #unk #dataset-XYHY/autonlp-data-123 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 478412765
- CO2 Emissions (in grams): 69.86520391863117
## Validation Metrics
- Loss: 0.186362624168396
- Accuracy: 0.9539955699437723
- Precision: 0.9527454242928453
- Recall: 0.9572049481778669
- AUC: 0.9903929997079495
- F1: 0.9549699799866577
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 478412765\n- CO2 Emissions (in grams): 69.86520391863117",
"## Validation Metrics\n\n- Loss: 0.186362624168396\n- Accuracy: 0.9539955699437723\n- Precision: 0.9527454242928453\n- Recall: 0.9572049481778669\n- AUC: 0.9903929997079495\n- F1: 0.9549699799866577",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #unk #dataset-XYHY/autonlp-data-123 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 478412765\n- CO2 Emissions (in grams): 69.86520391863117",
"## Validation Metrics\n\n- Loss: 0.186362624168396\n- Accuracy: 0.9539955699437723\n- Precision: 0.9527454242928453\n- Recall: 0.9572049481778669\n- AUC: 0.9903929997079495\n- F1: 0.9549699799866577",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-generation
|
transformers
|
# Ultron Small
|
{"tags": ["conversational"]}
|
Xeouz/Ultron-Small
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Ultron Small
|
[
"# Ultron Small"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Ultron Small"
] |
null | null |
A VQGAN-compatible model trained on screenshots of cityscapes from 90s anime. To use, direct vqgan to the model as you would vqgan_imagenet_f16_1024, faceshq, etc.
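A minimal loading sketch using the taming-transformers package (the filenames are placeholders for the config and checkpoint shipped with this model; the exact loading code depends on the VQGAN notebook you use):

```python
import torch
from omegaconf import OmegaConf
from taming.models.vqgan import VQModel

# Filenames below are placeholders for this repository's config/checkpoint.
config = OmegaConf.load("aesthetic_cities.yaml")
model = VQModel(**config.model.params)
state_dict = torch.load("aesthetic_cities.ckpt", map_location="cpu")["state_dict"]
model.load_state_dict(state_dict, strict=False)
model.eval()
```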
|
{}
|
Xibanya/AestheticCities
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
A VQGAN-compatible model trained on screenshots of cityscapes from 90s anime. To use, direct vqgan to the model as you would vqgan_imagenet_f16_1024, faceshq, etc.
|
[] |
[
"TAGS\n#region-us \n"
] |
text-to-image
| null |
# Sunset Cities
This is the [Malevich](https://huggingface.co/sberbank-ai/rudalle-Malevich) ruDALL-E model finetuned on anime screenshots of big cities at sunset.
<img style="text-align:center; display:block;" src="https://huggingface.co/Xibanya/sunset_city/resolve/main/citysunset.png" width="256">
### installation
```
pip install rudalle
```
### How to use
Basic implementation to get a list of image data objects.
```python
from translate import Translator
from rudalle import get_rudalle_model, get_tokenizer, get_vae
from rudalle.pipelines import generate_images
model = get_rudalle_model('Malevich', pretrained=True, fp16=True, device='cuda')
model.load_state_dict(torch.load(CHECKPOINT_PATH))
vae = get_vae().to('cuda')
tokenizer = get_tokenizer()
input_text = Translator(to_lang='ru').translate('city at sunset')
images, _ = generate_images(
text=input_text,
tokenizer=tokenizer, dalle=model, vae=vae,
images_num=1,
top_k=2048,
top_p=0.95,
temperature=1.0
)
```
The Malevich model only recognizes input in Russian. If you're going to paste Cyrillic directly into the code rather than filter an English prompt through the translate API, you will need to put this at the top of the file:
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
```
|
{"language": ["ru", "en"], "license": "cc-by-sa-4.0", "tags": ["PyTorch", "Transformers"], "pipeline_tag": "text-to-image"}
|
Xibanya/sunset_city
| null |
[
"PyTorch",
"Transformers",
"text-to-image",
"ru",
"en",
"license:cc-by-sa-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru",
"en"
] |
TAGS
#PyTorch #Transformers #text-to-image #ru #en #license-cc-by-sa-4.0 #region-us
|
# Sunset Cities
This is the Malevich ruDALL-E model finetuned on anime screenshots of big cities at sunset.
<img style="text-align:center; display:block;" src="URL width="256">
### installation
### How to use
Basic implementation to get a list of image data objects.
The Malevich model only recognizes input in Russian. If you're going to paste Cyrillic directly into the code rather than filter an English prompt through the translate API, you will need to put this at the top of the file:
|
[
"# Sunset Cities\r\nThis is the Malevich ruDALL-E model finetuned on anime screenshots of big cities at sunset.\r\n<img style=\"text-align:center; display:block;\" src=\"URL width=\"256\">",
"### installation",
"### How to use\r\nBasic implementation to get a list of image data objects.\r\n\r\n\r\n\r\nthe Malevich model only recognizes input in Russian. If you're going to paste Cyrillic directly into the code rather than filter an English prompt through the translate API, you will need to put this at the top of the file:"
] |
[
"TAGS\n#PyTorch #Transformers #text-to-image #ru #en #license-cc-by-sa-4.0 #region-us \n",
"# Sunset Cities\r\nThis is the Malevich ruDALL-E model finetuned on anime screenshots of big cities at sunset.\r\n<img style=\"text-align:center; display:block;\" src=\"URL width=\"256\">",
"### installation",
"### How to use\r\nBasic implementation to get a list of image data objects.\r\n\r\n\r\n\r\nthe Malevich model only recognizes input in Russian. If you're going to paste Cyrillic directly into the code rather than filter an English prompt through the translate API, you will need to put this at the top of the file:"
] |
text-generation
|
transformers
|
# Harry
|
{"tags": ["conversational"]}
|
XuguangAi/DialoGPT-small-Harry
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry
|
[
"# Harry"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry"
] |
text-generation
|
transformers
|
# Leslie
|
{"tags": ["conversational"]}
|
XuguangAi/DialoGPT-small-Leslie
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Leslie
|
[
"# Leslie"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Leslie"
] |
text-generation
|
transformers
|
# Rick
|
{"tags": ["conversational"]}
|
XuguangAi/DialoGPT-small-Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick
|
[
"# Rick"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick"
] |
text-classification
|
transformers
|
# Toxic language detection
## Model description
A toxic language detection model trained on tweets. The base model is Roberta-large. For more information,
including the **training data**, **limitations and bias**, please refer to the [paper](https://arxiv.org/pdf/2102.00086.pdf) and
Github [repo](https://github.com/XuhuiZhou/Toxic_Debias) for more details.
#### How to use
Note that LABEL_1 means toxic and LABEL_0 means non-toxic in the output.
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='Xuhui/ToxDect-roberta-large', return_all_scores=True)
prediction = classifier("You are f**king stupid!", )
print(prediction)
"""
Output:
[[{'label': 'LABEL_0', 'score': 0.002632011892274022}, {'label': 'LABEL_1', 'score': 0.9973680377006531}]]
"""
```
## Training procedure
The random seed for this model is 22. For other details, please refer to the Github [repo](https://github.com/XuhuiZhou/Toxic_Debias) for more details.
### BibTeX entry and citation info
```bibtex
@inproceedings{zhou-etal-2020-debiasing,
title = {Challenges in Automated Debiasing for Toxic Language Detection},
author = {Zhou, Xuhui and Sap, Maarten and Swayamdipta, Swabha and Choi, Yejin and Smith, Noah A.},
booktitle = {EACL},
abbr = {EACL},
html = {https://www.aclweb.org/anthology/2021.eacl-main.274.pdf},
code = {https://github.com/XuhuiZhou/Toxic_Debias},
year = {2021},
bibtex_show = {true},
selected = {true}
}
```
|
{"language": [], "tags": [], "datasets": [], "metrics": []}
|
Xuhui/ToxDect-roberta-large
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:2102.00086",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2102.00086"
] |
[] |
TAGS
#transformers #pytorch #roberta #text-classification #arxiv-2102.00086 #autotrain_compatible #endpoints_compatible #region-us
|
# Toxic language detection
## Model description
A toxic language detection model trained on tweets. The base model is Roberta-large. For more information,
including the training data, limitations and bias, please refer to the paper and
Github repo for more details.
#### How to use
Note that LABEL_1 means toxic and LABEL_0 means non-toxic in the output.
## Training procedure
The random seed for this model is 22. For other details, please refer to the Github repo.
### BibTeX entry and citation info
|
[
"# Toxic language detection",
"## Model description\n\nA toxic language detection model trained on tweets. The base model is Roberta-large. For more information, \nincluding the training data, limitations and bias, please refer to the paper and\nGithub repo for more details.",
"#### How to use\nNote that LABEL_1 means toxic and LABEL_0 means non-toxic in the output.",
"## Training procedure\nThe random seed for this model is 22. For other details, please refer to the Github repo for more details.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #arxiv-2102.00086 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Toxic language detection",
"## Model description\n\nA toxic language detection model trained on tweets. The base model is Roberta-large. For more information, \nincluding the training data, limitations and bias, please refer to the paper and\nGithub repo for more details.",
"#### How to use\nNote that LABEL_1 means toxic and LABEL_0 means non-toxic in the output.",
"## Training procedure\nThe random seed for this model is 22. For other details, please refer to the Github repo for more details.",
"### BibTeX entry and citation info"
] |
text-generation
|
transformers
|
# 经典昆曲欣赏 期末作业 (Classic Kunqu Opera Appreciation: Final Assignment)
## KunquChat
Author: 1900012921 俞跃江 (Yu Yuejiang)
|
{}
|
YYJ/KunquChat
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 经典昆曲欣赏 期末作业 (Classic Kunqu Opera Appreciation: Final Assignment)
## KunquChat
Author: 1900012921 俞跃江 (Yu Yuejiang)
|
[
"# 经典昆曲欣赏 期末作业",
"## KunquChat\nAuthor: 1900012921 俞跃江"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 经典昆曲欣赏 期末作业",
"## KunquChat\nAuthor: 1900012921 俞跃江"
] |
text-classification
|
transformers
|
# Model description
This model is an Arabic language sentiment analysis pretrained model.
The model is built on top of the CAMelBERT_msa_sixteenth BERT-based model.
We used the HARD dataset of hotel reviews to fine-tune the model.
The dataset's original five-star rating labels were mapped to 3 labels:
- POSITIVE: for ratings > 3 stars
- NEUTRAL: for a 3 star rating
- NEGATIVE: for ratings < 3 stars
This first prototype was trained for 3 epochs over 1 hour using Colab with TPU acceleration.
# Examples
Here are some examples in Arabic to test:
- Excellent -> ممتاز (Happy)
- I'm sad -> أنا حزين (Sad)
- Nothing -> لا شيء (Neutral)
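The examples above can be run with a short usage sketch. This is only an illustration under a few assumptions: the standard `transformers` API, a TensorFlow checkpoint (only TF weights are tagged on this repo), and an `id2label` mapping that follows the 3-label scheme described above.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Yah216/Sentiment_Analysis_CAMelBERT_msa_sixteenth_HARD"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# The three Arabic examples listed above.
texts = ["ممتاز", "أنا حزين", "لا شيء"]
inputs = tokenizer(texts, return_tensors="tf", padding=True)
logits = model(**inputs).logits
pred_ids = tf.argmax(logits, axis=-1).numpy()

# Label names come from the checkpoint's config; they are assumed to follow
# the POSITIVE / NEUTRAL / NEGATIVE scheme described in the model description.
for text, idx in zip(texts, pred_ids):
    print(text, "->", model.config.id2label[int(idx)])
```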
# Contact
If you have questions or improvement remarks, feel free to contact me on my LinkedIn profile: https://www.linkedin.com/in/yahya-ghrab/
|
{"language": "ar", "widget": [{"text": "\u0645\u0645\u062a\u0627\u0632"}, {"text": "\u0623\u0646\u0627 \u062d\u0632\u064a\u0646"}, {"text": "\u0644\u0627 \u0634\u064a\u0621"}]}
|
Yah216/Sentiment_Analysis_CAMelBERT_msa_sixteenth_HARD
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar"
] |
TAGS
#transformers #tf #bert #text-classification #ar #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Model description
This model is an Arabic language sentiment analysis pretrained model.
The model is built on top of the CAMelBERT_msa_sixteenth BERT-based model.
We used the HARD dataset of hotel reviews to fine-tune the model.
The dataset's original five-star rating labels were mapped to 3 labels:
- POSITIVE: for ratings > 3 stars
- NEUTRAL: for a 3 star rating
- NEGATIVE: for ratings < 3 stars
This first prototype was trained for 3 epochs over 1 hour using Colab with TPU acceleration.
# Examples
Here are some examples in Arabic to test:
- Excellent -> ممتاز (Happy)
- I'm sad -> أنا حزين (Sad)
- Nothing -> لا شيء (Neutral)
# Contact
If you have questions or improvement remarks, feel free to contact me on my LinkedIn profile: URL
|
[
"# Model description\n\nThis model is an Arabic language sentiment analysis pretrained model.\nThe model is built on top of the CAMelBERT_msa_sixteenth BERT-based model.\nWe used the HARD dataset of hotels review to fine tune the model.\nThe dataset original labels based on a five-star rating were modified to a 3 label data: \n- POSITIVE: for ratings > 3 stars\n- NEUTRAL: for a 3 star rating\n- NEGATIVE: for ratings < 3 stars\n\nThis first prototype was trained on 3 epochs for 1 hours using Colab and a TPU acceleration.",
"# Examples\n\nHere are some examples in Arabic to test :\n- Excellent -> ممتاز(Happy)\n- I'am sad -> أنا حزين (Sad)\n- Nothing -> لا شيء (Neutral)",
"# Contact\nIf you have questions or improvement remarks, feel free to contact me on my LinkedIn profile: URL"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #ar #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Model description\n\nThis model is an Arabic language sentiment analysis pretrained model.\nThe model is built on top of the CAMelBERT_msa_sixteenth BERT-based model.\nWe used the HARD dataset of hotels review to fine tune the model.\nThe dataset original labels based on a five-star rating were modified to a 3 label data: \n- POSITIVE: for ratings > 3 stars\n- NEUTRAL: for a 3 star rating\n- NEGATIVE: for ratings < 3 stars\n\nThis first prototype was trained on 3 epochs for 1 hours using Colab and a TPU acceleration.",
"# Examples\n\nHere are some examples in Arabic to test :\n- Excellent -> ممتاز(Happy)\n- I'am sad -> أنا حزين (Sad)\n- Nothing -> لا شيء (Neutral)",
"# Contact\nIf you have questions or improvement remarks, feel free to contact me on my LinkedIn profile: URL"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2086
- Accuracy: 0.9255
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8249 | 1.0 | 250 | 0.3042 | 0.9085 | 0.9068 |
| 0.2437 | 2.0 | 500 | 0.2086 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9255, "name": "Accuracy"}, {"type": "f1", "value": 0.9257196896784097, "name": "F1"}]}]}]}
|
Yaia/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2086
* Accuracy: 0.9255
* F1: 0.9257
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
null | null |
ONNX version of the message-intent model, to be used on a GPU machine.
|
{}
|
Yanjie/message-intent-onnx
| null |
[
"onnx",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#onnx #region-us
|
ONNX version of the message-intent model, to be used on a GPU machine.
|
[] |
[
"TAGS\n#onnx #region-us \n"
] |
text-classification
|
transformers
|
This is the concierge intent model, fine-tuned on the DistilBERT uncased model.
|
{}
|
Yanjie/message-intent
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
This is the concierge intent model, fine-tuned on the DistilBERT uncased model.
|
[] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
This is the concierge preamble model, fine-tuned on the DistilBERT uncased model.
|
{}
|
Yanjie/message-preamble
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
This is the concierge preamble model, fine-tuned on the DistilBERT uncased model.
|
[] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
#test
|
{"tags": ["conversational"]}
|
Yankee/test1234
| null |
[
"transformers",
"pytorch",
"conversational",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #conversational #endpoints_compatible #region-us
|
#test
|
[] |
[
"TAGS\n#transformers #pytorch #conversational #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Domain-adaptive pretraining of camembert-base using 15 GB of French Tweets
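As a rough illustration of how this checkpoint can be queried, here is a minimal, hypothetical fill-mask sketch; it assumes the standard `transformers` pipeline API, and the example sentence is made up (CamemBERT-style models use `<mask>` as the mask token).
```python
from transformers import pipeline

# Load the domain-adapted checkpoint through the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="Yanzhu/bertweetfr-base")

# Hypothetical French tweet-like sentence with one masked token.
for prediction in fill_mask("Ce match était vraiment <mask> !"):
    print(prediction["token_str"], round(prediction["score"], 3))
```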
|
{"language": "fr"}
|
Yanzhu/bertweetfr-base
| null |
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #fill-mask #fr #autotrain_compatible #endpoints_compatible #region-us
|
Domain-adaptive pretraining of camembert-base using 15 GB of French Tweets
|
[] |
[
"TAGS\n#transformers #pytorch #camembert #fill-mask #fr #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
French NER model for tweets. Fine-tuned on the CAP2017 dataset.
label_list = ['O',
'B-person',
'I-person',
'B-musicartist',
'I-musicartist',
'B-org',
'I-org',
'B-geoloc',
'I-geoloc',
'B-product',
'I-product',
'B-transportLine',
'I-transportLine',
'B-media',
'I-media',
'B-sportsteam',
'I-sportsteam',
'B-event',
'I-event',
'B-tvshow',
'I-tvshow',
'B-movie',
'I-movie',
'B-facility',
'I-facility',
'B-other',
'I-other']
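Below is a minimal, hypothetical inference sketch using the label set above. It assumes the standard `transformers` token-classification pipeline; `aggregation_strategy="simple"` merges B-/I- pieces into whole entities, and the example tweet is made up.
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="Yanzhu/bertweetfr_ner",
    aggregation_strategy="simple",
)

# Hypothetical French tweet; entities are printed with their aggregated label.
for entity in ner("Emmanuel Macron est à Paris pour le match du PSG."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```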
|
{}
|
Yanzhu/bertweetfr_ner
| null |
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #camembert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
French NER model for tweets. Fine-tuned on the CAP2017 dataset.
label_list = ['O',
'B-person',
'I-person',
'B-musicartist',
'I-musicartist',
'B-org',
'I-org',
'B-geoloc',
'I-geoloc',
'B-product',
'I-product',
'B-transportLine',
'I-transportLine',
'B-media',
'I-media',
'B-sportsteam',
'I-sportsteam',
'B-event',
'I-event',
'B-tvshow',
'I-tvshow',
'B-movie',
'I-movie',
'B-facility',
'I-facility',
'B-other',
'I-other']
|
[] |
[
"TAGS\n#transformers #pytorch #camembert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null |
French roBERTa-base model fine-tuned for Offensive Language Identification on COVID-19 tweets.
|
{}
|
Yanzhu/bertweetfr_offensiveness
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
French roBERTa-base model fine-tuned for Offensive Language Identification on COVID-19 tweets.
|
[] |
[
"TAGS\n#region-us \n"
] |
automatic-speech-recognition
| null |
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using a subset of 40,000 utterances from the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). WER was tested on ~4200 utterances held out from training.
When using this model, make sure that your speech input is sampled at 16kHz.
The training script can be found at: train.py
Data prep notebook: https://colab.research.google.com/drive/1JMlZPU-DrezXjZ2t7sOVqn7CJjZhdK2q?usp=sharing
Inference notebook: https://colab.research.google.com/drive/1uKC2cK9JfUPDTUHbrNdOYqKtNozhxqgZ?usp=sharing
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
# model = model.to("cuda")
TEST_AUDIO_SR = 48_000  # sampling rate of your input audio; adjust to match your file
resampler = torchaudio.transforms.Resample(TEST_AUDIO_SR, 16_000)
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch)
speech = resampler(speech_array).squeeze().numpy()
return speech
speech_array = speech_file_to_array_fn("test_file.wav")
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
preds = processor.batch_decode(predicted_ids)[0]
print(preds.replace("[PAD]",""))
```
**Test Result**: WER on ~4200 utterances: 32.45 %
|
{"language": "Bengali", "license": "cc-by-sa-4.0", "tags": ["bn", "audio", "automatic-speech-recognition", "speech"], "datasets": ["OpenSLR"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Bengali by Arijit", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR", "type": "OpenSLR", "args": "ben"}, "metrics": [{"type": "wer", "value": 32.45, "name": "Test WER"}]}]}]}
|
YasinShihab/asr-en-bn-test
| null |
[
"bn",
"audio",
"automatic-speech-recognition",
"speech",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"Bengali"
] |
TAGS
#bn #audio #automatic-speech-recognition #speech #dataset-OpenSLR #license-cc-by-sa-4.0 #model-index #region-us
|
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Bengali using a subset of 40,000 utterances from the Bengali ASR training data set containing ~196K utterances. WER was tested on ~4200 utterances held out from training.
When using this model, make sure that your speech input is sampled at 16kHz.
The training script can be found at: URL
Data prep notebook: URL
Inference notebook: URL
## Usage
The model can be used directly (without a language model) as follows:
Test Result: WER on ~4200 utterances: 32.45 %
|
[
"# Wav2Vec2-Large-XLSR-Bengali\nFine-tuned facebook/wav2vec2-large-xlsr-53 Bengali using a subset of 40,000 utterances from Bengali ASR training data set containing ~196K utterances. Tested WER using ~4200 held out from training.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\nTrain Script can be Found at : URL \n Data Prep Notebook : URL\n Inference Notebook : URL",
"## Usage\n\nThe model can be used directly (without a language model) as follows:\n\nTest Result: WER on ~4200 utterance : 32.45 %"
] |
[
"TAGS\n#bn #audio #automatic-speech-recognition #speech #dataset-OpenSLR #license-cc-by-sa-4.0 #model-index #region-us \n",
"# Wav2Vec2-Large-XLSR-Bengali\nFine-tuned facebook/wav2vec2-large-xlsr-53 Bengali using a subset of 40,000 utterances from Bengali ASR training data set containing ~196K utterances. Tested WER using ~4200 held out from training.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\nTrain Script can be Found at : URL \n Data Prep Notebook : URL\n Inference Notebook : URL",
"## Usage\n\nThe model can be used directly (without a language model) as follows:\n\nTest Result: WER on ~4200 utterance : 32.45 %"
] |
automatic-speech-recognition
|
transformers
|
# Ukrainian STT model (with Language Model)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
It achieves the following results on the evaluation set without the language model:
- Loss: 0.1875
- Wer: 0.2033
- Cer: 0.0384
## Model description
On 100 test examples the model shows the following results:
Without LM:
- WER: 0.1862
- CER: 0.0277
With LM:
- WER: 0.1218
- CER: 0.0190
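A minimal transcription sketch is given below. It is an illustration under assumptions rather than part of the original card: it uses the standard `transformers` ASR pipeline, expects 16 kHz mono audio, relies on `pyctcdecode` and `kenlm` being installed so the decoder files shipped with this repo are applied during decoding, and the file name is a placeholder.
```python
from transformers import pipeline

# With pyctcdecode + kenlm installed, the pipeline picks up the decoder files
# shipped alongside the acoustic model and applies the n-gram LM while decoding.
asr = pipeline(
    "automatic-speech-recognition",
    model="Yehor/wav2vec2-xls-r-1b-uk-with-lm",
)

# "sample_uk.wav" is a placeholder for a 16 kHz mono recording in Ukrainian.
result = asr("sample_uk.wav", chunk_length_s=30)
print(result["text"])
```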
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 |
| 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 |
| 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 |
| 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 |
| 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 |
| 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 |
| 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 |
| 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 |
| 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 |
| 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 |
| 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 |
| 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --dataset mozilla-foundation/common_voice_7_0 --config uk --split test
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 21.52 | 14.62 |
|
{"language": ["uk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "uk"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-xls-r-1b-uk-with-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "uk"}, "metrics": [{"type": "wer", "value": 14.62, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 48.72, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 40.66, "name": "Test WER"}]}]}]}
|
Yehor/wav2vec2-xls-r-1b-uk-with-lm
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"uk",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"uk"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #uk #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
Ukrainian STT model (with Language Model)
=========================================
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech\_recognition\_uk
⭐ See other Ukrainian models - URL
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - UK dataset.
It achieves the following results on the evaluation set without the language model:
* Loss: 0.1875
* Wer: 0.2033
* Cer: 0.0384
Model description
-----------------
On 100 test examples the model shows the following results:
Without LM:
* WER: 0.1862
* CER: 0.0277
With LM:
* WER: 0.1218
* CER: 0.0190
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 20
* total\_train\_batch\_size: 160
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.1.dev0
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test'
### Eval results on Common Voice 7 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 160\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.1.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #uk #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 160\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.1.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
# Ukrainian STT model (with the Big Language Model formed on News Dataset)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
Attribution to the dataset of Language Model:
- Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. https://lang.org.ua/uk/corpora/#anchor4
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 |
| 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 |
| 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 |
| 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 |
| 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 |
| 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 |
| 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 |
| 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 |
| 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 |
| 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 |
| 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 |
| 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
{"language": ["uk"], "license": "cc-by-nc-sa-4.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "uk"], "xdatasets": ["mozilla-foundation/common_voice_7_0"]}
|
Yehor/wav2vec2-xls-r-1b-uk-with-news-lm
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"uk",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"uk"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #uk #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
|
Ukrainian STT model (with the Big Language Model formed on News Dataset)
========================================================================
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech\_recognition\_uk
⭐ See other Ukrainian models - URL
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - UK dataset.
Attribution to the dataset of Language Model:
* Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. URL
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 20
* total\_train\_batch\_size: 160
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 160\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #uk #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 160\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# Ukrainian STT model (with Language Model)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
- Have a look at an updated 300m model: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm
- Have a look at a better model with more parameters: https://huggingface.co/Yehor/wav2vec2-xls-r-1b-uk-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3015
- Wer: 0.3377
- Cer: 0.0708
The above results present evaluation without the language model.
## Model description
On 100 test examples the model shows the following results:
Without LM:
- WER: 0.2647
- CER: 0.0469
With LM:
- WER: 0.1568
- CER: 0.0289
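For reference, a lower-level sketch of LM-based decoding is shown below. It is illustrative only and rests on a few assumptions: the repo ships `pyctcdecode` decoder files next to the acoustic model, `pyctcdecode` and `kenlm` are installed, and the input path is a placeholder for a mono recording.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "Yehor/wav2vec2-xls-r-300m-uk-with-lm"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample_uk.wav" is a placeholder; resample whatever rate it has to 16 kHz.
waveform, sample_rate = torchaudio.load("sample_uk.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# batch_decode on a processor with LM runs beam search with the n-gram model.
print(processor.batch_decode(logits.numpy()).text[0])
```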
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.0255 | 7.93 | 500 | 2.5514 | 0.9921 | 0.9047 |
| 1.3809 | 15.86 | 1000 | 0.4065 | 0.5361 | 0.1201 |
| 1.2355 | 23.8 | 1500 | 0.3474 | 0.4618 | 0.1033 |
| 1.1956 | 31.74 | 2000 | 0.3617 | 0.4580 | 0.1005 |
| 1.1416 | 39.67 | 2500 | 0.3182 | 0.4074 | 0.0891 |
| 1.0996 | 47.61 | 3000 | 0.3166 | 0.3985 | 0.0875 |
| 1.0427 | 55.55 | 3500 | 0.3116 | 0.3835 | 0.0828 |
| 0.9961 | 63.49 | 4000 | 0.3137 | 0.3757 | 0.0807 |
| 0.9575 | 71.42 | 4500 | 0.2992 | 0.3632 | 0.0771 |
| 0.9154 | 79.36 | 5000 | 0.3015 | 0.3502 | 0.0740 |
| 0.8994 | 87.3 | 5500 | 0.3004 | 0.3425 | 0.0723 |
| 0.871 | 95.24 | 6000 | 0.3016 | 0.3394 | 0.0713 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
{"language": ["uk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "uk"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-uk-with-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "uk"}, "metrics": [{"type": "wer", "value": 26.47, "name": "Test WER"}, {"type": "cer", "value": 2.9, "name": "Test CER"}]}]}]}
|
Yehor/wav2vec2-xls-r-300m-uk-with-lm
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"uk",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"uk"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #uk #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Ukrainian STT model (with Language Model)
=========================================
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech\_recognition\_uk
⭐ See other Ukrainian models - URL
* Have a look at an updated 300m model: URL
* Have a look at a better model with more parameters: URL
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - UK dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3015
* Wer: 0.3377
* Cer: 0.0708
The above results present evaluation without the language model.
Model description
-----------------
On 100 test examples the model shows the following results:
Without LM:
* WER: 0.2647
* CER: 0.0469
With LM:
* WER: 0.1568
* CER: 0.0289
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 20
* total\_train\_batch\_size: 160
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 160\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #uk #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 160\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.1.dev0\n* Tokenizers 0.11.0"
] |
null | null |
# ProteinLM
|
{}
|
Yijia-Xiao/ProteinLM
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# ProteinLM
|
[
"# ProteinLM"
] |
[
"TAGS\n#region-us \n",
"# ProteinLM"
] |
question-answering
|
transformers
|
# Question Answering model for Hindi and Tamil
This model is part of the ensemble that ranked 4/943 in the [Hindi and Tamil Question Answering](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition held by Google Research India at Kaggle.
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Yuchen/muril-large-cased-hita-qa")
model = AutoModelForQuestionAnswering.from_pretrained("Yuchen/muril-large-cased-hita-qa")
```
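For completeness, here is a short inference sketch built on top of the snippet above; the question-answering pipeline handles tokenization, span prediction, and answer decoding, and the Hindi question/context pair is purely illustrative.
```python
from transformers import pipeline

# Reuse the fine-tuned checkpoint through the question-answering pipeline.
qa = pipeline("question-answering", model="Yuchen/muril-large-cased-hita-qa")

result = qa(
    question="भारत की राजधानी क्या है?",      # "What is the capital of India?"
    context="भारत की राजधानी नई दिल्ली है।",  # "The capital of India is New Delhi."
)
print(result["answer"], round(result["score"], 3))
```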
|
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
|
Yuchen/muril-large-cased-hita-qa
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #license-apache-2.0 #endpoints_compatible #region-us
|
# Question Answering model for Hindi and Tamil
This model is part of the ensemble that ranked 4/943 in the Hindi and Tamil Question Answering competition held by Google Research India at Kaggle.
|
[
"# Question Answering model for Hindi and Tamil\n\nThis model is part of the ensemble that ranked 4/943 in the Hindi and Tamil Question Answering competition held by Google Research India at Kaggle."
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Question Answering model for Hindi and Tamil\n\nThis model is part of the ensemble that ranked 4/943 in the Hindi and Tamil Question Answering competition held by Google Research India at Kaggle."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9825
- Mae: 0.4956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1432 | 1.0 | 308 | 1.0559 | 0.5133 |
| 0.9883 | 2.0 | 616 | 0.9825 | 0.4956 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc", "results": []}]}
|
Yuri/xlm-roberta-base-finetuned-marc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
xlm-roberta-base-finetuned-marc
===============================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9825
* Mae: 0.4956
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |