pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25) | arxiv (listlengths 0-201) | languages (listlengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (listlengths 0-722) | processed_texts (listlengths 1-723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
summarization
|
transformers
|
# Indonesian T5 Summarization Base Model
Finetuned T5 base summarization model for Indonesian.
## Finetuning Corpus
The `t5-base-indonesian-summarization-cased` model is based on `t5-base-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05) and was finetuned on the [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset.
## Load Finetuned Model
```python
from transformers import T5Tokenizer, T5Model, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
```
## Code Sample
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
# Indonesian article text to summarize
ARTICLE_TO_SUMMARIZE = ""
# generate summary
input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt')
summary_ids = model.generate(input_ids,
                             min_length=20,
                             max_length=80,
                             num_beams=10,
                             repetition_penalty=2.5,
                             length_penalty=1.0,
                             early_stopping=True,
                             no_repeat_ngram_size=2,
                             use_cache=True,
                             do_sample=True,
                             temperature=0.8,
                             top_k=50,
                             top_p=0.95)
summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```
Output:
```
```
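Alternatively, the checkpoint can be wrapped in the `summarization` pipeline. The snippet below is a minimal sketch added for illustration; the article text is only a placeholder:
```python
from transformers import pipeline

# Minimal sketch: drive the finetuned checkpoint through the summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="cahya/t5-base-indonesian-summarization-cased",
    tokenizer="cahya/t5-base-indonesian-summarization-cased",
)

article = "..."  # placeholder for the Indonesian article text
print(summarizer(article, min_length=20, max_length=80, num_beams=10)[0]["summary_text"])
```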
|
{"language": "id", "tags": ["pipeline:summarization", "summarization", "t5"], "datasets": ["id_liputan6"]}
|
cahya/t5-base-indonesian-summarization-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"pipeline:summarization",
"summarization",
"id",
"dataset:id_liputan6",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #pipeline-summarization #summarization #id #dataset-id_liputan6 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Indonesian T5 Summarization Base Model
Finetuned T5 base summarization model for Indonesian.
## Finetuning Corpus
't5-base-indonesian-summarization-cased' model is based on 't5-base-bahasa-summarization-cased' by huseinzol05, finetuned using id_liputan6 dataset.
## Load Finetuned Model
## Code Sample
Output:
|
[
"# Indonesian T5 Summarization Base Model\n\nFinetuned T5 base summarization model for Indonesian.",
"## Finetuning Corpus\n\n't5-base-indonesian-summarization-cased' model is based on 't5-base-bahasa-summarization-cased' by huseinzol05, finetuned using id_liputan6 dataset.",
"## Load Finetuned Model",
"## Code Sample\n\n\n\nOutput:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #pipeline-summarization #summarization #id #dataset-id_liputan6 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Indonesian T5 Summarization Base Model\n\nFinetuned T5 base summarization model for Indonesian.",
"## Finetuning Corpus\n\n't5-base-indonesian-summarization-cased' model is based on 't5-base-bahasa-summarization-cased' by huseinzol05, finetuned using id_liputan6 dataset.",
"## Load Finetuned Model",
"## Code Sample\n\n\n\nOutput:"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Base-Turkish-Artificial-CV, a
[cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial)
model fine-tuned on the [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
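For a quick single-file transcription, the `automatic-speech-recognition` pipeline can be used instead of the manual preprocessing above. This is a minimal sketch added for illustration; it assumes `ffmpeg` is available so the pipeline can decode and resample the recording to 16kHz, and `sample.mp3` is a placeholder path:
```python
from transformers import pipeline

# Minimal sketch: the ASR pipeline decodes the file and resamples it to 16 kHz internally.
asr = pipeline(
    "automatic-speech-recognition",
    model="cahya/wav2vec2-base-turkish-artificial-cv",
)

print(asr("sample.mp3")["text"])  # "sample.mp3" is a placeholder Turkish recording
```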
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 13.70 %
## Training
The Common Voice `train`, `validation`, `other` and `invalidated` splits were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Base Turkish by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 13.7, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-base-turkish-artificial-cv
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Base-Turkish-Artificial-CV, a fine-tuned
cahya/wav2vec2-base-turkish-artificial
model on Turkish Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
Test Result: 13.70 %
## Training
The Common Voice 'train', 'validation', other and invalidated
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Turkish\n\nThis is the model for Wav2Vec2-Base-Turkish-Artificial-CV, a fine-tuned \ncahya/wav2vec2-base-turkish-artificial\nmodel on Turkish Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 13.70 %",
"## Training\n\nThe Common Voice 'train', 'validation', other and invalidated \n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Turkish\n\nThis is the model for Wav2Vec2-Base-Turkish-Artificial-CV, a fine-tuned \ncahya/wav2vec2-base-turkish-artificial\nmodel on Turkish Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 13.70 %",
"## Training\n\nThe Common Voice 'train', 'validation', other and invalidated \n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned [ceyda/wav2vec2-base-760](https://huggingface.co/ceyda/wav2vec2-base-760)
on the [Turkish Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.60 %
## Training
The Artificial Common Voice `train` and `validation` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Base Turkish with Artificial Voices by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 57.6, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-base-turkish-artificial
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned ceyda/wav2vec2-base-760
on the Turkish Artificial Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
Test Result: 57.60 %
## Training
The Artificial Common Voice 'train', 'validation' is used to fine tune the model
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Turkish\nFine-tuned ceyda/wav2vec2-base-760\non the Turkish Artificial Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 57.60 %",
"## Training\n\nThe Artificial Common Voice 'train', 'validation' is used to fine tune the model\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Turkish\nFine-tuned ceyda/wav2vec2-base-760\non the Turkish Artificial Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 57.60 %",
"## Training\n\nThe Artificial Common Voice 'train', 'validation' is used to fine tune the model\n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2893
- Wer: 0.2713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
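The hyperparameters above map roughly to the `TrainingArguments` sketched below. This is illustrative only (the exact training script is not included in this card) and the `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; not the exact configuration used.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-turkish-cv7",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective batch size 512 on a single device
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=100.0,
    fp16=True,  # "Native AMP" mixed precision
)
```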
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8647 | 14.28 | 200 | 0.2758 | 0.2568 |
| 1.3376 | 28.56 | 400 | 0.2754 | 0.2722 |
| 1.1975 | 42.84 | 600 | 0.2929 | 0.2901 |
| 1.1024 | 57.14 | 800 | 0.2904 | 0.2928 |
| 1.0257 | 71.42 | 1000 | 0.2915 | 0.2823 |
| 0.9628 | 85.7 | 1200 | 0.2936 | 0.2749 |
| 0.9109 | 99.98 | 1400 | 0.2893 | 0.2713 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["tr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
cahya/wav2vec2-base-turkish-cv7
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of cahya/wav2vec2-base-turkish-artificial on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - TR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2893
* Wer: 0.2713
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 128
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 512
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 512\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 512\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-1000](https://huggingface.co/./checkpoint-1000) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3282
- Wer: 0.2836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0671 | 2.04 | 200 | 0.3079 | 0.2752 |
| 0.6433 | 4.08 | 400 | 0.2728 | 0.2848 |
| 0.5687 | 6.12 | 600 | 0.2882 | 0.3036 |
| 0.5355 | 8.16 | 800 | 0.2778 | 0.2920 |
| 0.5116 | 10.2 | 1000 | 0.2906 | 0.3014 |
| 0.5313 | 9.16 | 1200 | 0.2984 | 0.3273 |
| 0.4996 | 10.69 | 1400 | 0.3170 | 0.3344 |
| 0.4845 | 12.21 | 1600 | 0.3202 | 0.3634 |
| 0.5092 | 13.74 | 1800 | 0.3167 | 0.3373 |
| 0.4777 | 15.27 | 2000 | 0.3292 | 0.3386 |
| 0.4651 | 16.79 | 2200 | 0.3070 | 0.3427 |
| 0.461 | 18.32 | 2400 | 0.3149 | 0.3561 |
| 0.4481 | 19.85 | 2600 | 0.3292 | 0.3441 |
| 0.4479 | 21.37 | 2800 | 0.3142 | 0.3209 |
| 0.4305 | 22.9 | 3000 | 0.3525 | 0.3547 |
| 0.4254 | 24.43 | 3200 | 0.3414 | 0.3400 |
| 0.4066 | 25.95 | 3400 | 0.3118 | 0.3207 |
| 0.4043 | 27.48 | 3600 | 0.3418 | 0.3483 |
| 0.3985 | 29.01 | 3800 | 0.3254 | 0.3166 |
| 0.3982 | 30.53 | 4000 | 0.3306 | 0.3453 |
| 0.3929 | 32.06 | 4200 | 0.3262 | 0.3229 |
| 0.378 | 33.59 | 4400 | 0.3546 | 0.3336 |
| 0.4062 | 35.11 | 4600 | 0.3174 | 0.3457 |
| 0.3648 | 36.64 | 4800 | 0.3377 | 0.3357 |
| 0.3609 | 38.17 | 5000 | 0.3346 | 0.3520 |
| 0.3483 | 39.69 | 5200 | 0.3350 | 0.3526 |
| 0.3548 | 41.22 | 5400 | 0.3330 | 0.3406 |
| 0.3446 | 42.75 | 5600 | 0.3398 | 0.3372 |
| 0.3346 | 44.27 | 5800 | 0.3449 | 0.3288 |
| 0.3309 | 45.8 | 6000 | 0.3320 | 0.3144 |
| 0.326 | 47.33 | 6200 | 0.3400 | 0.3279 |
| 0.3189 | 48.85 | 6400 | 0.3400 | 0.3150 |
| 0.3165 | 50.38 | 6600 | 0.3359 | 0.2995 |
| 0.3132 | 51.91 | 6800 | 0.3343 | 0.3096 |
| 0.3092 | 53.44 | 7000 | 0.3224 | 0.3029 |
| 0.2995 | 54.96 | 7200 | 0.3205 | 0.2985 |
| 0.304 | 56.49 | 7400 | 0.3523 | 0.3034 |
| 0.2952 | 58.02 | 7600 | 0.3289 | 0.2934 |
| 0.2875 | 59.54 | 7800 | 0.3350 | 0.3008 |
| 0.2868 | 61.07 | 8000 | 0.3537 | 0.3227 |
| 0.2875 | 62.6 | 8200 | 0.3389 | 0.2970 |
| 0.2778 | 64.12 | 8400 | 0.3370 | 0.2960 |
| 0.2706 | 65.65 | 8600 | 0.3250 | 0.2802 |
| 0.2669 | 67.18 | 8800 | 0.3351 | 0.2903 |
| 0.2615 | 68.7 | 9000 | 0.3382 | 0.2989 |
| 0.2563 | 70.23 | 9200 | 0.3312 | 0.2975 |
| 0.2546 | 71.76 | 9400 | 0.3212 | 0.3003 |
| 0.2482 | 73.28 | 9600 | 0.3337 | 0.3091 |
| 0.2504 | 74.81 | 9800 | 0.3308 | 0.3110 |
| 0.2456 | 76.34 | 10000 | 0.3157 | 0.3118 |
| 0.2363 | 77.86 | 10200 | 0.3251 | 0.3144 |
| 0.2319 | 79.39 | 10400 | 0.3253 | 0.3038 |
| 0.2266 | 80.92 | 10600 | 0.3374 | 0.3038 |
| 0.2279 | 82.44 | 10800 | 0.3268 | 0.2964 |
| 0.2231 | 83.97 | 11000 | 0.3278 | 0.2950 |
| 0.2185 | 85.5 | 11200 | 0.3462 | 0.2981 |
| 0.2245 | 87.02 | 11400 | 0.3311 | 0.2895 |
| 0.223 | 88.55 | 11600 | 0.3325 | 0.2877 |
| 0.2121 | 90.08 | 11800 | 0.3337 | 0.2828 |
| 0.2126 | 91.6 | 12000 | 0.3325 | 0.2808 |
| 0.2027 | 93.13 | 12200 | 0.3277 | 0.2820 |
| 0.2058 | 94.66 | 12400 | 0.3308 | 0.2827 |
| 0.1991 | 96.18 | 12600 | 0.3279 | 0.2820 |
| 0.1991 | 97.71 | 12800 | 0.3300 | 0.2822 |
| 0.1986 | 99.24 | 13000 | 0.3285 | 0.2835 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["tr"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
cahya/wav2vec2-base-turkish-cv8
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #tr #dataset-common_voice #endpoints_compatible #region-us
|
This model is a fine-tuned version of ./checkpoint-1000 on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - TR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3282
* Wer: 0.2836
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 96
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 192
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 96\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 192\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #tr #dataset-common_voice #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 96\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 192\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
#
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
| # | Dataset | WER (%) | CER (%) |
|---|-------------------------------|---------|----------|
| 1 | Common Voice 6.1 | 9.437 | 3.325 |
| 2 | Common Voice 7.0 | 8.147 | 2.802 |
| 3 | Common Voice 8.0 | 8.335 | 2.336 |
| 4 | Speech Recognition Community | 28.011 | 10.66 |
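The card itself does not include evaluation code; the following is a rough sketch of how such figures could be reproduced on the Common Voice Turkish test split. It is added for illustration only and applies no text normalization, so the numbers will differ somewhat from the table above:
```python
import torch
from datasets import Audio, load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish").to("cuda")

test_dataset = load_dataset("common_voice", "tr", split="test")
# Decode the audio column directly at 16 kHz instead of resampling by hand.
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))
wer = load_metric("wer")
cer = load_metric("cer")

def predict(batch):
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits
    batch["prediction"] = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
    return batch

result = test_dataset.map(predict)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["prediction"], references=result["sentence"])))
print("CER: {:.2f}".format(100 * cer.compute(predictions=result["prediction"], references=result["sentence"])))
```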
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The following datasets were used for finetuning:
- [Common Voice 7.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0): the `train`, `validation` and `other` splits were used for training.
- [Media Speech](https://www.openslr.org/108/)
- [Magic Hub](https://magichub.com/)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-06
- train_batch_size: 6
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1224 | 3.45 | 500 | 0.1641 | 0.1396 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"language": ["tr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "tr"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "Wav2Vec2 Base Turkish by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 6.1", "type": "mozilla-foundation/common_voice_7_0", "args": "tr"}, "metrics": [{"type": "wer", "value": 9.437, "name": "Test WER"}, {"type": "cer", "value": 3.325, "name": "Test CER"}, {"type": "wer", "value": 8.147, "name": "Test WER"}, {"type": "cer", "value": 2.802, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "tr"}, "metrics": [{"type": "wer", "value": 28.011, "name": "Test WER"}, {"type": "cer", "value": 10.66, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "tr"}, "metrics": [{"type": "wer", "value": 33.62, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-base-turkish
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"tr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #tr #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of cahya/wav2vec2-base-turkish-artificial-cv on the COMMON\_VOICE - TR dataset.
It achieves the following results on the evaluation set:
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
The following datasets were used for finetuning:
* Common Voice 7.0 TR 'train', 'validation' and 'other' split were used for training.
* Media Speech
* Magic Hub
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-06
* train\_batch\_size: 6
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 24
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 5.0
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-06\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #tr #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-06\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Basque
This is the model for Wav2Vec2-Large-XLSR-Basque, a
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model fine-tuned on the [Basque Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-basque")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-basque")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Basque test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-basque")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-basque")
model.to("cuda")
chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 12.44 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
{"language": "eu", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Basque by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice eu", "type": "common_voice", "args": "eu"}, "metrics": [{"type": "wer", "value": 12.44, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-basque
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"eu",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"eu"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #eu #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Wav2Vec2-Large-XLSR-Basque
This is the model for Wav2Vec2-Large-XLSR-Basque, a fine-tuned
facebook/wav2vec2-large-xlsr-53
model on the Basque Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Basque test data of Common Voice.
Test Result: 12.44 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Basque\n\nThis is the model for Wav2Vec2-Large-XLSR-Basque, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Basque Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Basque test data of Common Voice.\n\n\n\nTest Result: 12.44 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #eu #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Wav2Vec2-Large-XLSR-Basque\n\nThis is the model for Wav2Vec2-Large-XLSR-Basque, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Basque Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Basque test data of Common Voice.\n\n\n\nTest Result: 12.44 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Breton
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Breton Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "br", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
batch["sentence"] = batch["sentence"].replace("ʼ", "'")
batch["sentence"] = batch["sentence"].replace("’", "'")
batch["sentence"] = batch["sentence"].replace('‘', "'")
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
The above code leads to the following prediction for the first two samples:
```
Prediction: ["ne' ler ket don a-benn us netra pa vez zer nic'hed evel-si", 'an eil hag egile']
Reference: ['"n\'haller ket dont a-benn eus netra pa vezer nec\'het evel-se." ', 'an eil hag egile. ']
```
## Evaluation
The model can be evaluated as follows on the Breton test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "br", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model.to("cuda")
chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
batch["sentence"] = batch["sentence"].replace("ʼ", "'")
batch["sentence"] = batch["sentence"].replace("’", "'")
batch["sentence"] = batch["sentence"].replace('‘', "'")
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 41.71 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
{"language": "br", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Breton by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice br", "type": "common_voice", "args": "br"}, "metrics": [{"type": "wer", "value": 41.71, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-breton
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"br",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"br"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #br #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Breton
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the Breton Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
The above code leads to the following prediction for the first two samples:
## Evaluation
The model can be evaluated as follows on the Breton test data of Common Voice.
Test Result: 41.71 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
(will be available soon)
|
[
"# Wav2Vec2-Large-XLSR-Breton\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Breton Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:\n\n\nThe above code leads to the following prediction for the first two samples:",
"## Evaluation\n\nThe model can be evaluated as follows on the Breton test data of Common Voice.\n\n\n\nTest Result: 41.71 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #br #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Breton\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Breton Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:\n\n\nThe above code leads to the following prediction for the first two samples:",
"## Evaluation\n\nThe model can be evaluated as follows on the Breton test data of Common Voice.\n\n\n\nTest Result: 41.71 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Indonesian Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 51.69 %
## Training
The Artificial Common Voice `train`, `validation`, and ... datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesian with Artificial Voice by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 51.69, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-indonesian-artificial
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the Indonesian Artificial Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
Test Result: 51.69 %
## Training
The Artificial Common Voice 'train', 'validation', and ... datasets were used for training.
The script used for training can be found here
(will be available soon)
|
[
"# Wav2Vec2-Large-XLSR-Indonesian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Indonesian Artificial Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 51.69 %",
"## Training\n\nThe Artificial Common Voice 'train', 'validation', and ... datasets were used for training.\n\nThe script used for training can be found here \n(will be available soon)"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Indonesian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Indonesian Artificial Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 51.69 %",
"## Training\n\nThe Artificial Common Voice 'train', 'validation', and ... datasets were used for training.\n\nThe script used for training can be found here \n(will be available soon)"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice) and synthetic voices
generated using [Artificial Common Voicer](https://github.com/cahya-wirawan/artificial-commonvoice), which
in turn is based on Google Text-to-Speech.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian-mix")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian-mix")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian-mix")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian-mix")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 19.36 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesian Mix by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 19.36, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-indonesian-mix
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the Indonesian Common Voice dataset and synthetic voices
generated using Artificial Common Voicer, which
is in turn based on Google Text-to-Speech.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
Test Result: 19.36 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Indonesian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Indonesian Common Voice dataset and synthetic voices\ngenerated using Artificial Common Voicer, which\nagain based on Google Text To Speech.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 19.36 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Indonesian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Indonesian Common Voice dataset and synthetic voices\ngenerated using Artificial Common Voicer, which\nagain based on Google Text To Speech.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 19.36 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
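
For quick experiments, roughly the same inference can also be run through the high-level ASR pipeline. This is a hedged sketch rather than part of the original card; the input file name is a placeholder for any local audio file, and decoding it requires ffmpeg to be installed.

```python
# Sketch: one-line inference with the ASR pipeline.
# "sample.wav" is a placeholder file name; ffmpeg is assumed to be available.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="cahya/wav2vec2-large-xlsr-indonesian")
print(asr("sample.wav"))  # -> {"text": "..."}
```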
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.86 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesian by cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 25.86, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-indonesian
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the Indonesian Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
Test Result: 25.86 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
(will be available soon)
|
[
"# Wav2Vec2-Large-XLSR-Indonesian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Indonesian Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 25.86 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Indonesian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Indonesian Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 25.86 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Javanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [OpenSLR High quality TTS data for Javanese](https://openslr.org/41/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets.utils.download_manager import DownloadManager
from pathlib import Path
import pandas as pd
def load_dataset_javanese():
urls = [
"https://www.openslr.org/resources/41/jv_id_female.zip",
"https://www.openslr.org/resources/41/jv_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"jv_id_female/wavs",
Path(download_dirs[1])/"jv_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"jv_id_female/line_index.tsv",
Path(download_dirs[1])/"jv_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t', names=["path", "client_id", "sentence"]))
dfs[1] = dfs[1].drop(["client_id"], axis=1)
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i]["path"].apply(lambda path: str(data_dirs[i]) + "/" + path + ".wav")
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_javanese()
test_dataset = dataset['test']
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows or using this
[notebook](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb)
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from datasets.utils.download_manager import DownloadManager
from pathlib import Path
import pandas as pd
def load_dataset_javanese():
urls = [
"https://www.openslr.org/resources/41/jv_id_female.zip",
"https://www.openslr.org/resources/41/jv_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"jv_id_female/wavs",
Path(download_dirs[1])/"jv_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"jv_id_female/line_index.tsv",
Path(download_dirs[1])/"jv_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t', names=["path", "client_id", "sentence"]))
dfs[1] = dfs[1].drop(["client_id"], axis=1)
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i]["path"].apply(lambda path: str(data_dirs[i]) + "/" + path + ".wav")
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_javanese()
test_dataset = dataset['test']
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”_\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.61 %
## Training
[OpenSLR High quality TTS data for Javanese](https://openslr.org/41/) was used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb),
as well as the notebook used to [evaluate it](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb).
|
{"language": "jv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["openslr"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Javanese by cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR High quality TTS data for Javanese", "type": "OpenSLR", "args": "jv"}, "metrics": [{"type": "wer", "value": 17.61, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-javanese
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"jv",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"jv"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #jv #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Wav2Vec2-Large-XLSR-Javanese
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the OpenSLR High quality TTS data for Javanese.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows or using this
notebook
Test Result: 17.61 %
## Training
OpenSLR High quality TTS data for Javanese was used for training.
The script used for training can be found here,
as well as the notebook used to evaluate it.
|
[
"# Wav2Vec2-Large-XLSR-Javanese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the OpenSLR High quality TTS data for Javanese.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows or using this\nnotebook\n\n\n\nTest Result: 17.61 %",
"## Training\n\nOpenSLR High quality TTS data for Javanese was used for training.\nThe script used for training can be found here \nand to evaluate it"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #jv #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Wav2Vec2-Large-XLSR-Javanese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the OpenSLR High quality TTS data for Javanese.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows or using this\nnotebook\n\n\n\nTest Result: 17.61 %",
"## Training\n\nOpenSLR High quality TTS data for Javanese was used for training.\nThe script used for training can be found here \nand to evaluate it"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Sundanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [OpenSLR High quality TTS data for Sundanese](https://openslr.org/44/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from datasets.utils.download_manager import DownloadManager
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pathlib import Path
import pandas as pd
def load_dataset_sundanese():
urls = [
"https://www.openslr.org/resources/44/su_id_female.zip",
"https://www.openslr.org/resources/44/su_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"su_id_female/wavs",
Path(download_dirs[1])/"su_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"su_id_female/line_index.tsv",
Path(download_dirs[1])/"su_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t4?\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t\t', names=["path", "sentence"]))
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i]["path"].apply(lambda path: str(data_dirs[i]) + "/" + path + ".wav")
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_sundanese()
test_dataset = dataset['test']
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows or using the [notebook](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb).
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets.utils.download_manager import DownloadManager
import re
from pathlib import Path
import pandas as pd
def load_dataset_sundanese():
urls = [
"https://www.openslr.org/resources/44/su_id_female.zip",
"https://www.openslr.org/resources/44/su_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"su_id_female/wavs",
Path(download_dirs[1])/"su_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"su_id_female/line_index.tsv",
Path(download_dirs[1])/"su_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t4?\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t\t', names=["path", "sentence"]))
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i]["path"].apply(lambda path: str(data_dirs[i]) + "/" + path + ".wav")
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_sundanese()
test_dataset = dataset['test']
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”_\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 6.19 %
## Training
[OpenSLR High quality TTS data for Sundanese](https://openslr.org/44/) was used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb),
as well as the notebook used to [evaluate it](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb).
|
{"language": "su", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["openslr"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Sundanese by cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR High quality TTS data for Sundanese", "type": "OpenSLR", "args": "su"}, "metrics": [{"type": "wer", "value": 6.19, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-sundanese
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"su",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"su"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #su #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Sundanese
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the OpenSLR High quality TTS data for Sundanese.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows or using the notebook.
Test Result: 6.19 %
## Training
OpenSLR High quality TTS data for Sundanese was used for training.
The script used for training can be found here,
as well as the notebook used to evaluate it.
|
[
"# Wav2Vec2-Large-XLSR-Sundanese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the OpenSLR High quality TTS data for Sundanese.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows or using the notebook.\n\n\n\nTest Result: 6.19 %",
"## Training\n\nOpenSLR High quality TTS data for Sundanese was used for training.\nThe script used for training can be found here \nand to evaluate it"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #su #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Sundanese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the OpenSLR High quality TTS data for Sundanese.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows or using the notebook.\n\n\n\nTest Result: 6.19 %",
"## Training\n\nOpenSLR High quality TTS data for Sundanese was used for training.\nThe script used for training can be found here \nand to evaluate it"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Large-XLSR-Turkish-Artificial-CV, a fine-tuned
[cahya/wav2vec2-large-xlsr-turkish-artificial](https://huggingface.co/cahya/wav2vec2-large-xlsr-turkish-artificial)
model on the [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 14.61 %
## Training
The Common Voice `train`, `validation`, `other`, and `invalidated` datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Turkish by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 14.61, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-turkish-artificial-cv
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Large-XLSR-Turkish-Artificial-CV, a fine-tuned
cahya/wav2vec2-large-xlsr-turkish-artificial
model on the Turkish Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
Test Result: 14.61 %
## Training
The Common Voice 'train', 'validation', 'other', and 'invalidated' datasets were used for training.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Turkish\n\nThis is the model for Wav2Vec2-Large-XLSR-Turkish-Artificial-CV, a fine-tuned \ncahya/wav2vec2-large-xlsr-turkish-artificial\nmodel on Turkish Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 14.61 %",
"## Training\n\nThe Common Voice 'train', 'validation', other and invalidated \n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Turkish\n\nThis is the model for Wav2Vec2-Large-XLSR-Turkish-Artificial-CV, a fine-tuned \ncahya/wav2vec2-large-xlsr-turkish-artificial\nmodel on Turkish Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 14.61 %",
"## Training\n\nThe Common Voice 'train', 'validation', other and invalidated \n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Turkish Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 66.98 %
## Training
The Artificial Common Voice `train` and `validation` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Turkish with Artificial Voices by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 66.98, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-turkish-artificial
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the Turkish Artificial Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
Test Result: 66.98 %
## Training
The Artificial Common Voice 'train' and 'validation' splits were used to fine-tune the model.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Turkish\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Turkish Artificial Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 66.98 %",
"## Training\n\nThe Artificial Common Voice 'train', 'validation' is used to fine tune the model\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Turkish\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Turkish Artificial Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 66.98 %",
"## Training\n\nThe Artificial Common Voice 'train', 'validation' is used to fine tune the model\n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Large-XLSR-Turkish, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 21.13 %
## Training
The Common Voice `train`, `validation`, `other`, and `invalidated` datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Turkish by Cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 21.13, "name": "Test WER"}]}]}]}
|
cahya/wav2vec2-large-xlsr-turkish
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Large-XLSR-Turkish, a fine-tuned
facebook/wav2vec2-large-xlsr-53
model on the Turkish Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
Test Result: 21.13 %
## Training
The Common Voice 'train', 'validation', 'other', and 'invalidated' datasets were used for training.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Turkish\n\nThis is the model for Wav2Vec2-Large-XLSR-Turkish, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Turkish Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 21.13 %",
"## Training\n\nThe Common Voice 'train', 'validation', other and invalidated \n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Turkish\n\nThis is the model for Wav2Vec2-Large-XLSR-Turkish, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Turkish Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 21.13 %",
"## Training\n\nThe Common Voice 'train', 'validation', other and invalidated \n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Automatic Speech Recognition for Luganda
This is the model built for the
[Mozilla Luganda Automatic Speech Recognition competition](https://zindi.africa/competitions/mozilla-luganda-automatic-speech-recognition).
It is a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Luganda Common Voice dataset](https://huggingface.co/datasets/common_voice) version 7.0.
We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/luganda-asr) to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "‘", "’", "’"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
WER without KenLM: 15.38 %
WER with KenLM:
**Test Result**: 7.53 %
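
The jump from 15.38 % to 7.53 % WER comes from rescoring the CTC output with a KenLM language model, a decoding step the card itself does not show. The following is only a rough sketch of how such decoding could look with `pyctcdecode`; the KenLM file name (`lm.binary`) and the vocabulary handling are assumptions rather than the actual setup used here.

```python
# Rough, hypothetical sketch of KenLM-boosted CTC decoding with pyctcdecode.
# "lm.binary" is a placeholder for a KenLM model trained on Luganda text;
# special tokens (e.g. the word delimiter "|") may need extra handling.
import torch
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")

# Order the tokenizer vocabulary by token id so it lines up with the logit columns.
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(labels, kenlm_model_path="lm.binary")

def transcribe_with_lm(speech_16k):
    # speech_16k: 1-D float array sampled at 16 kHz
    inputs = processor(speech_16k, sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
    return decoder.decode(logits[0].cpu().numpy())
```

An equivalent route would be `Wav2Vec2ProcessorWithLM`, which bundles the same kind of decoder into the processor.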
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/luganda-asr)
|
{"language": "lg", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "common_voice", "hf-asr-leaderboard", "lg", "robust-speech-event", "speech"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Luganda by Indonesian-NLP", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice lg", "type": "common_voice", "args": "lg"}, "metrics": [{"type": "wer", "value": 9.332, "name": "Test WER"}, {"type": "cer", "value": 1.987, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "lg"}, "metrics": [{"type": "wer", "value": 13.844, "name": "Test WER"}, {"type": "cer", "value": 2.68, "name": "Test CER"}]}]}]}
|
cahya/wav2vec2-luganda
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"common_voice",
"hf-asr-leaderboard",
"lg",
"robust-speech-event",
"speech",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"lg"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #common_voice #hf-asr-leaderboard #lg #robust-speech-event #speech #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Automatic Speech Recognition for Luganda
This is the model built for the
Mozilla Luganda Automatic Speech Recognition competition.
It is a fine-tuned facebook/wav2vec2-large-xlsr-53
model on the Luganda Common Voice dataset version 7.0.
We also provide a live demo to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
WER without KenLM: 15.38 %
WER with KenLM:
Test Result: 7.53 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
|
[
"# Automatic Speech Recognition for Luganda\n\nThis is the model built for the \nMozilla Luganda Automatic Speech Recognition competition.\nIt is a fine-tuned facebook/wav2vec2-large-xlsr-53\nmodel on the Luganda Common Voice dataset version 7.0.\n\nWe also provide a live demo to test the model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nWER without KenLM: 15.38 %\n\nWER With KenLM:\n\nTest Result: 7.53 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #common_voice #hf-asr-leaderboard #lg #robust-speech-event #speech #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Automatic Speech Recognition for Luganda\n\nThis is the model built for the \nMozilla Luganda Automatic Speech Recognition competition.\nIt is a fine-tuned facebook/wav2vec2-large-xlsr-53\nmodel on the Luganda Common Voice dataset version 7.0.\n\nWe also provide a live demo to test the model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nWER without KenLM: 15.38 %\n\nWER With KenLM:\n\nTest Result: 7.53 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 135.4675
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.10.3
|
{"language": ["ab"], "tags": ["ab", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "", "results": []}]}
|
cahya/xls-r-ab-test
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ab"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #ab #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #endpoints_compatible #region-us
|
#
This model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 135.4675
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.10.3
|
[
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 135.4675\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 100",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #ab #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #endpoints_compatible #region-us \n",
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 135.4675\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 100",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-md
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
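For readers reproducing a comparable setup, the list above maps onto `TrainingArguments` roughly as follows (a sketch under the assumption of a standard `Trainer` run, not the original training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-md",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # The Adam betas/epsilon below are the library defaults reported in the card
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```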
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2415 | 1.0 | 1044 | 0.2084 |
| 0.1244 | 2.0 | 2088 | 0.2903 |
| 0.0427 | 3.0 | 3132 | 0.3329 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-md", "results": []}]}
|
caioamb/bert-base-uncased-finetuned-md
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-md
==============================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3329
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7647
- Matthews Correlation: 0.5167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5294 | 1.0 | 535 | 0.5029 | 0.4356 |
| 0.3507 | 2.0 | 1070 | 0.5285 | 0.4884 |
| 0.2406 | 3.0 | 1605 | 0.6550 | 0.5138 |
| 0.1825 | 4.0 | 2140 | 0.7647 | 0.5167 |
| 0.1282 | 5.0 | 2675 | 0.8664 | 0.5074 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5166623535745778, "name": "Matthews Correlation"}]}]}]}
|
caioamb/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7647
* Matthews Correlation: 0.5167
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitexts
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned-wikitexts", "results": []}]}
|
calebcsjm/distilgpt2-finetuned-wikitexts
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
distilgpt2-finetuned-wikitexts
==============================
This model is a fine-tuned version of distilgpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.6424
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-vi-finetuned-eng-to-vie
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 219 | 0.3771 | 73.2405 | 8.274 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "opus-mt-en-vi-finetuned-eng-to-vie", "results": []}]}
|
callmeJ/opus-mt-en-vi-finetuned-eng-to-vie
| null |
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
opus-mt-en-vi-finetuned-eng-to-vie
==================================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-vi on an unknown dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
feature-extraction
|
transformers
|
# BioRedditBERT
## Model description
BioRedditBERT is a BERT model initialised from BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) and further pre-trained on health-related Reddit posts. Please view our paper [COMETA: A Corpus for Medical Entity Linking in the Social Media](https://arxiv.org/pdf/2010.03295.pdf) (EMNLP 2020) for more details.
## Training data
We crawled all threads from 68 health-themed subreddits such as `r/AskDocs` and `r/health`, starting from the beginning of 2015 to the end of 2018, obtaining a collection of more than
800K discussions. This collection was then pruned by removing deleted posts, comments from bots or moderators, and so on. In the end, we obtained the training corpus with ca. 300 million tokens and a vocabulary
size of ca. 780,000 words.
## Training procedure
We use the same pre-training script in the original [google-research/bert](https://github.com/google-research/bert) repo. The model is initialised with [`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`](https://github.com/dmis-lab/biobert).
We train with a batch size of 64, a max sequence length of 64, a learning rate of `2e-5` for 100k steps on two GeForce GTX 1080Ti (11 GB) GPUs. Other hyper-parameters are the same as default.
## Eval results
To show the benefit from further pre-training on the social media domain, we demonstrate results on a medical entity linking dataset also in the social media: [AskAPatient](https://zenodo.org/record/55013#.X4ncRmTYpb8) [(Limsopatham and Collier 2016)](https://www.aclweb.org/anthology/P16-1096.pdf).
We follow the same 10-fold cross-validation procedure for all models and report the average result without fine-tuning. `[CLS]` is used as representations for entity mentions (we also tried average of all tokens but found `[CLS]` generally performs better).
Model | Accuracy@1 | Accuracy@5
-------|---------|---------
[BERT-base-uncased](https://huggingface.co/bert-base-uncased) | 38.2 | 43.3
[BioBERT v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) | 41.4 | 51.5
[ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) | 43.9 | 54.3
[BlueBERT](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/NCBI_BERT_pubmed_mimic_uncased_L-12_H-768_A-12.zip) | 41.5 | 48.5
[SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased) | 42.3 | 51.9
[PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) | 42.5 | 49.6
BioRedditBERT | **44.3** | **56.2**
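For illustration, a minimal sketch of extracting the `[CLS]` mention representation evaluated above (assumed Hugging Face usage; the mention strings are placeholders):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/BioRedditBERT-uncased")
model = AutoModel.from_pretrained("cambridgeltl/BioRedditBERT-uncased")

mentions = ["head spinning a little", "acid reflux"]  # placeholder mentions
inputs = tokenizer(mentions, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] token of the last layer as the mention embedding
cls_embeddings = outputs.last_hidden_state[:, 0, :]
print(cls_embeddings.shape)  # (2, 768)
```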
### BibTeX entry and citation info
```bibtex
@inproceedings{basaldella-2020-cometa,
title = "{COMETA}: A Corpus for Medical Entity Linking in the Social Media",
author = "Basaldella, Marco and Liu, Fangyu, and Shareghi, Ehsan, and Collier, Nigel",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2020",
publisher = "Association for Computational Linguistics"
}
```
|
{"language": ["en"], "tags": ["BioNLP", "social_media"]}
|
cambridgeltl/BioRedditBERT-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"BioNLP",
"social_media",
"en",
"arxiv:2010.03295",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.03295"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #feature-extraction #BioNLP #social_media #en #arxiv-2010.03295 #endpoints_compatible #has_space #region-us
|
BioRedditBERT
=============
Model description
-----------------
BioRedditBERT is a BERT model initialised from BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K') and further pre-trained on health-related Reddit posts. Please view our paper COMETA: A Corpus for Medical Entity Linking in the Social Media (EMNLP 2020) for more details.
Training data
-------------
We crawled all threads from 68 health-themed subreddits such as 'r/AskDocs' and 'r/health', starting from the beginning of 2015 to the end of 2018, obtaining a collection of more than
800K discussions. This collection was then pruned by removing deleted posts, comments from bots or moderators, and so on. In the end, we obtained the training corpus with ca. 300 million tokens and a vocabulary
size of ca. 780,000 words.
Training procedure
------------------
We use the same pre-training script in the original google-research/bert repo. The model is initialised with 'BioBERT-Base v1.0 + PubMed 200K + PMC 270K'.
We train with a batch size of 64, a max sequence length of 64, a learning rate of '2e-5' for 100k steps on two GeForce GTX 1080Ti (11 GB) GPUs. Other hyper-parameters are the same as default.
Eval results
------------
To show the benefit from further pre-training on the social media domain, we demonstrate results on a medical entity linking dataset also in the social media: AskAPatient (Limsopatham and Collier 2016).
We follow the same 10-fold cross-validation procedure for all models and report the average result without fine-tuning. '[CLS]' is used as representations for entity mentions (we also tried average of all tokens but found '[CLS]' generally performs better).
Model: BERT-base-uncased, Accuracy@1: 38.2, Accuracy@5: 43.3
Model: BioBERT v1.1, Accuracy@1: 41.4, Accuracy@5: 51.5
Model: ClinicalBERT, Accuracy@1: 43.9, Accuracy@5: 54.3
Model: BlueBERT, Accuracy@1: 41.5, Accuracy@5: 48.5
Model: SciBERT, Accuracy@1: 42.3, Accuracy@5: 51.9
Model: PubMedBERT, Accuracy@1: 42.5, Accuracy@5: 49.6
Model: BioRedditBERT, Accuracy@1: 44.3, Accuracy@5: 56.2
### BibTeX entry and citation info
|
[
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #feature-extraction #BioNLP #social_media #en #arxiv-2010.03295 #endpoints_compatible #has_space #region-us \n",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
---
language: multilingual
tags:
- biomedical
- lexical-semantics
- cross-lingual
datasets:
- UMLS
**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### SapBERT-XLMR
SapBERT [(Liu et al. 2021)](https://arxiv.org/pdf/2010.11784.pdf) trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AB, using [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) as the base model. Please use [CLS] as the representation of the input.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext").cuda()
# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]
bs = 128 # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
padding="max_length",
max_length=25,
truncation=True,
return_tensors="pt")
toks_cuda = {}
for k,v in toks.items():
toks_cuda[k] = v.cuda()
cls_rep = model(**toks_cuda)[0][:,0,:] # use CLS representation as the embedding
all_embs.append(cls_rep.cpu().detach().numpy())
all_embs = np.concatenate(all_embs, axis=0)
```
For more details about training and eval, see SapBERT [github repo](https://github.com/cambridgeltl/sapbert).
### Citation
```bibtex
@inproceedings{liu2021learning,
title={Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={Proceedings of ACL-IJCNLP 2021},
month = aug,
year={2021}
}
```
|
{}
|
cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"arxiv:2010.11784",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.11784"
] |
[] |
TAGS
#transformers #pytorch #xlm-roberta #feature-extraction #arxiv-2010.11784 #endpoints_compatible #region-us
|
---
language: multilingual
tags:
- biomedical
- lexical-semantics
- cross-lingual
datasets:
- UMLS
[news] A cross-lingual extension of SapBERT will appear in the main conference of ACL 2021! <br>
[news] SapBERT will appear in the conference proceedings of NAACL 2021!
### SapBERT-XLMR
SapBERT (Liu et al. 2021) trained with UMLS 2020AB, using xlm-roberta-large as the base model. Please use [CLS] as the representation of the input.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
For more details about training and eval, see SapBERT github repo.
|
[
"### SapBERT-XLMR\nSapBERT (Liu et al. 2021) trained with UMLS 2020AB, using xlm-roberta-large as the base model. Please use [CLS] as the representation of the input.",
"#### Extracting embeddings from SapBERT\n\nThe following script converts a list of strings (entity names) into embeddings.\n\n\nFor more details about training and eval, see SapBERT github repo."
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #feature-extraction #arxiv-2010.11784 #endpoints_compatible #region-us \n",
"### SapBERT-XLMR\nSapBERT (Liu et al. 2021) trained with UMLS 2020AB, using xlm-roberta-large as the base model. Please use [CLS] as the representation of the input.",
"#### Extracting embeddings from SapBERT\n\nThe following script converts a list of strings (entity names) into embeddings.\n\n\nFor more details about training and eval, see SapBERT github repo."
] |
feature-extraction
|
transformers
|
---
language: multilingual
tags:
- biomedical
- lexical-semantics
- cross-lingual
datasets:
- UMLS
**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### SapBERT-XLMR
SapBERT [(Liu et al. 2020)](https://arxiv.org/pdf/2010.11784.pdf) trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AB, using [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) as the base model. Please use [CLS] as the representation of the input.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext").cuda()
# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]
bs = 128 # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
padding="max_length",
max_length=25,
truncation=True,
return_tensors="pt")
toks_cuda = {}
for k,v in toks.items():
toks_cuda[k] = v.cuda()
cls_rep = model(**toks_cuda)[0][:,0,:] # use CLS representation as the embedding
all_embs.append(cls_rep.cpu().detach().numpy())
all_embs = np.concatenate(all_embs, axis=0)
```
For more details about training and eval, see SapBERT [github repo](https://github.com/cambridgeltl/sapbert).
### Citation
```bibtex
@inproceedings{liu2021learning,
title={Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={Proceedings of ACL-IJCNLP 2021},
month = aug,
year={2021}
}
```
|
{}
|
cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:2010.11784",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.11784"
] |
[] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #feature-extraction #arxiv-2010.11784 #endpoints_compatible #region-us
|
---
language: multilingual
tags:
- biomedical
- lexical-semantics
- cross-lingual
datasets:
- UMLS
[news] A cross-lingual extension of SapBERT will appear in the main conference of ACL 2021! <br>
[news] SapBERT will appear in the conference proceedings of NAACL 2021!
### SapBERT-XLMR
SapBERT (Liu et al. 2020) trained with UMLS 2020AB, using xlm-roberta-base as the base model. Please use [CLS] as the representation of the input.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
For more details about training and eval, see SapBERT github repo.
|
[
"### SapBERT-XLMR\nSapBERT (Liu et al. 2020) trained with UMLS 2020AB, using xlm-roberta-base as the base model. Please use [CLS] as the representation of the input.",
"#### Extracting embeddings from SapBERT\n\nThe following script converts a list of strings (entity names) into embeddings.\n\n\nFor more details about training and eval, see SapBERT github repo."
] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #feature-extraction #arxiv-2010.11784 #endpoints_compatible #region-us \n",
"### SapBERT-XLMR\nSapBERT (Liu et al. 2020) trained with UMLS 2020AB, using xlm-roberta-base as the base model. Please use [CLS] as the representation of the input.",
"#### Extracting embeddings from SapBERT\n\nThe following script converts a list of strings (entity names) into embeddings.\n\n\nFor more details about training and eval, see SapBERT github repo."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- biomedical
- lexical-semantics
datasets:
- UMLS
**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### SapBERT-PubMedBERT
SapBERT by [Liu et al. (2020)](https://arxiv.org/pdf/2010.11784.pdf). Trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AA (English only), using [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) as the base model. Please use the mean-pooling of the output as the representation.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token").cuda()
# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]
bs = 128 # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
padding="max_length",
max_length=25,
truncation=True,
return_tensors="pt")
toks_cuda = {}
for k,v in toks.items():
toks_cuda[k] = v.cuda()
cls_rep = model(**toks_cuda)[0].mean(1)# use mean pooling representation as the embedding
all_embs.append(cls_rep.cpu().detach().numpy())
all_embs = np.concatenate(all_embs, axis=0)
```
For more details about training and eval, see SapBERT [github repo](https://github.com/cambridgeltl/sapbert).
### Citation
```bibtex
@inproceedings{liu-etal-2021-self,
title = "Self-Alignment Pretraining for Biomedical Entity Representations",
author = "Liu, Fangyu and
Shareghi, Ehsan and
Meng, Zaiqiao and
Basaldella, Marco and
Collier, Nigel",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
pages = "4228--4238",
abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERTand and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
```
|
{}
|
cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"feature-extraction",
"arxiv:2010.11784",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.11784"
] |
[] |
TAGS
#transformers #pytorch #jax #safetensors #bert #feature-extraction #arxiv-2010.11784 #endpoints_compatible #has_space #region-us
|
---
language: en
tags:
- biomedical
- lexical-semantics
datasets:
- UMLS
[news] A cross-lingual extension of SapBERT will appear in the main conference of ACL 2021! <br>
[news] SapBERT will appear in the conference proceedings of NAACL 2021!
### SapBERT-PubMedBERT
SapBERT by Liu et al. (2020). Trained with UMLS 2020AA (English only), using microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext as the base model. Please use the mean-pooling of the output as the representation.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
For more details about training and eval, see SapBERT github repo.
|
[
"### SapBERT-PubMedBERT\nSapBERT by Liu et al. (2020). Trained with UMLS 2020AA (English only), using microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext as the base model. Please use the mean-pooling of the output as the representation.",
"#### Extracting embeddings from SapBERT\n\nThe following script converts a list of strings (entity names) into embeddings.\n\n\nFor more details about training and eval, see SapBERT github repo."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #feature-extraction #arxiv-2010.11784 #endpoints_compatible #has_space #region-us \n",
"### SapBERT-PubMedBERT\nSapBERT by Liu et al. (2020). Trained with UMLS 2020AA (English only), using microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext as the base model. Please use the mean-pooling of the output as the representation.",
"#### Extracting embeddings from SapBERT\n\nThe following script converts a list of strings (entity names) into embeddings.\n\n\nFor more details about training and eval, see SapBERT github repo."
] |
feature-extraction
|
transformers
|
---
datasets:
- UMLS
**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### SapBERT-PubMedBERT
SapBERT by [Liu et al. (2020)](https://arxiv.org/pdf/2010.11784.pdf). Trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AA (English only), using [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) as the base model.
### Expected input and output
The input should be a string of biomedical entity names, e.g., "covid infection" or "Hydroxychloroquine". The [CLS] embedding of the last layer is regarded as the output.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext").cuda()
# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]
bs = 128 # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
padding="max_length",
max_length=25,
truncation=True,
return_tensors="pt")
toks_cuda = {}
for k,v in toks.items():
toks_cuda[k] = v.cuda()
cls_rep = model(**toks_cuda)[0][:,0,:] # use CLS representation as the embedding
all_embs.append(cls_rep.cpu().detach().numpy())
all_embs = np.concatenate(all_embs, axis=0)
```
For more details about training and eval, see SapBERT [github repo](https://github.com/cambridgeltl/sapbert).
### Citation
```bibtex
@inproceedings{liu-etal-2021-self,
title = "Self-Alignment Pretraining for Biomedical Entity Representations",
author = "Liu, Fangyu and
Shareghi, Ehsan and
Meng, Zaiqiao and
Basaldella, Marco and
Collier, Nigel",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
pages = "4228--4238",
abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERTand and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["biomedical", "lexical semantics", "bionlp", "biology", "science", "embedding", "entity linking"]}
|
cambridgeltl/SapBERT-from-PubMedBERT-fulltext
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"biomedical",
"lexical semantics",
"bionlp",
"biology",
"science",
"embedding",
"entity linking",
"en",
"arxiv:2010.11784",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.11784"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #feature-extraction #biomedical #lexical semantics #bionlp #biology #science #embedding #entity linking #en #arxiv-2010.11784 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
---
datasets:
- UMLS
[news] A cross-lingual extension of SapBERT will appear in the main conference of ACL 2021! <br>
[news] SapBERT will appear in the conference proceedings of NAACL 2021!
### SapBERT-PubMedBERT
SapBERT by Liu et al. (2020). Trained with UMLS 2020AA (English only), using microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext as the base model.
### Expected input and output
The input should be a string of biomedical entity names, e.g., "covid infection" or "Hydroxychloroquine". The [CLS] embedding of the last layer is regarded as the output.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
For more details about training and eval, see SapBERT github repo.
|
[
"### SapBERT-PubMedBERT\nSapBERT by Liu et al. (2020). Trained with UMLS 2020AA (English only), using microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext as the base model.",
"### Expected input and output\nThe input should be a string of biomedical entity names, e.g., \"covid infection\" or \"Hydroxychloroquine\". The [CLS] embedding of the last layer is regarded as the output.",
"#### Extracting embeddings from SapBERT\n\nThe following script converts a list of strings (entity names) into embeddings.\n\n\nFor more details about training and eval, see SapBERT github repo."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #feature-extraction #biomedical #lexical semantics #bionlp #biology #science #embedding #entity linking #en #arxiv-2010.11784 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### SapBERT-PubMedBERT\nSapBERT by Liu et al. (2020). Trained with UMLS 2020AA (English only), using microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext as the base model.",
"### Expected input and output\nThe input should be a string of biomedical entity names, e.g., \"covid infection\" or \"Hydroxychloroquine\". The [CLS] embedding of the last layer is regarded as the output.",
"#### Extracting embeddings from SapBERT\n\nThe following script converts a list of strings (entity names) into embeddings.\n\n\nFor more details about training and eval, see SapBERT github repo."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
### cambridgeltl/mirror-bert-base-uncased-sentence-drophead
An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf), using [drophead](https://aclanthology.org/2020.findings-emnlp.178.pdf) instead of dropout as feature space augmentation. Trained with unlabelled raw sentences, using [bert-base-uncased](https://huggingface.co/bert-base-uncased) as the base model. Please use mean-pooling over *all tokens* as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
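A minimal sketch of the recommended mean-pooling (an illustration: it takes an attention-mask-weighted mean over the token embeddings; see the project repo for the exact pooling used in the paper):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/mirror-bert-base-uncased-sentence-drophead")
model = AutoModel.from_pretrained("cambridgeltl/mirror-bert-base-uncased-sentence-drophead")

sentences = ["A cat sits on the mat.", "There is a cat on the mat."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (batch, seq_len, dim)

mask = inputs.attention_mask.unsqueeze(-1)            # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)     # mean over non-padded tokens

print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```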
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
{}
|
cambridgeltl/mirror-bert-base-uncased-sentence-drophead
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"arxiv:2104.08027",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08027"
] |
[] |
TAGS
#transformers #pytorch #safetensors #bert #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
### cambridgeltl/mirror-bert-base-uncased-sentence-drophead
An unsupervised sentence encoder proposed by Liu et al. (2021), using drophead instead of dropout as feature space augmentation. Trained with unlabelled raw sentences, using bert-base-uncased as the base model. Please use mean-pooling over *all tokens* as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
|
[
"### cambridgeltl/mirror-bert-base-uncased-sentence-drophead\nAn unsupervised sentence encoder proposed by Liu et al. (2021), using drophead instead of dropout as feature space augmentation. Trained with unlabelled raw sentences, using bert-base-uncased as the base model. Please use mean-pooling over *all tokens* as the representation of the input.\n\nNote the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us \n",
"### cambridgeltl/mirror-bert-base-uncased-sentence-drophead\nAn unsupervised sentence encoder proposed by Liu et al. (2021), using drophead instead of dropout as feature space augmentation. Trained with unlabelled raw sentences, using bert-base-uncased as the base model. Please use mean-pooling over *all tokens* as the representation of the input.\n\nNote the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
### cambridgeltl/mirror-bert-base-uncased-sentence
An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf). Trained with unlabelled raw sentences, using [bert-base-uncased](https://huggingface.co/bert-base-uncased) as the base model. Please use mean-pooling over *all tokens* (including padded ones) as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
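A minimal sketch of the pooling described above, averaging over every position, padded ones included (illustrative usage, not the authors' exact code):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/mirror-bert-base-uncased-sentence")
model = AutoModel.from_pretrained("cambridgeltl/mirror-bert-base-uncased-sentence")

sentences = ["A cat sits on the mat.", "There is a cat on the mat."]
inputs = tokenizer(sentences, padding="max_length", max_length=32,
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (batch, seq_len, dim)

# Plain mean over all positions, including padding, as stated in the card
embeddings = hidden.mean(dim=1)
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```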
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
{}
|
cambridgeltl/mirror-bert-base-uncased-sentence
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08027",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08027"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
### cambridgeltl/mirror-bert-base-uncased-sentence
An unsupervised sentence encoder proposed by Liu et al. (2021). Trained with unlabelled raw sentences, using bert-base-uncased as the base model. Please use mean-pooling over *all tokens* (including padded ones) as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
|
[
"### cambridgeltl/mirror-bert-base-uncased-sentence\nAn unsupervised sentence encoder proposed by Liu et al. (2021). Trained with unlabelled raw sentences, using bert-base-uncased as the base model. Please use mean-pooling over *all tokens* (including padded ones) as the representation of the input.\n\nNote the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs."
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us \n",
"### cambridgeltl/mirror-bert-base-uncased-sentence\nAn unsupervised sentence encoder proposed by Liu et al. (2021). Trained with unlabelled raw sentences, using bert-base-uncased as the base model. Please use mean-pooling over *all tokens* (including padded ones) as the representation of the input.\n\nNote the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- word-embeddings
- word-similarity
### mirror-bert-base-uncased-word
An unsupervised word encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf). Trained with a set of unlabelled words, using [bert-base-uncased](https://huggingface.co/bert-base-uncased) as the base model. Please use `[CLS]` as the representation of the input.
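A minimal sketch of encoding words with the `[CLS]` representation (illustrative usage; the example words are placeholders):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/mirror-bert-base-uncased-word")
model = AutoModel.from_pretrained("cambridgeltl/mirror-bert-base-uncased-word")

words = ["happy", "joyful", "angry"]
inputs = tokenizer(words, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] of the last layer as the word embedding
word_embeddings = outputs.last_hidden_state[:, 0, :]
print(torch.nn.functional.cosine_similarity(word_embeddings[0], word_embeddings[1], dim=0))
```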
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
{}
|
cambridgeltl/mirror-bert-base-uncased-word
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"arxiv:2104.08027",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08027"
] |
[] |
TAGS
#transformers #pytorch #safetensors #bert #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us
|
---
language: en
tags:
- word-embeddings
- word-similarity
### mirror-bert-base-uncased-word
An unsupervised word encoder proposed by Liu et al. (2021). Trained with a set of unlabelled words, using bert-base-uncased as the base model. Please use '[CLS]' as the representation of the input.
|
[
"### mirror-bert-base-uncased-word\nAn unsupervised word encoder proposed by Liu et al. (2021). Trained with a set of unlabelled words, using bert-base-uncased as the base model. Please use '[CLS]' as the representation of the input."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us \n",
"### mirror-bert-base-uncased-word\nAn unsupervised word encoder proposed by Liu et al. (2021). Trained with a set of unlabelled words, using bert-base-uncased as the base model. Please use '[CLS]' as the representation of the input."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
### cambridgeltl/mirror-roberta-base-sentence-drophead
An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf), using [drophead](https://aclanthology.org/2020.findings-emnlp.178.pdf) instead of dropout as feature space augmentation. The model is trained with unlabelled raw sentences, using [roberta-base](https://huggingface.co/roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
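A minimal sketch of taking `[CLS]` before the pooler, i.e. the first position of `last_hidden_state` rather than `pooler_output` (illustrative usage, not the authors' exact code):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/mirror-roberta-base-sentence-drophead")
model = AutoModel.from_pretrained("cambridgeltl/mirror-roberta-base-sentence-drophead")

sentences = ["A cat sits on the mat.", "There is a cat on the mat."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# First position of the last hidden layer (before the pooler head)
embeddings = outputs.last_hidden_state[:, 0, :]
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```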
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
{}
|
cambridgeltl/mirror-roberta-base-sentence-drophead
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:2104.08027",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08027"
] |
[] |
TAGS
#transformers #pytorch #safetensors #roberta #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
### cambridgeltl/mirror-roberta-base-sentence-drophead
An unsupervised sentence encoder proposed by Liu et al. (2021), using drophead instead of dropout as feature space augmentation. The model is trained with unlabelled raw sentences, using roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
|
[
"### cambridgeltl/mirror-roberta-base-sentence-drophead\nAn unsupervised sentence encoder proposed by Liu et al. (2021), using drophead instead of dropout as feature space augmentation. The model is trained with unlabelled raw sentences, using roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input.\n\nNote the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs."
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us \n",
"### cambridgeltl/mirror-roberta-base-sentence-drophead\nAn unsupervised sentence encoder proposed by Liu et al. (2021), using drophead instead of dropout as feature space augmentation. The model is trained with unlabelled raw sentences, using roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input.\n\nNote the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
### cambridgeltl/mirror-roberta-base-sentence
An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf). The model is trained with unlabelled raw sentences, using [roberta-base](https://huggingface.co/roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
Note that the model does not replicate the exact numbers reported in the paper, since those numbers are the average of three runs.
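A brief usage sketch (assumptions: the standard `transformers` auto classes; the input sentence is illustrative) showing how to obtain the `[CLS]`-before-pooler embedding:
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "cambridgeltl/mirror-roberta-base-sentence"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

inputs = tokenizer("Contrastive learning sharpens sentence embeddings.", return_tensors="pt")
with torch.no_grad():
    cls_embedding = model(**inputs).last_hidden_state[:, 0, :]  # [CLS] before the pooler
print(cls_embedding.shape)  # torch.Size([1, 768]) for a roberta-base encoder
```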
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
{}
|
cambridgeltl/mirror-roberta-base-sentence
| null |
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2104.08027",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08027"
] |
[] |
TAGS
#transformers #pytorch #roberta #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
### cambridgeltl/mirror-roberta-base-sentence
An unsupervised sentence encoder proposed by Liu et al. (2021). The model is trained with unlabelled raw sentences, using roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
|
[
"### cambridgeltl/mirror-roberta-base-sentence\nAn unsupervised sentence encoder proposed by Liu et al. (2021). The model is trained with unlabelled raw sentences, using roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input.\n\nNote the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs."
] |
[
"TAGS\n#transformers #pytorch #roberta #feature-extraction #arxiv-2104.08027 #endpoints_compatible #region-us \n",
"### cambridgeltl/mirror-roberta-base-sentence\nAn unsupervised sentence encoder proposed by Liu et al. (2021). The model is trained with unlabelled raw sentences, using roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input.\n\nNote the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs."
] |
text-generation
|
transformers
|
This model provides a GPT-2 language model trained with SimCTG on the English Wikipedia based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417).
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
```yaml
pip install simctg --upgrade
```
## 2. Initialize SimCTG Model:
```python
import torch
# load SimCTG language model
from simctg.simctggpt import SimCTGGPT
model_name = r'cambridgeltl/simctg_english_wikipedia'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
```
## 3. Prepare the Text Prefix:
```python
prefix_text = r"Insect farming is the practice of raising and breeding insects as livestock, also referred to as minilivestock or micro stock. Insects may be farmed for the commodities"
print ('Prefix is: {}'.format(prefix_text))
tokens = tokenizer.tokenize(prefix_text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
## 4. Generate Text with Contrastive Search:
```python
beam_width, alpha, decoding_len = 5, 0.6, 128
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width,
alpha=alpha, decoding_len=decoding_len)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output))
'''
Prefix is: Insect farming is the practice of raising and breeding insects as livestock, also referred to as minilivestock or
micro stock. Insects may be farmed for the commodities
Output:
----------------------------------------------------------------------------------------------------
Insect farming is the practice of raising and breeding insects as livestock, also referred to as minilivestock or micro stock.
Insects may be farmed for the commodities they produce, such as honey, corn, sorghum, and other crops. In some cases, the
production of insects is a way to increase income for the owner or his family. This type of farming has been described as "an
economic system that benefits all people regardless of race, sex, or social status" (p. 9). A large number of farmers in North
America, Europe, and South America have used the method of farming for food production in order to feed their families and livestock.
The most common method of farming is by hand-cropping, which consists of cutting a hole in the ground and using a saw
'''
```
For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG).
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
```bibtex
@article{su2022contrastive,
title={A Contrastive Framework for Neural Text Generation},
author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel},
journal={arXiv preprint arXiv:2202.06417},
year={2022}
}
```
|
{}
|
cambridgeltl/simctg_english_wikipedia
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2202.06417",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.06417"
] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #arxiv-2202.06417 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This model provides a GPT-2 language model trained with SimCTG on the English Wikipedia based on our paper _A Contrastive Framework for Neural Text Generation_.
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our project repo. In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
## 2. Initialize SimCTG Model:
## 3. Prepare the Text Prefix:
## 4. Generate Text with Contrastive Search:
For more details of our work, please refer to our main project repo.
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
|
[
"## 1. Installation of SimCTG:",
"## 2. Initialize SimCTG Model:",
"## 3. Prepare the Text Prefix:",
"## 4. Generate Text with Contrastive Search:\n\n\nFor more details of our work, please refer to our main project repo.",
"## 5. Citation:\nIf you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #arxiv-2202.06417 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## 1. Installation of SimCTG:",
"## 2. Initialize SimCTG Model:",
"## 3. Prepare the Text Prefix:",
"## 4. Generate Text with Contrastive Search:\n\n\nFor more details of our work, please refer to our main project repo.",
"## 5. Citation:\nIf you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!"
] |
text-generation
|
transformers
|
This model provides a Chinese GPT-2 language model trained with SimCTG on the LCCC benchmark [(Wang et al., 2020)](https://arxiv.org/pdf/2008.03946v2.pdf) based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417).
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
```yaml
pip install simctg --upgrade
```
## 2. Initialize SimCTG Model:
```python
import torch
# load SimCTG language model
from simctg.simctggpt import SimCTGGPT
model_name = r'cambridgeltl/simctg_lccc_dialogue'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
eos_token = '[SEP]'
eos_token_id = tokenizer.convert_tokens_to_ids([eos_token])[0]
```
## 3. Prepare the Text Prefix:
```python
context_list = ['刺猬很可爱!以前别人送了只没养,味儿太大!', '是很可爱但是非常臭', '是啊,没办法养', '那个怎么养哦不会扎手吗']
prefix_text = eos_token.join(context_list).strip(eos_token) + eos_token
print ('Prefix is: {}'.format(prefix_text))
tokens = tokenizer.tokenize(prefix_text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
## 4. Generate Text with Contrastive Search:
```python
beam_width, alpha, decoding_len = 5, 0.6, 64
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width, alpha=alpha,
decoding_len=decoding_len, end_of_sequence_token_id=eos_token_id,
early_stop=True)
print("Output:\n" + 100 * '-')
print(''.join(tokenizer.decode(output)))
'''
Prefix is: 刺猬很可爱!以前别人送了只没养,味儿太大![SEP]是很可爱但是非常臭[SEP]是啊,没办法养[SEP]那个怎么养哦不会扎手吗[SEP]
Output:
----------------------------------------------------------------------------------------------------
刺猬很可爱!以前别人送了只没养,味儿太大![SEP]是很可爱但是非常臭[SEP]是啊,没办法养[SEP]那个怎么养哦不会扎手吗[SEP]我觉得还好,就是有点臭
'''
```
For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG).
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
```bibtex
@article{su2022contrastive,
title={A Contrastive Framework for Neural Text Generation},
author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel},
journal={arXiv preprint arXiv:2202.06417},
year={2022}
}
```
|
{}
|
cambridgeltl/simctg_lccc_dialogue
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2008.03946",
"arxiv:2202.06417",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2008.03946",
"2202.06417"
] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #arxiv-2008.03946 #arxiv-2202.06417 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This model provides a Chinese GPT-2 language model trained with SimCTG on the LCCC benchmark (Wang et al., 2020) based on our paper _A Contrastive Framework for Neural Text Generation_.
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our project repo. In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
## 2. Initialize SimCTG Model:
## 3. Prepare the Text Prefix:
## 4. Generate Text with Contrastive Search:
For more details of our work, please refer to our main project repo.
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
|
[
"## 1. Installation of SimCTG:",
"## 2. Initialize SimCTG Model:",
"## 3. Prepare the Text Prefix:",
"## 4. Generate Text with Contrastive Search:\n\n\nFor more details of our work, please refer to our main project repo.",
"## 5. Citation:\nIf you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #arxiv-2008.03946 #arxiv-2202.06417 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## 1. Installation of SimCTG:",
"## 2. Initialize SimCTG Model:",
"## 3. Prepare the Text Prefix:",
"## 4. Generate Text with Contrastive Search:\n\n\nFor more details of our work, please refer to our main project repo.",
"## 5. Citation:\nIf you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!"
] |
text-generation
|
transformers
|
This model provides a GPT-2 language model trained with SimCTG on the Wikitext-103 benchmark [(Merity et al., 2016)](https://arxiv.org/abs/1609.07843) based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417).
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
```yaml
pip install simctg --upgrade
```
## 2. Initialize SimCTG Model:
```python
import torch
# load SimCTG language model
from simctg.simctggpt import SimCTGGPT
model_name = r'cambridgeltl/simctg_wikitext103'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
```
## 3. Prepare the Text Prefix:
```python
prefix_text = r"Butt criticized Donald 's controls in certain situations in the game , as well as the difficulty of some levels and puzzles .
Buchanan also criticized the controls , calling"
print ('Prefix is: {}'.format(prefix_text))
tokens = tokenizer.tokenize(prefix_text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
## 4. Generate Text with Contrastive Search:
```python
beam_width, alpha, decoding_len = 8, 0.6, 128
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width,
alpha=alpha, decoding_len=decoding_len)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output))
'''
Prefix is: Butt criticized Donald 's controls in certain situations in the game , as well as the difficulty of some levels and puzzles .
Buchanan also criticized the controls , calling
Output:
----------------------------------------------------------------------------------------------------
Butt criticized Donald's controls in certain situations in the game, as well as the difficulty of some levels and puzzles. Buchanan also
criticized the controls, calling them " unimpressive " and a " nightmare " of an experience to play with players unfamiliar with Tetris.
On the other hand, his opinion was shared by other reviewers, and some were critical of the game's technical design for the Wii version
of Tetris. In addition, Tintin's review included a quote from Roger Ebert, who said that Tetris was better than the original game due to
its simplicity and ease of play. Ebert's comments were included in the game's DVD commentary, released on March 22, 2010. It is unclear
if any of the video commentary was taken from the DVD
'''
```
For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG).
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
```bibtex
@article{su2022contrastive,
title={A Contrastive Framework for Neural Text Generation},
author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel},
journal={arXiv preprint arXiv:2202.06417},
year={2022}
}
```
|
{}
|
cambridgeltl/simctg_wikitext103
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:1609.07843",
"arxiv:2202.06417",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1609.07843",
"2202.06417"
] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #arxiv-1609.07843 #arxiv-2202.06417 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
This model provides a GPT-2 language model trained with SimCTG on the Wikitext-103 benchmark (Merity et al., 2016) based on our paper _A Contrastive Framework for Neural Text Generation_.
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our project repo. In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
## 2. Initialize SimCTG Model:
## 3. Prepare the Text Prefix:
## 4. Generate Text with Contrastive Search:
For more details of our work, please refer to our main project repo.
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
|
[
"## 1. Installation of SimCTG:",
"## 2. Initialize SimCTG Model:",
"## 3. Prepare the Text Prefix:",
"## 4. Generate Text with Contrastive Search:\n\n\nFor more details of our work, please refer to our main project repo.",
"## 5. Citation:\nIf you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #arxiv-1609.07843 #arxiv-2202.06417 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## 1. Installation of SimCTG:",
"## 2. Initialize SimCTG Model:",
"## 3. Prepare the Text Prefix:",
"## 4. Generate Text with Contrastive Search:\n\n\nFor more details of our work, please refer to our main project repo.",
"## 5. Citation:\nIf you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!"
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
### cambridgeltl/trans-encoder-bi-simcse-bert-base
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-bert-base-uncased](https://huggingface.co/princeton-nlp/unsup-simcse-bert-base-uncased) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
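As a bi-encoder, the model embeds each sentence independently, and a pair is scored by the cosine similarity of the two `[CLS]`-before-pooler vectors. A minimal sketch, not from the original card (the sentence pair is illustrative):
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "cambridgeltl/trans-encoder-bi-simcse-bert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

pair = ["A man is playing a guitar.", "Someone is strumming a guitar."]  # illustrative pair
inputs = tokenizer(pair, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    cls = model(**inputs).last_hidden_state[:, 0, :]  # [CLS] before the pooler, one row per sentence
score = torch.nn.functional.cosine_similarity(cls[0], cls[1], dim=0)
print(score.item())
```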
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
{}
|
cambridgeltl/trans-encoder-bi-simcse-bert-base
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2109.13059",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.13059"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2109.13059 #endpoints_compatible #region-us
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
### cambridgeltl/trans-encoder-bi-simcse-bert-base
An unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-bert-base-uncased as the base model. Please use '[CLS]' (before pooler) as the representation of the input.
|
[
"### cambridgeltl/trans-encoder-bi-simcse-bert-base\nAn unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-bert-base-uncased as the base model. Please use '[CLS]' (before pooler) as the representation of the input."
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2109.13059 #endpoints_compatible #region-us \n",
"### cambridgeltl/trans-encoder-bi-simcse-bert-base\nAn unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-bert-base-uncased as the base model. Please use '[CLS]' (before pooler) as the representation of the input."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
### cambridgeltl/trans-encoder-bi-simcse-bert-large
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-bert-large-uncased](https://huggingface.co/princeton-nlp/unsup-simcse-bert-large-uncased) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
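Usage follows the same bi-encoder pattern as the other checkpoints in this family; only the model name changes. A brief sketch (standard `transformers` auto classes assumed, inputs illustrative):
```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "cambridgeltl/trans-encoder-bi-simcse-bert-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

batch = tokenizer(["Rain is falling.", "It is raining outside."],  # illustrative pair
                  padding=True, return_tensors="pt")
with torch.no_grad():
    cls = model(**batch).last_hidden_state[:, 0, :]  # [CLS] before the pooler
print(torch.nn.functional.cosine_similarity(cls[0], cls[1], dim=0).item())
```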
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
{}
|
cambridgeltl/trans-encoder-bi-simcse-bert-large
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2109.13059",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.13059"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2109.13059 #endpoints_compatible #region-us
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
### cambridgeltl/trans-encoder-bi-simcse-bert-large
An unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-bert-large-uncased as the base model. Please use '[CLS]' (before pooler) as the representation of the input.
|
[
"### cambridgeltl/trans-encoder-bi-simcse-bert-large\nAn unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-bert-large-uncased as the base model. Please use '[CLS]' (before pooler) as the representation of the input."
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2109.13059 #endpoints_compatible #region-us \n",
"### cambridgeltl/trans-encoder-bi-simcse-bert-large\nAn unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-bert-large-uncased as the base model. Please use '[CLS]' (before pooler) as the representation of the input."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
### cambridgeltl/trans-encoder-bi-simcse-roberta-base
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-roberta-base](https://huggingface.co/princeton-nlp/unsup-simcse-roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
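A short usage sketch for this checkpoint (not from the original card; the sentence pair is illustrative). Each sentence is encoded separately and the pair is scored by cosine similarity of the `[CLS]`-before-pooler vectors:
```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "cambridgeltl/trans-encoder-bi-simcse-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

batch = tokenizer(["The kids are playing outdoors.", "Children play outside."],  # illustrative
                  padding=True, return_tensors="pt")
with torch.no_grad():
    cls = model(**batch).last_hidden_state[:, 0, :]  # [CLS] before the pooler
print(torch.nn.functional.cosine_similarity(cls[0], cls[1], dim=0).item())
```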
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
{}
|
cambridgeltl/trans-encoder-bi-simcse-roberta-base
| null |
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2109.13059",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.13059"
] |
[] |
TAGS
#transformers #pytorch #roberta #feature-extraction #arxiv-2109.13059 #endpoints_compatible #region-us
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
### cambridgeltl/trans-encoder-bi-simcse-roberta-base
An unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input.
|
[
"### cambridgeltl/trans-encoder-bi-simcse-roberta-base\nAn unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input."
] |
[
"TAGS\n#transformers #pytorch #roberta #feature-extraction #arxiv-2109.13059 #endpoints_compatible #region-us \n",
"### cambridgeltl/trans-encoder-bi-simcse-roberta-base\nAn unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-roberta-base as the base model. Please use '[CLS]' (before pooler) as the representation of the input."
] |
feature-extraction
|
transformers
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
### cambridgeltl/trans-encoder-bi-simcse-roberta-large
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-roberta-large](https://huggingface.co/princeton-nlp/unsup-simcse-roberta-large) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
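The same bi-encoder usage applies here; a brief sketch with the large checkpoint (standard `transformers` auto classes assumed, inputs illustrative):
```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "cambridgeltl/trans-encoder-bi-simcse-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

batch = tokenizer(["A plane is taking off.", "An airplane departs from the runway."],  # illustrative
                  padding=True, return_tensors="pt")
with torch.no_grad():
    cls = model(**batch).last_hidden_state[:, 0, :]  # [CLS] before the pooler
print(torch.nn.functional.cosine_similarity(cls[0], cls[1], dim=0).item())
```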
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
{}
|
cambridgeltl/trans-encoder-bi-simcse-roberta-large
| null |
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2109.13059",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.13059"
] |
[] |
TAGS
#transformers #pytorch #roberta #feature-extraction #arxiv-2109.13059 #endpoints_compatible #region-us
|
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
### cambridgeltl/trans-encoder-bi-simcse-roberta-large
An unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-roberta-large as the base model. Please use '[CLS]' (before pooler) as the representation of the input.
|
[
"### cambridgeltl/trans-encoder-bi-simcse-roberta-large\nAn unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-roberta-large as the base model. Please use '[CLS]' (before pooler) as the representation of the input."
] |
[
"TAGS\n#transformers #pytorch #roberta #feature-extraction #arxiv-2109.13059 #endpoints_compatible #region-us \n",
"### cambridgeltl/trans-encoder-bi-simcse-roberta-large\nAn unsupervised sentence encoder (bi-encoder) proposed by Liu et al. (2021). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using princeton-nlp/unsup-simcse-roberta-large as the base model. Please use '[CLS]' (before pooler) as the representation of the input."
] |
null |
transformers
|
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-ccnet-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-ccnet-4gb", tokenizer="camembert/camembert-base-ccnet-4gb")
results = camembert_fill_mask("Le camembert est-il <mask> ?")
# results
#[{'sequence': '<s> Le camembert est-il sain?</s>', 'score': 0.07001790404319763, 'token': 10286},
#{'sequence': '<s> Le camembert est-il français?</s>', 'score': 0.057594332844018936, 'token': 384},
#{'sequence': '<s> Le camembert est-il bon?</s>', 'score': 0.04098724573850632, 'token': 305},
#{'sequence': '<s> Le camembert est-il périmé?</s>', 'score': 0.03486393392086029, 'token': 30862},
#{'sequence': '<s> Le camembert est-il cher?</s>', 'score': 0.021535946056246758, 'token': 1604}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[ 0.0331, 0.0095, -0.2776, ..., 0.2875, -0.0827, -0.2467],
# [-0.1348, 0.0478, -0.5409, ..., 0.8330, 0.0467, 0.0662],
# [ 0.0920, -0.0264, 0.0177, ..., 0.1112, 0.0108, -0.1123],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-ccnet-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0144, 0.1855, 0.4895, ..., -0.1537, 0.0107, -0.2293],
# [-0.6664, -0.0880, -0.1539, ..., 0.3635, 0.4047, 0.1258],
# [ 0.0511, 0.0540, 0.2545, ..., 0.0709, -0.0288, -0.0779],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
{"language": "fr"}
|
almanach/camembert-base-ccnet-4gb
| null |
[
"transformers",
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.03894"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us
|
CamemBERT: a Tasty French Language Model
========================================
Introduction
------------
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains.
For further information or requests, please go to Camembert Website
Pre-trained models
------------------
How to use CamemBERT with HuggingFace
-------------------------------------
##### Load CamemBERT and its sub-word tokenizer :
##### Filling masks using pipeline
##### Extract contextual embedding features from Camembert output
##### Extract contextual embedding features from all Camembert layers
Authors
-------
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
If you use our work, please cite:
|
[
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us \n",
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
null |
transformers
|
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-ccnet")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-ccnet", tokenizer="camembert/camembert-base-ccnet")
results = camembert_fill_mask("Le camembert est <mask> :)")
# results
#[{'sequence': '<s> Le camembert est bon :)</s>', 'score': 0.14011502265930176, 'token': 305},
# {'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.13929404318332672, 'token': 11661},
# {'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.07010319083929062, 'token': 3497},
# {'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.025885622948408127, 'token': 2528},
# {'sequence': '<s> Le camembert est top :)</s>', 'score': 0.025684962049126625, 'token': 2328}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁cam', 'ember', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[ 0.0667, -0.2467, 0.0954, ..., 0.2144, 0.0279, 0.3621],
# [-0.0472, 0.4092, -0.6602, ..., 0.2095, 0.1391, -0.0401],
# [ 0.1911, -0.2347, -0.0811, ..., 0.4306, -0.0639, 0.1821],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-ccnet", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[ 0.0057, -0.1022, 0.0163, ..., -0.0675, -0.0360, 0.1078],
# [-0.1096, -0.3344, -0.0593, ..., 0.1625, -0.0432, -0.1646],
# [ 0.3751, -0.3829, 0.0844, ..., 0.1067, -0.0330, 0.3334],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
{"language": "fr"}
|
almanach/camembert-base-ccnet
| null |
[
"transformers",
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.03894"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us
|
CamemBERT: a Tasty French Language Model
========================================
Introduction
------------
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains.
For further information or requests, please go to Camembert Website
Pre-trained models
------------------
How to use CamemBERT with HuggingFace
-------------------------------------
##### Load CamemBERT and its sub-word tokenizer :
##### Filling masks using pipeline
##### Extract contextual embedding features from Camembert output
##### Extract contextual embedding features from all Camembert layers
Authors
-------
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
If you use our work, please cite:
|
[
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us \n",
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
null |
transformers
|
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-oscar-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-oscar-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-oscar-4gb", tokenizer="camembert/camembert-base-oscar-4gb")
>>> results = camembert_fill_mask("Le camembert est <mask> !")
# results
#[{'sequence': '<s> Le camembert est parfait!</s>', 'score': 0.04089554399251938, 'token': 1654},
#{'sequence': '<s> Le camembert est délicieux!</s>', 'score': 0.037193264812231064, 'token': 7200},
#{'sequence': '<s> Le camembert est prêt!</s>', 'score': 0.025467922911047935, 'token': 1415},
#{'sequence': '<s> Le camembert est meilleur!</s>', 'score': 0.022812040522694588, 'token': 528},
#{'sequence': '<s> Le camembert est différent!</s>', 'score': 0.017135459929704666, 'token': 2935}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 121, 11, 660, 16, 730, 25543, 110, 83, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[-0.1120, -0.1464, 0.0181, ..., -0.1723, -0.0278, 0.1606],
# [ 0.1234, 0.1202, -0.0773, ..., -0.0405, -0.0668, -0.0788],
# [-0.0440, 0.0480, -0.1926, ..., 0.1066, -0.0961, 0.0637],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-oscar-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-oscar-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.1584, -0.1207, -0.0179, ..., 0.5457, 0.1491, -0.1191],
# [-0.1122, 0.3634, 0.0676, ..., 0.4395, -0.0470, -0.3781],
# [-0.2232, 0.0019, 0.0140, ..., 0.4461, -0.0233, 0.0735],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
{"language": "fr"}
|
almanach/camembert-base-oscar-4gb
| null |
[
"transformers",
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.03894"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us
|
CamemBERT: a Tasty French Language Model
========================================
Introduction
------------
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains.
For further information or requests, please go to Camembert Website
Pre-trained models
------------------
How to use CamemBERT with HuggingFace
-------------------------------------
##### Load CamemBERT and its sub-word tokenizer :
##### Filling masks using pipeline
##### Extract contextual embedding features from Camembert output
##### Extract contextual embedding features from all Camembert layers
Authors
-------
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
If you use our work, please cite:
|
[
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us \n",
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
null |
transformers
|
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-wikipedia-4gb", tokenizer="camembert/camembert-base-wikipedia-4gb")
results = camembert_fill_mask("Le camembert est un fromage de <mask>!")
# results
#[{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370},
#{'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616},
#{'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364},
# {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236},
#{'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[-0.0928, 0.0506, -0.0094, ..., -0.2388, 0.1177, -0.1302],
# [ 0.0662, 0.1030, -0.2355, ..., -0.4224, -0.0574, -0.2802],
# [-0.0729, 0.0547, 0.0192, ..., -0.1743, 0.0998, -0.2677],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0059, -0.0227, 0.0065, ..., -0.0770, 0.0369, 0.0095],
# [ 0.2838, -0.1531, -0.3642, ..., -0.0027, -0.8502, -0.7914],
# [-0.0073, -0.0338, -0.0011, ..., 0.0533, -0.0250, -0.0061],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
{"language": "fr"}
|
almanach/camembert-base-wikipedia-4gb
| null |
[
"transformers",
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.03894"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us
|
CamemBERT: a Tasty French Language Model
========================================
Introduction
------------
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains.
For further information or requests, please go to Camembert Website
Pre-trained models
------------------
How to use CamemBERT with HuggingFace
-------------------------------------
##### Load CamemBERT and its sub-word tokenizer :
##### Filling masks using pipeline
##### Extract contextual embedding features from Camembert output
##### Extract contextual embedding features from all Camembert layers
Authors
-------
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
If you use our work, please cite:
|
[
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us \n",
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
fill-mask
|
transformers
|
> 🚨 **Update:** This checkpoint is deprecated, please use https://huggingface.co/almanach/camembert-base instead 🚨
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-wikipedia-4gb", tokenizer="camembert/camembert-base-wikipedia-4gb")
results = camembert_fill_mask("Le camembert est un fromage de <mask>!")
# results
#[{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370},
#{'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616},
#{'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364},
# {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236},
#{'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[-0.0928, 0.0506, -0.0094, ..., -0.2388, 0.1177, -0.1302],
# [ 0.0662, 0.1030, -0.2355, ..., -0.4224, -0.0574, -0.2802],
# [-0.0729, 0.0547, 0.0192, ..., -0.1743, 0.0998, -0.2677],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0059, -0.0227, 0.0065, ..., -0.0770, 0.0369, 0.0095],
# [ 0.2838, -0.1531, -0.3642, ..., -0.0027, -0.8502, -0.7914],
# [-0.0073, -0.0338, -0.0011, ..., 0.0533, -0.0250, -0.0061],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
{"language": "fr"}
|
almanach/camembert-base-legacy
| null |
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"fr",
"arxiv:1911.03894",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.03894"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #fill-mask #fr #arxiv-1911.03894 #autotrain_compatible #endpoints_compatible #region-us
|
> Update: This checkpoint is deprecated, please use URL instead
CamemBERT: a Tasty French Language Model
========================================
Introduction
------------
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to Camembert Website
Pre-trained models
------------------
How to use CamemBERT with HuggingFace
-------------------------------------
##### Load CamemBERT and its sub-word tokenizer :
##### Filling masks using pipeline
##### Extract contextual embedding features from Camembert output
##### Extract contextual embedding features from all Camembert layers
Authors
-------
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
If you use our work, please cite:
|
[
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #camembert #fill-mask #fr #arxiv-1911.03894 #autotrain_compatible #endpoints_compatible #region-us \n",
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
null |
transformers
|
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-large")
camembert = CamembertModel.from_pretrained("camembert/camembert-large")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-large", tokenizer="camembert/camembert-large")
results = camembert_fill_mask("Le camembert est <mask> :)")
# results
#[{'sequence': '<s> Le camembert est bon :)</s>', 'score': 0.15560828149318695, 'token': 305},
#{'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.06821336597204208, 'token': 3497},
#{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.060438305139541626, 'token': 11661},
#{'sequence': '<s> Le camembert est ici :)</s>', 'score': 0.02023460529744625, 'token': 373},
#{'sequence': '<s> Le camembert est meilleur :)</s>', 'score': 0.01778135634958744, 'token': 876}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁cam', 'ember', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# torch.Size([1, 10, 1024])
#tensor([[[-0.1284, 0.2643, 0.4374, ..., 0.1627, 0.1308, -0.2305],
# [ 0.4576, -0.6345, -0.2029, ..., -0.1359, -0.2290, -0.6318],
# [ 0.0381, 0.0429, 0.5111, ..., -0.1177, -0.1913, -0.1121],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-large", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-large", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 25 (input embedding layer + 24 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 1024])
#tensor([[[-0.0600, 0.0742, 0.0332, ..., -0.0525, -0.0637, -0.0287],
# [ 0.0950, 0.2840, 0.1985, ..., 0.2073, -0.2172, -0.6321],
# [ 0.1381, 0.1872, 0.1614, ..., -0.0339, -0.2530, -0.1182],
# ...,
```
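The hidden states above can be pooled in several ways; the following is a minimal sketch (an assumption, not part of the original card) that continues from the block above and averages the last four layers into a single contextual vector per token, a common feature-extraction heuristic.

```python
# Minimal sketch (assumption, not from the original card): average the last four
# hidden layers of camembert-large to get one 1024-dim vector per token.
import torch

last_four = torch.stack(all_layer_embeddings[-4:])  # shape: (4, 1, seq_len, 1024)
token_vectors = last_four.mean(dim=0)               # shape: (1, seq_len, 1024)
print(token_vectors.size())
```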
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
{"language": "fr"}
|
almanach/camembert-large
| null |
[
"transformers",
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.03894"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us
|
CamemBERT: a Tasty French Language Model
========================================
Introduction
------------
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to Camembert Website
Pre-trained models
------------------
How to use CamemBERT with HuggingFace
-------------------------------------
##### Load CamemBERT and its sub-word tokenizer :
##### Filling masks using pipeline
##### Extract contextual embedding features from Camembert output
##### Extract contextual embedding features from all Camembert layers
Authors
-------
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
If you use our work, please cite:
|
[
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #camembert #fr #arxiv-1911.03894 #endpoints_compatible #region-us \n",
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-earlystop
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9095
- Rouge1: 27.9262
- Rouge2: 11.895
- Rougel: 21.4029
- Rougelsum: 24.7805
- Gen Len: 67.68
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
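For reference, a hedged sketch (not the author's original training script) of how these values might map onto the `transformers` Trainer API follows; the `output_dir`, evaluation/save strategies and early-stopping patience are assumptions inferred from the model name, not documented settings.

```python
# Hedged sketch only: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir, evaluation/save strategies and the patience value are assumptions.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 are already the Trainer defaults.
from transformers import Seq2SeqTrainingArguments, EarlyStoppingCallback

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-weaksup-1000-earlystop",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumed, so early stopping can monitor eval loss
    save_strategy="epoch",        # assumed, required by load_best_model_at_end
    load_best_model_at_end=True,  # assumed, required by EarlyStoppingCallback
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=1)  # patience is an assumption
```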
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.502 | 1.0 | 1000 | 1.7405 | 26.5705 | 11.4807 | 20.1226 | 23.6827 | 66.73 |
| 0.7337 | 2.0 | 2000 | 1.9095 | 27.9262 | 11.895 | 21.4029 | 24.7805 | 67.68 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-large-cnn-finetuned-weaksup-1000-earlystop", "results": []}]}
|
cammy/bart-large-cnn-finetuned-weaksup-1000-earlystop
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
bart-large-cnn-finetuned-weaksup-1000-earlystop
===============================================
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9095
* Rouge1: 27.9262
* Rouge2: 11.895
* Rougel: 21.4029
* Rougelsum: 24.7805
* Gen Len: 67.68
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-pad
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4168
- Rouge1: 26.2506
- Rouge2: 10.7802
- Rougel: 19.2236
- Rougelsum: 22.6883
- Gen Len: 68.74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1434 | 1.0 | 1000 | 0.4168 | 26.2506 | 10.7802 | 19.2236 | 22.6883 | 68.74 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-large-cnn-finetuned-weaksup-1000-pad", "results": []}]}
|
cammy/bart-large-cnn-finetuned-weaksup-1000-pad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
bart-large-cnn-finetuned-weaksup-1000-pad
=========================================
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4168
* Rouge1: 26.2506
* Rouge2: 10.7802
* Rougel: 19.2236
* Rougelsum: 22.6883
* Gen Len: 68.74
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6325
- Rouge1: 26.1954
- Rouge2: 10.7128
- Rougel: 19.3873
- Rougelsum: 22.785
- Gen Len: 66.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.3896 | 1.0 | 1000 | 1.6325 | 26.1954 | 10.7128 | 19.3873 | 22.785 | 66.85 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-large-cnn-finetuned-weaksup-1000", "results": []}]}
|
cammy/bart-large-cnn-finetuned-weaksup-1000
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
bart-large-cnn-finetuned-weaksup-1000
=====================================
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6325
* Rouge1: 26.1954
* Rouge2: 10.7128
* Rougel: 19.3873
* Rougelsum: 22.785
* Gen Len: 66.85
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-10000-pad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3541
- eval_rouge1: 27.8229
- eval_rouge2: 12.9484
- eval_rougeL: 21.4909
- eval_rougeLsum: 24.7737
- eval_gen_len: 67.365
- eval_runtime: 1162.9446
- eval_samples_per_second: 0.86
- eval_steps_per_second: 0.86
- epoch: 2.0
- step: 20000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-large-cnn-finetuned-weaksup-10000-pad-early", "results": []}]}
|
cammy/bart-large-cnn-finetuned-weaksup-10000-pad-early
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# bart-large-cnn-finetuned-weaksup-10000-pad-early
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3541
- eval_rouge1: 27.8229
- eval_rouge2: 12.9484
- eval_rougeL: 21.4909
- eval_rougeLsum: 24.7737
- eval_gen_len: 67.365
- eval_runtime: 1162.9446
- eval_samples_per_second: 0.86
- eval_steps_per_second: 0.86
- epoch: 2.0
- step: 20000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# bart-large-cnn-finetuned-weaksup-10000-pad-early\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.3541\n- eval_rouge1: 27.8229\n- eval_rouge2: 12.9484\n- eval_rougeL: 21.4909\n- eval_rougeLsum: 24.7737\n- eval_gen_len: 67.365\n- eval_runtime: 1162.9446\n- eval_samples_per_second: 0.86\n- eval_steps_per_second: 0.86\n- epoch: 2.0\n- step: 20000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.2\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# bart-large-cnn-finetuned-weaksup-10000-pad-early\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.3541\n- eval_rouge1: 27.8229\n- eval_rouge2: 12.9484\n- eval_rougeL: 21.4909\n- eval_rougeLsum: 24.7737\n- eval_gen_len: 67.365\n- eval_runtime: 1162.9446\n- eval_samples_per_second: 0.86\n- eval_steps_per_second: 0.86\n- epoch: 2.0\n- step: 20000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.2\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-10000
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6031
- Rouge1: 28.3912
- Rouge2: 13.655
- Rougel: 22.287
- Rougelsum: 25.4794
- Gen Len: 67.995
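The card does not include inference code; a hedged usage sketch (assuming the checkpoint is public on the Hub under the repository name above, with placeholder input text and generation lengths) would look like this:

```python
# Hedged usage sketch; the article text and length limits are placeholders.
from transformers import pipeline

summarizer = pipeline("summarization", model="cammy/bart-large-cnn-finetuned-weaksup-10000")
article = "Replace this with the news article you want to summarize ..."
print(summarizer(article, min_length=20, max_length=80)[0]["summary_text"])
```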
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 1.2991 | 1.0 | 10000 | 1.6031 | 28.3912 | 13.655 | 22.287 | 25.4794 | 67.995 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-large-cnn-finetuned-weaksup-10000", "results": []}]}
|
cammy/bart-large-cnn-finetuned-weaksup-10000
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
bart-large-cnn-finetuned-weaksup-10000
======================================
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6031
* Rouge1: 28.3912
* Rouge2: 13.655
* Rougel: 22.287
* Rougelsum: 25.4794
* Gen Len: 67.995
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-weaksup-1000
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6818
- Rouge1: 25.9199
- Rouge2: 11.2697
- Rougel: 20.3598
- Rougelsum: 22.8242
- Gen Len: 66.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.644 | 1.0 | 1000 | 1.6818 | 25.9199 | 11.2697 | 20.3598 | 22.8242 | 66.44 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "distilbart-cnn-12-6-finetuned-weaksup-1000", "results": []}]}
|
cammy/distilbart-cnn-12-6-finetuned-weaksup-1000
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbart-cnn-12-6-finetuned-weaksup-1000
==========================================
This model is a fine-tuned version of sshleifer/distilbart-cnn-12-6 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6818
* Rouge1: 25.9199
* Rouge2: 11.2697
* Rougel: 20.3598
* Rougelsum: 22.8242
* Gen Len: 66.44
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-multi_news-finetuned-weaksup-1000-pegasus
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1309
- Rouge1: 23.342
- Rouge2: 8.67
- Rougel: 17.2865
- Rougelsum: 19.8228
- Gen Len: 69.79
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 2.4526 | 1.0 | 1000 | 2.1309 | 23.342 | 8.67 | 17.2865 | 19.8228 | 69.79 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "pegasus-multi_news-finetuned-weaksup-1000-pegasus", "results": []}]}
|
cammy/pegasus-multi_news-finetuned-weaksup-1000-pegasus
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
pegasus-multi\_news-finetuned-weaksup-1000-pegasus
==================================================
This model is a fine-tuned version of google/pegasus-multi\_news on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1309
* Rouge1: 23.342
* Rouge2: 8.67
* Rougel: 17.2865
* Rougelsum: 19.8228
* Gen Len: 69.79
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-weaksup-1000
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-finetuned-weaksup-1000", "results": []}]}
|
cammy/roberta-base-finetuned-weaksup-1000
| null |
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #encoder-decoder #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-base-finetuned-weaksup-1000
This model is a fine-tuned version of [](URL) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# roberta-base-finetuned-weaksup-1000\n\nThis model is a fine-tuned version of [](URL on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #encoder-decoder #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-base-finetuned-weaksup-1000\n\nThis model is a fine-tuned version of [](URL on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-weaksup-1000
This model is a fine-tuned version of [cammy/t5-base-finetuned-weaksup-1000](https://huggingface.co/cammy/t5-base-finetuned-weaksup-1000) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6699
- Rouge1: 22.2079
- Rouge2: 9.54
- Rougel: 19.9593
- Rougelsum: 20.2524
- Gen Len: 18.17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.6257 | 1.0 | 1000 | 1.6699 | 22.2079 | 9.54 | 19.9593 | 20.2524 | 18.17 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-base-finetuned-weaksup-1000", "results": []}]}
|
cammy/t5-base-finetuned-weaksup-1000
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-base-finetuned-weaksup-1000
==============================
This model is a fine-tuned version of cammy/t5-base-finetuned-weaksup-1000 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6699
* Rouge1: 22.2079
* Rouge2: 9.54
* Rougel: 19.9593
* Rougelsum: 20.2524
* Gen Len: 18.17
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-generation
|
transformers
|
news generator dummy
|
{}
|
candra/gpt2-newgen-test
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
news generator dummy
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
small gpt2 headline
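The card gives no usage example; a hedged sketch (the prompt, language and generation settings are placeholders, since the headline domain is not documented) might look like:

```python
# Hedged sketch only: sample a short headline continuation from the checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="candra/headline-small-gpt2")
print(generator("Today's top story:", max_length=30, num_return_sequences=1)[0]["generated_text"])
```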
|
{}
|
candra/headline-small-gpt2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
small gpt2 headline
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
audio-to-audio
|
asteroid
|
## Asteroid model `cankeles/ConvTasNet_WHAMR_enhsingle_16k`
Description:
This model was fine-tuned on a modified version of WHAMR! in which the speakers were taken from audiobook recordings and reverb was added with Pedalboard (Spotify).
The initial model was taken from here: https://huggingface.co/JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k
This model was trained by M. Can Keles using the WHAM recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the WHAM dataset.
Training config:
```yml
data:
mode: min
nondefault_nsrc: null
sample_rate: 16000
task: enh_single
train_dir: wav16k/min/tr/
valid_dir: wav16k/min/cv/
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/tmp
help: null
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 2
early_stop: true
epochs: 10
half_lr: true
num_workers: 4
```
Results:
```
'sar': 13.612368475881558,
'sar_imp': 9.709316571584433,
'sdr': 13.612368475881558,
'sdr_imp': 9.709316571584433,
'si_sdr': 12.978640274976373,
'si_sdr_imp': 9.161273840297232,
'sir': inf,
'sir_imp': nan,
'stoi': 0.9214516928197306,
'stoi_imp': 0.11657488247668318
```
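A hedged loading sketch (assuming Asteroid's standard Hub integration; the input file name is a placeholder):

```python
# Hedged sketch: load the checkpoint from the Hub with Asteroid and enhance a noisy recording.
# "noisy_speech.wav" is a placeholder path, not part of the original card.
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("cankeles/ConvTasNet_WHAMR_enhsingle_16k")
model.separate("noisy_speech.wav", force_overwrite=True)  # writes the enhanced estimate next to the input
```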
|
{"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]}
|
cankeles/ConvTasNet_WHAMR_enhsingle_16k
| null |
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us
|
## Asteroid model 'cankeles/ConvTasNet_WHAMR_enhsingle_16k'
Description:
This model was fine-tuned on a modified version of WHAMR! in which the speakers were taken from audiobook recordings and reverb was added using Spotify's Pedalboard library.
The initial model was taken from here: URL
This model was trained by M. Can Keles using the WHAM recipe in Asteroid.
It was trained on the 'enh_single' task of the WHAM dataset.
Training config:
Results:
|
[
"## Asteroid model 'cankeles/ConvTasNet_WHAMR_enhsingle_16k'\n\nDescription:\n\nThis model was fine tuned on a modified version of WHAMR! where the speakers were taken from audiobook recordings and reverb was added by Pedalboard, Spotify.\n\nThe initial model was taken from here: URL\n\nThis model was trained by M. Can Keles using the WHAM recipe in Asteroid.\nIt was trained on the 'enh_single' task of the WHAM dataset.\n\nTraining config:\n\n\n \n\nResults:"
] |
[
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n",
"## Asteroid model 'cankeles/ConvTasNet_WHAMR_enhsingle_16k'\n\nDescription:\n\nThis model was fine tuned on a modified version of WHAMR! where the speakers were taken from audiobook recordings and reverb was added by Pedalboard, Spotify.\n\nThe initial model was taken from here: URL\n\nThis model was trained by M. Can Keles using the WHAM recipe in Asteroid.\nIt was trained on the 'enh_single' task of the WHAM dataset.\n\nTraining config:\n\n\n \n\nResults:"
] |
audio-to-audio
|
asteroid
|
## Asteroid model `cankeles/DPTNet_WHAMR_enhsignle_16k`
Description:
This model was trained by M. Can Keleş using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
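
A minimal loading sketch is shown below; the file name is a placeholder and the exact behaviour of `separate` depends on your Asteroid version.

```python
# Usage sketch only; "mixture.wav" is a placeholder 16 kHz file.
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("cankeles/DPTNet_WHAMR_enhsingle_16k")

# With recent Asteroid releases this writes the enhanced estimate next to the
# input file (e.g. "mixture_est1.wav").
model.separate("mixture.wav")
```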
Training config:
```yml
data:
mode: min
nondefault_nsrc: null
sample_rate: 16000
segment: 2.0
task: enh_single
train_dir: wav16k/min/tr/
valid_dir: wav16k/min/cv/
filterbank:
kernel_size: 16
n_filters: 64
stride: 8
main_args:
exp_dir: exp/tmp
help: null
masknet:
bidirectional: true
chunk_size: 100
dropout: 0
ff_activation: relu
ff_hid: 256
hop_size: 50
in_chan: 64
mask_act: sigmoid
n_repeats: 2
n_src: 1
norm_type: gLN
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
positional arguments: {}
scheduler:
d_model: 64
steps_per_epoch: 10000
training:
batch_size: 4
early_stop: true
epochs: 60
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On custom min test set :
```yml
'sar': 12.853384266251018,
'sar_imp': 8.950332361953906,
'sdr': 12.853384266251018,
'sdr_imp': 8.950332361953906,
'si_sdr': 12.247012621312548,
'si_sdr_imp': 8.429646186633407,
'sir': inf,
'sir_imp': nan,
'stoi': 0.9022338865380519,
'stoi_imp': 0.09735707619500522
```
|
{"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DPTNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]}
|
cankeles/DPTNet_WHAMR_enhsingle_16k
| null |
[
"asteroid",
"pytorch",
"audio",
"DPTNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#asteroid #pytorch #audio #DPTNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'cankeles/DPTNet_WHAMR_enhsignle_16k'
Description:
This model was trained by M. Can Keleş using the librimix recipe in Asteroid.
It was trained on the 'enh_single' task of the Libri1Mix dataset.
Training config:
Results:
On custom min test set :
|
[
"## Asteroid model 'cankeles/DPTNet_WHAMR_enhsignle_16k'\n\nDescription:\n\nThis model was trained by M. Can Keleş using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn custom min test set :"
] |
[
"TAGS\n#asteroid #pytorch #audio #DPTNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'cankeles/DPTNet_WHAMR_enhsignle_16k'\n\nDescription:\n\nThis model was trained by M. Can Keleş using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn custom min test set :"
] |
feature-extraction
|
transformers
|
# BERT-of-Theseus
See our paper ["BERT-of-Theseus: Compressing BERT by Progressive Module Replacing"](http://arxiv.org/abs/2002.02925).
BERT-of-Theseus is a new compressed BERT by progressively replacing the components of the original BERT.

## Load Pretrained Model on MNLI
We provide a 6-layer pretrained model on MNLI as a general-purpose model, which can transfer to other sentence classification tasks, outperforming DistilBERT (with the same 6-layer structure) on six tasks of GLUE (dev set).
| Method | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B |
|-----------------|------|------|------|------|------|-------|-------|
| BERT-base | 83.5 | 89.5 | 91.2 | 89.8 | 71.1 | 91.5 | 88.9 |
| DistilBERT      | 79.0 | 87.5 | 85.3 | 84.9 | 59.9 | 90.7  | 81.2  |
| BERT-of-Theseus | 82.1 | 87.5 | 88.8 | 88.8 | 70.1 | 91.8 | 87.8 |
Please Note: this checkpoint is for [Intermediate-Task Transfer Learning](https://arxiv.org/abs/2005.00628) so it does not include the classification head for MNLI! Please fine-tune it before use (like DistilBERT).
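
As a rough illustration of the note above, the sketch below loads the checkpoint with a freshly initialized classification head; `num_labels=2` is a placeholder for your downstream task, and the head must be fine-tuned before use.

```python
# Sketch only: the head is randomly initialized and num_labels=2 is a placeholder.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "canwenxu/BERT-of-Theseus-MNLI"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

inputs = tokenizer("This checkpoint still needs a task head.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]); fine-tune before trusting these logits
```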
|
{"datasets": ["multi_nli"], "thumbnail": "https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png"}
|
canwenxu/BERT-of-Theseus-MNLI
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"dataset:multi_nli",
"arxiv:2002.02925",
"arxiv:2005.00628",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.02925",
"2005.00628"
] |
[] |
TAGS
#transformers #pytorch #jax #bert #feature-extraction #dataset-multi_nli #arxiv-2002.02925 #arxiv-2005.00628 #endpoints_compatible #region-us
|
BERT-of-Theseus
===============
See our paper "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing".
BERT-of-Theseus is a new compressed BERT by progressively replacing the components of the original BERT.
!BERT of Theseus
Load Pretrained Model on MNLI
-----------------------------
We provide a 6-layer pretrained model on MNLI as a general-purpose model, which can transfer to other sentence classification tasks, outperforming DistilBERT (with the same 6-layer structure) on six tasks of GLUE (dev set).
Please Note: this checkpoint is for Intermediate-Task Transfer Learning so it does not include the classification head for MNLI! Please fine-tune it before use (like DistilBERT).
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #dataset-multi_nli #arxiv-2002.02925 #arxiv-2005.00628 #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Chris DialoGPT Model
|
{"tags": ["conversational"]}
|
caps1994/DialoGPT-small-chrisbot-caps1994
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Chris DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Chris DialoGPT Model
|
{"tags": ["conversational"]}
|
caps1994/DialoGPT-small-chrisbot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Chris DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
caps1994/DialoGPT-small-harrypotter-caps1994
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
fill-mask
|
transformers
|
# Twitter 2019 90M (RoBERTa-base)
This is a RoBERTa-base model trained on 90M tweets until the end of 2019.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-2019-90m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.28870 getting
2) 0.28611 not
3) 0.15485 fully
4) 0.07357 self
5) 0.01812 being
------------------------------
I keep forgetting to bring a <mask>.
1) 0.12194 book
2) 0.04396 pillow
3) 0.04202 bag
4) 0.03038 wallet
5) 0.02729 charger
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.65505 End
2) 0.19230 The
3) 0.03856 the
4) 0.01223 end
5) 0.00978 this
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-2019-90m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99078 The movie was great
2) 0.96701 Just finished reading 'Embeddings in NLP'
3) 0.96037 I just ordered fried chicken 🐣
4) 0.95919 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-2019-90m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-2019-90m
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter 2019 90M (RoBERTa-base)
This is a RoBERTa-base model trained on 90M tweets until the end of 2019.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter 2021 90M (RoBERTa-base)\n\nThis is a RoBERTa-base model trained on 90M tweets until the end of 2019.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter 2021 90M (RoBERTa-base)\n\nThis is a RoBERTa-base model trained on 90M tweets until the end of 2019.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
fill-mask
|
transformers
|
# Twitter 2021 124M (RoBERTa-base)
This is a RoBERTa-base model trained on 123.86M tweets until the end of 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.39613 fully
2) 0.26333 getting
3) 0.18988 not
4) 0.02312 still
5) 0.02099 already
------------------------------
I keep forgetting to bring a <mask>.
1) 0.08356 mask
2) 0.05696 book
3) 0.03505 bag
4) 0.02983 backpack
5) 0.02847 blanket
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.46618 the
2) 0.24042 The
3) 0.03216 End
4) 0.02925 Squid
5) 0.02610 this
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.98969 The movie was great
2) 0.96102 Just finished reading 'Embeddings in NLP'
3) 0.95565 I just ordered fried chicken 🐣
4) 0.95041 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-2021-124m
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Twitter 2021 124M (RoBERTa-base)
This is a RoBERTa-base model trained on 123.86M tweets until the end of 2021.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter 2021 124M (RoBERTa-base)\n\nThis is a RoBERTa-base model trained on 123.86M tweets until the end of 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Twitter 2021 124M (RoBERTa-base)\n\nThis is a RoBERTa-base model trained on 123.86M tweets until the end of 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
fill-mask
|
transformers
|
# Twitter December 2020 (RoBERTa-base, 107M)
This is a RoBERTa-base model trained on 107.06M tweets until the end of December 2020.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-dec2020"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.42239 not
2) 0.23834 getting
3) 0.10684 fully
4) 0.07550 being
5) 0.02097 already
------------------------------
I keep forgetting to bring a <mask>.
1) 0.08145 mask
2) 0.05051 laptop
3) 0.04620 book
4) 0.03910 bag
5) 0.03824 blanket
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.57602 the
2) 0.25120 The
3) 0.02610 End
4) 0.02324 this
5) 0.00690 This
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-dec2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99084 The movie was great
2) 0.96618 Just finished reading 'Embeddings in NLP'
3) 0.96127 I just ordered fried chicken 🐣
4) 0.95315 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-dec2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-dec2020
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Twitter December 2020 (RoBERTa-base, 107M)
This is a RoBERTa-base model trained on 107.06M tweets until the end of December 2020.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter December 2020 (RoBERTa-base, 107M)\n\nThis is a RoBERTa-base model trained on 107.06M tweets until the end of December 2020.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Twitter December 2020 (RoBERTa-base, 107M)\n\nThis is a RoBERTa-base model trained on 107.06M tweets until the end of December 2020.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
fill-mask
|
transformers
|
# Twitter December 2021 (RoBERTa-base, 124M)
This is a RoBERTa-base model trained on 123.86M tweets until the end of December 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-dec2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.33211 fully
2) 0.26205 not
3) 0.22305 getting
4) 0.03790 still
5) 0.01817 all
------------------------------
I keep forgetting to bring a <mask>.
1) 0.04808 mask
2) 0.04628 book
3) 0.03597 lighter
4) 0.03391 pen
5) 0.02982 knife
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.34191 Squid
2) 0.23768 the
3) 0.15699 The
4) 0.02766 End
5) 0.01233 this
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-dec2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99004 The movie was great
2) 0.96320 Just finished reading 'Embeddings in NLP'
3) 0.95858 I just ordered fried chicken 🐣
4) 0.95356 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-dec2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-dec2021
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Twitter December 2021 (RoBERTa-base, 124M)
This is a RoBERTa-base model trained on 123.86M tweets until the end of December 2021.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter December 2021 (RoBERTa-base, 124M)\n\nThis is a RoBERTa-base model trained on 123.86M tweets until the end of December 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Twitter December 2021 (RoBERTa-base, 124M)\n\nThis is a RoBERTa-base model trained on 123.86M tweets until the end of December 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
text-classification
|
transformers
|
# Twitter-roBERTa-base for Emoji prediction
This is a roBERTa-base model trained on ~58M tweets and finetuned for emoji prediction with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='emoji'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Looking forward to Christmas"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Looking forward to Christmas"
# text = preprocess(text)
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) 🎄 0.5457
2) 😊 0.1417
3) 😁 0.0649
4) 😍 0.0395
5) ❤️ 0.03
6) 😜 0.028
7) ✨ 0.0263
8) 😉 0.0237
9) 😂 0.0177
10) 😎 0.0166
11) 😘 0.0143
12) 💕 0.014
13) 💙 0.0076
14) 💜 0.0068
15) 🔥 0.0065
16) 💯 0.004
17) 🇺🇸 0.0037
18) 📷 0.0034
19) ☀ 0.0033
20) 📸 0.0021
```
|
{}
|
cardiffnlp/twitter-roberta-base-emoji
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.12421"
] |
[] |
TAGS
#transformers #pytorch #tf #jax #roberta #text-classification #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter-roBERTa-base for Emoji prediction
This is a roBERTa-base model trained on ~58M tweets and finetuned for emoji prediction with the TweetEval benchmark.
- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020).
- Git Repo: Tweeteval official repository.
## Example of classification
Output:
|
[
"# Twitter-roBERTa-base for Emoji prediction\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for emoji prediction with the TweetEval benchmark.\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.",
"## Example of classification\n\n\n\nOutput:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #text-classification #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter-roBERTa-base for Emoji prediction\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for emoji prediction with the TweetEval benchmark.\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.",
"## Example of classification\n\n\n\nOutput:"
] |
text-classification
|
transformers
|
# Twitter-roBERTa-base for Emotion Recognition
This is a RoBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
<b>New!</b> We just released a new emotion recognition model trained with more emotion types and with a newer RoBERTa-based model.
See [twitter-roberta-base-emotion-multilabel-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion-multilabel-latest) and [TweetNLP](https://github.com/cardiffnlp/tweetnlp) for more details.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='emotion'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Celebrating my promotion 😎"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Celebrating my promotion 😎"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) joy 0.9382
2) optimism 0.0362
3) anger 0.0145
4) sadness 0.0112
```
|
{}
|
cardiffnlp/twitter-roberta-base-emotion
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.12421"
] |
[] |
TAGS
#transformers #pytorch #tf #jax #roberta #text-classification #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter-roBERTa-base for Emotion Recognition
This is a RoBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark.
- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020).
- Git Repo: Tweeteval official repository.
<b>New!</b> We just released a new emotion recognition model trained with more emotion types and with a newer RoBERTa-based model.
See twitter-roberta-base-emotion-multilabel-latest and TweetNLP for more details.
## Example of classification
Output:
|
[
"# Twitter-roBERTa-base for Emotion Recognition\n\nThis is a RoBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark.\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.\n\n<b>New!</b> We just released a new emotion recognition model trained with more emotion types and with a newer RoBERTa-based model. \nSee twitter-roberta-base-emotion-multilabel-latest and TweetNLP for more details.",
"## Example of classification\n\n\n\nOutput:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #text-classification #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter-roBERTa-base for Emotion Recognition\n\nThis is a RoBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark.\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.\n\n<b>New!</b> We just released a new emotion recognition model trained with more emotion types and with a newer RoBERTa-based model. \nSee twitter-roberta-base-emotion-multilabel-latest and TweetNLP for more details.",
"## Example of classification\n\n\n\nOutput:"
] |
text-classification
|
transformers
|
# Twitter-roBERTa-base for Hate Speech Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for hate speech detection with the TweetEval benchmark.
This model is specialized to detect hate speech against women and immigrants.
**NEW!** We have made available a more recent and robust hate speech detection model here: [https://huggingface.co/cardiffnlp/twitter-roberta-base-hate-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-hate-latest)
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='hate'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-hate 0.9168
2) hate 0.0832
```
|
{}
|
cardiffnlp/twitter-roberta-base-hate
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.12421"
] |
[] |
TAGS
#transformers #pytorch #tf #jax #roberta #text-classification #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter-roBERTa-base for Hate Speech Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for hate speech detection with the TweetEval benchmark.
This model is specialized to detect hate speech against women and immigrants.
NEW! We have made available a more recent and robust hate speech detection model here: URL
- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020).
- Git Repo: Tweeteval official repository.
## Example of classification
Output:
|
[
"# Twitter-roBERTa-base for Hate Speech Detection\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for hate speech detection with the TweetEval benchmark. \nThis model is specialized to detect hate speech against women and immigrants.\n\nNEW! We have made available a more recent and robust hate speech detection model here: URL\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.",
"## Example of classification\n\n\n\nOutput:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #text-classification #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter-roBERTa-base for Hate Speech Detection\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for hate speech detection with the TweetEval benchmark. \nThis model is specialized to detect hate speech against women and immigrants.\n\nNEW! We have made available a more recent and robust hate speech detection model here: URL\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.",
"## Example of classification\n\n\n\nOutput:"
] |
text-classification
|
transformers
|
# Twitter-roBERTa-base for Irony Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark.
This model has been integrated into the [TweetNLP Python library](https://github.com/cardiffnlp/tweetnlp/).
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = [
]
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='irony'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Great, it broke the first day..."
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Great, it broke the first day..."
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) irony 0.914
2) non_irony 0.086
```
### Reference
Please cite the [reference paper](https://aclanthology.org/2020.findings-emnlp.148/) if you use this model.
```bibtex
@inproceedings{barbieri-etal-2020-tweeteval,
title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification",
author = "Barbieri, Francesco and
Camacho-Collados, Jose and
Espinosa Anke, Luis and
Neves, Leonardo",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.148",
doi = "10.18653/v1/2020.findings-emnlp.148",
pages = "1644--1650"
}
```
|
{"language": ["en"], "datasets": ["tweet_eval"]}
|
cardiffnlp/twitter-roberta-base-irony
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.12421"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #roberta #text-classification #en #dataset-tweet_eval #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter-roBERTa-base for Irony Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark.
This model has been integrated into the TweetNLP Python library.
- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020).
- Git Repo: Tweeteval official repository.
## Example of classification
Output:
### Reference
Please cite the reference paper if you use this model.
|
[
"# Twitter-roBERTa-base for Irony Detection\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark. \nThis model has integrated into the TweetNLP Python library.\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.",
"## Example of classification\n\n\n\nOutput:",
"### Reference\n\nPlease cite the reference paper if you use this model."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #text-classification #en #dataset-tweet_eval #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter-roBERTa-base for Irony Detection\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark. \nThis model has integrated into the TweetNLP Python library.\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.",
"## Example of classification\n\n\n\nOutput:",
"### Reference\n\nPlease cite the reference paper if you use this model."
] |
fill-mask
|
transformers
|
# Twitter June 2020 (RoBERTa-base, 99M)
This is a RoBERTa-base model trained on 98.66M tweets until the end of June 2020.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
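For example, applied to an arbitrary raw tweet, the function yields the placeholder format used during training:
```python
print(preprocess("Loving the new update from @Twitter 😍 https://t.co/abc"))
# Loving the new update from @user 😍 http
```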
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.52684 not
2) 0.18349 getting
3) 0.07971 fully
4) 0.05598 being
5) 0.02347 self
------------------------------
I keep forgetting to bring a <mask>.
1) 0.13266 mask
2) 0.04859 book
3) 0.04851 laptop
4) 0.03123 pillow
5) 0.02747 blanket
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.35750 The
2) 0.32703 the
3) 0.13048 End
4) 0.02261 this
5) 0.01066 This
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99078 The movie was great
2) 0.96610 Just finished reading 'Embeddings in NLP'
3) 0.96095 What time is the next game?
4) 0.95855 I just ordered fried chicken 🐣
```
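Mean pooling over all token embeddings is, as the code comment notes, a naive choice used here for demonstration. A common alternative, sketched below assuming the same `tokenizer`, `model` and `preprocess` objects defined above, is to take the hidden state of the first (`<s>`) token instead; which pooling works better typically depends on the downstream task.
```python
def get_cls_embedding(text):
    # Alternative pooling sketch: use the hidden state of the first (<s>) token
    # instead of averaging over all tokens.
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    return features[0].detach().cpu().numpy()[0, 0]
```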
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-jun2020
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter June 2020 (RoBERTa-base, 99M)
This is a RoBERTa-base model trained on 98.66M tweets until the end of June 2020.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter June 2020 (RoBERTa-base, 99M)\n\nThis is a RoBERTa-base model trained on 98.66M tweets until the end of June 2020.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter June 2020 (RoBERTa-base, 99M)\n\nThis is a RoBERTa-base model trained on 98.66M tweets until the end of June 2020.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
fill-mask
|
transformers
|
# Twitter June 2021 (RoBERTa-base, 115M)
This is a RoBERTa-base model trained on 115.46M tweets until the end of June 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-jun2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.45169 fully
2) 0.22353 getting
3) 0.18540 not
4) 0.02392 still
5) 0.02231 already
------------------------------
I keep forgetting to bring a <mask>.
1) 0.06331 mask
2) 0.05423 book
3) 0.04505 knife
4) 0.03742 laptop
5) 0.03456 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.69811 the
2) 0.14435 The
3) 0.02396 this
4) 0.00932 Championship
5) 0.00785 End
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-jun2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99014 The movie was great
2) 0.96346 Just finished reading 'Embeddings in NLP'
3) 0.95836 I just ordered fried chicken 🐣
4) 0.95051 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-jun2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-jun2021
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Twitter June 2021 (RoBERTa-base, 115M)
This is a RoBERTa-base model trained on 115.46M tweets until the end of June 2021.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter June 2021 (RoBERTa-base, 115M)\n\nThis is a RoBERTa-base model trained on 115.46M tweets until the end of June 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Twitter June 2021 (RoBERTa-base, 115M)\n\nThis is a RoBERTa-base model trained on 115.46M tweets until the end of June 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
fill-mask
|
transformers
|
# Twitter March 2020 (RoBERTa-base, 94M)
This is a RoBERTa-base model trained on 94.46M tweets until the end of March 2020.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-mar2020"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.57291 not
2) 0.14380 getting
3) 0.06983 self
4) 0.06813 fully
5) 0.02965 being
------------------------------
I keep forgetting to bring a <mask>.
1) 0.05637 book
2) 0.04557 laptop
3) 0.03842 wallet
4) 0.03824 pillow
5) 0.03485 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.59311 the
2) 0.18969 The
3) 0.04493 this
4) 0.02133 End
5) 0.00796 This
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-mar2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.98956 The movie was great
2) 0.96389 Just finished reading 'Embeddings in NLP'
3) 0.95678 I just ordered fried chicken 🐣
4) 0.95588 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-mar2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-mar2020
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Twitter March 2020 (RoBERTa-base, 94M)
This is a RoBERTa-base model trained on 94.46M tweets until the end of March 2020.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter March 2020 (RoBERTa-base, 94M)\n\nThis is a RoBERTa-base model trained on 94.46M tweets until the end of March 2020.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Twitter March 2020 (RoBERTa-base, 94M)\n\nThis is a RoBERTa-base model trained on 94.46M tweets until the end of March 2020.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
fill-mask
|
transformers
|
# Twitter March 2021 (RoBERTa-base, 111M)
This is a RoBERTa-base model trained on 111.26M tweets until the end of March 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.42688 getting
2) 0.30230 not
3) 0.07375 fully
4) 0.03619 already
5) 0.03055 being
------------------------------
I keep forgetting to bring a <mask>.
1) 0.07603 mask
2) 0.04933 book
3) 0.04029 knife
4) 0.03461 laptop
5) 0.03069 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.53945 the
2) 0.27647 The
3) 0.03881 End
4) 0.01711 this
5) 0.00831 Championship
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99106 The movie was great
2) 0.96662 Just finished reading 'Embeddings in NLP'
3) 0.96150 I just ordered fried chicken 🐣
4) 0.95560 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-mar2021
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Twitter March 2021 (RoBERTa-base, 111M)
This is a RoBERTa-base model trained on 111.26M tweets until the end of March 2021.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter March 2021 (RoBERTa-base, 111M)\n\nThis is a RoBERTa-base model trained on 111.26M tweets until the end of March 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Twitter March 2021 (RoBERTa-base, 111M)\n\nThis is a RoBERTa-base model trained on 111.26M tweets until the end of March 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
text-classification
|
transformers
|
# Twitter-roBERTa-base for Offensive Language Identification
This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='offensive'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-offensive 0.9073
2) offensive 0.0927
```
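Several tweets can also be scored in a single forward pass. The following is a minimal batched sketch in PyTorch; it reuses the `tokenizer`, `model` and `labels` objects from the example above and assumes usernames/links have already been replaced as in `preprocess`.
```python
import torch

# Minimal batched sketch: score several (already preprocessed) tweets at once.
batch = ["Good night 😊", "@user shut up, nobody asked you", "What a lovely day"]
encoded = tokenizer(batch, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    logits = model(**encoded).logits
probs = torch.softmax(logits, dim=-1)
for tweet, p in zip(batch, probs):
    print(f"{tweet} -> {labels[int(p.argmax())]} ({float(p.max()):.4f})")
```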
|
{}
|
cardiffnlp/twitter-roberta-base-offensive
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.12421"
] |
[] |
TAGS
#transformers #pytorch #tf #jax #roberta #text-classification #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter-roBERTa-base for Offensive Language Identification
This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.
- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020).
- Git Repo: Tweeteval official repository.
## Example of classification
Output:
|
[
"# Twitter-roBERTa-base for Offensive Language Identification\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.",
"## Example of classification\n\n\n\nOutput:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #text-classification #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter-roBERTa-base for Offensive Language Identification\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.\n\n- Paper: _TweetEval_ benchmark (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.",
"## Example of classification\n\n\n\nOutput:"
] |
text-classification
|
transformers
|
# Twitter-roBERTa-base for Sentiment Analysis
This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see [XLM-T](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)).
- Reference Paper: [_TweetEval_ (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
<b>New!</b> We just released a new sentiment analysis model trained on a larger quantity of more recent tweets. 
See [twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) and [TweetNLP](https://tweetnlp.org) for more details.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) positive 0.8466
2) neutral 0.1458
3) negative 0.0076
```
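For a quick start without downloading the label mapping manually, the model can also be used through the `pipeline` API. This is a minimal sketch; depending on the model configuration, the pipeline may return generic names such as `LABEL_0`/`LABEL_1`/`LABEL_2`, which map to the Negative/Neutral/Positive labels listed above.
```python
from transformers import pipeline

# Minimal sketch: quick sentiment scoring via the text-classification pipeline.
# LABEL_0 -> Negative, LABEL_1 -> Neutral, LABEL_2 -> Positive (see the label list above).
sentiment_pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-sentiment")
print(sentiment_pipe("Good night 😊"))
```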
### BibTeX entry and citation info
Please cite the [reference paper](https://aclanthology.org/2020.findings-emnlp.148/) if you use this model.
```bibtex
@inproceedings{barbieri-etal-2020-tweeteval,
title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification",
author = "Barbieri, Francesco and
Camacho-Collados, Jose and
Espinosa Anke, Luis and
Neves, Leonardo",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.148",
doi = "10.18653/v1/2020.findings-emnlp.148",
pages = "1644--1650"
}
```
|
{"language": ["en"], "datasets": ["tweet_eval"]}
|
cardiffnlp/twitter-roberta-base-sentiment
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.12421"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #roberta #text-classification #en #dataset-tweet_eval #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter-roBERTa-base for Sentiment Analysis
This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see XLM-T).
- Reference Paper: _TweetEval_ (Findings of EMNLP 2020).
- Git Repo: Tweeteval official repository.
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
<b>New!</b> We just released a new sentiment analysis model trained on a larger quantity of more recent tweets. 
See twitter-roberta-base-sentiment-latest and TweetNLP for more details.
## Example of classification
Output:
### BibTeX entry and citation info
Please cite the reference paper if you use this model.
|
[
"# Twitter-roBERTa-base for Sentiment Analysis\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see XLM-T).\n\n- Reference Paper: _TweetEval_ (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.\n\n<b>Labels</b>: \n0 -> Negative;\n1 -> Neutral;\n2 -> Positive\n\n<b>New!</b> We just released a new sentiment analysis model trained on more recent and a larger quantity of tweets. \nSee twitter-roberta-base-sentiment-latest and TweetNLP for more details.",
"## Example of classification\n\n\n\nOutput:",
"### BibTeX entry and citation info\n\nPlease cite the reference paper if you use this model."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #text-classification #en #dataset-tweet_eval #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter-roBERTa-base for Sentiment Analysis\n\nThis is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see XLM-T).\n\n- Reference Paper: _TweetEval_ (Findings of EMNLP 2020). \n- Git Repo: Tweeteval official repository.\n\n<b>Labels</b>: \n0 -> Negative;\n1 -> Neutral;\n2 -> Positive\n\n<b>New!</b> We just released a new sentiment analysis model trained on more recent and a larger quantity of tweets. \nSee twitter-roberta-base-sentiment-latest and TweetNLP for more details.",
"## Example of classification\n\n\n\nOutput:",
"### BibTeX entry and citation info\n\nPlease cite the reference paper if you use this model."
] |
fill-mask
|
transformers
|
# Twitter September 2020 (RoBERTa-base, 103M)
This is a RoBERTa-base model trained on 102.86M tweets until the end of September 2020.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-sep2020"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.55215 not
2) 0.16466 getting
3) 0.08991 fully
4) 0.05542 being
5) 0.01733 still
------------------------------
I keep forgetting to bring a <mask>.
1) 0.18145 mask
2) 0.04476 book
3) 0.03751 knife
4) 0.03713 laptop
5) 0.02873 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.53243 the
2) 0.24435 The
3) 0.04717 End
4) 0.02421 this
5) 0.00958 Championship
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-sep2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99045 The movie was great
2) 0.96650 Just finished reading 'Embeddings in NLP'
3) 0.95947 I just ordered fried chicken 🐣
4) 0.95707 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-sep2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-sep2020
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Twitter September 2020 (RoBERTa-base, 103M)
This is a RoBERTa-base model trained on 102.86M tweets until the end of September 2020.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter September 2020 (RoBERTa-base, 103M)\n\nThis is a RoBERTa-base model trained on 102.86M tweets until the end of September 2020.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Twitter September 2020 (RoBERTa-base, 103M)\n\nThis is a RoBERTa-base model trained on 102.86M tweets until the end of September 2020.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
fill-mask
|
transformers
|
# Twitter September 2021 (RoBERTa-base, 120M)
This is a RoBERTa-base model trained on 119.66M tweets until the end of September 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.39329 fully
2) 0.26694 getting
3) 0.17438 not
4) 0.03422 still
5) 0.01845 all
------------------------------
I keep forgetting to bring a <mask>.
1) 0.06773 mask
2) 0.04548 book
3) 0.03826 charger
4) 0.03506 backpack
5) 0.02997 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.63009 the
2) 0.16154 The
3) 0.02110 this
4) 0.01903 End
5) 0.00810 Championship
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99022 The movie was great
2) 0.96274 Just finished reading 'Embeddings in NLP'
3) 0.96006 I just ordered fried chicken 🐣
4) 0.95725 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
{"language": "en", "license": "mit", "tags": ["timelms", "twitter"], "datasets": ["twitter-api"]}
|
cardiffnlp/twitter-roberta-base-sep2021
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.03829"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Twitter September 2021 (RoBERTa-base, 120M)
This is a RoBERTa-base model trained on 119.66M tweets until the end of September 2021.
More details and performance scores are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained until different periods, check this table.
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed here.
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
|
[
"# Twitter September 2021 (RoBERTa-base, 120M)\n\nThis is a RoBERTa-base model trained on 119.66M tweets until the end of September 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #timelms #twitter #en #dataset-twitter-api #arxiv-2202.03829 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Twitter September 2021 (RoBERTa-base, 120M)\n\nThis is a RoBERTa-base model trained on 119.66M tweets until the end of September 2021.\nMore details and performance scores are available in the TimeLMs paper.\n\nBelow, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.\n\nFor other models trained until different periods, check this table.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".\nIf you're interested in retaining verified users which were also retained during training, you may keep the users listed here.",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction"
] |
fill-mask
|
transformers
|
# Twitter-roBERTa-base
This is a RoBERTa-base model trained on ~58M tweets on top of the original RoBERTa-base checkpoint, as described and evaluated in the [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
To evaluate this and other LMs on Twitter-specific data, please refer to the [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
```python
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def print_candidates():
for i in range(5):
token = tokenizer.decode(candidates[i]['token'])
score = np.round(candidates[i]['score'], 4)
print(f"{i+1}) {token} {score}")
texts = [
"I am so <mask> 😊",
"I am so <mask> 😢"
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
print_candidates()
```
Output:
```
------------------------------
I am so <mask> 😊
1) happy 0.402
2) excited 0.1441
3) proud 0.143
4) grateful 0.0669
5) blessed 0.0334
------------------------------
I am so <mask> 😢
1) sad 0.2641
2) sorry 0.1605
3) tired 0.138
4) sick 0.0278
5) hungry 0.0232
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import defaultdict
MODEL = "cardiffnlp/twitter-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
def get_embedding(text):
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    features = features[0].detach().cpu().numpy()
    features_mean = np.mean(features[0], axis=0)
    return features_mean
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
d = defaultdict(int)
for tweet in tweets:
sim = 1-cosine(get_embedding(query),get_embedding(tweet))
d[tweet] = sim
print('Most similar to: ',query)
print('----------------------------------------')
for idx,x in enumerate(sorted(d.items(), key=lambda x:x[1], reverse=True)):
print(idx+1,x[0])
```
Output:
```
Most similar to: The book was awesome
----------------------------------------
1 The movie was great
2 Just finished reading 'Embeddings in NLP'
3 I just ordered fried chicken 🐣
4 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
### BibTeX entry and citation info
Please cite the [reference paper](https://aclanthology.org/2020.findings-emnlp.148/) if you use this model.
```bibtex
@inproceedings{barbieri-etal-2020-tweeteval,
title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification",
author = "Barbieri, Francesco and
Camacho-Collados, Jose and
Espinosa Anke, Luis and
Neves, Leonardo",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.148",
doi = "10.18653/v1/2020.findings-emnlp.148",
pages = "1644--1650"
}
```
|
{}
|
cardiffnlp/twitter-roberta-base
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.12421"
] |
[] |
TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter-roBERTa-base
This is a RoBERTa-base model trained on ~58M tweets on top of the original RoBERTa-base checkpoint, as described and evaluated in the _TweetEval_ benchmark (Findings of EMNLP 2020).
To evaluate this and other LMs on Twitter-specific data, please refer to the Tweeteval official repository.
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
## Example Masked Language Model
Output:
## Example Tweet Embeddings
Output:
## Example Feature Extraction
### BibTeX entry and citation info
Please cite the reference paper if you use this model.
|
[
"# Twitter-roBERTa-base\n\nThis is a RoBERTa-base model trained on ~58M tweets on top of the original RoBERTa-base checkpoint, as described and evaluated in the _TweetEval_ benchmark (Findings of EMNLP 2020). \nTo evaluate this and other LMs on Twitter-specific data, please refer to the Tweeteval official repository.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction",
"### BibTeX entry and citation info\n\nPlease cite the reference paper if you use this model."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #arxiv-2010.12421 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter-roBERTa-base\n\nThis is a RoBERTa-base model trained on ~58M tweets on top of the original RoBERTa-base checkpoint, as described and evaluated in the _TweetEval_ benchmark (Findings of EMNLP 2020). \nTo evaluate this and other LMs on Twitter-specific data, please refer to the Tweeteval official repository.",
"## Preprocess Text \nReplace usernames and links for placeholders: \"@user\" and \"http\".",
"## Example Masked Language Model \n\n\n\nOutput:",
"## Example Tweet Embeddings\n\nOutput:",
"## Example Feature Extraction",
"### BibTeX entry and citation info\n\nPlease cite the reference paper if you use this model."
] |
text-classification
|
transformers
|
# twitter-XLM-roBERTa-base for Sentiment Analysis
This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and finetuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but it can be used for more languages (see paper for details).
- Paper: [XLM-T: A Multilingual Language Model Toolkit for Twitter](https://arxiv.org/abs/2104.12250).
- Git Repo: [XLM-T official repository](https://github.com/cardiffnlp/xlm-t).
This model has been integrated into the [TweetNLP library](https://github.com/cardiffnlp/tweetnlp).
## Example Pipeline
```python
from transformers import pipeline
model_path = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("T'estimo!")
```
```
[{'label': 'Positive', 'score': 0.6600581407546997}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = f"cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) Positive 0.7673
2) Neutral 0.2015
3) Negative 0.0313
```
### Reference
```
@inproceedings{barbieri-etal-2022-xlm,
title = "{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond",
author = "Barbieri, Francesco and
Espinosa Anke, Luis and
Camacho-Collados, Jose",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.27",
pages = "258--266"
}
```
|
{"language": "multilingual", "widget": [{"text": "\ud83e\udd17"}, {"text": "T'estimo! \u2764\ufe0f"}, {"text": "I love you!"}, {"text": "I hate you \ud83e\udd2e"}, {"text": "Mahal kita!"}, {"text": "\uc0ac\ub791\ud574!"}, {"text": "\ub09c \ub108\uac00 \uc2eb\uc5b4"}, {"text": "\ud83d\ude0d\ud83d\ude0d\ud83d\ude0d"}]}
|
cardiffnlp/twitter-xlm-roberta-base-sentiment
| null |
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"multilingual",
"arxiv:2104.12250",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.12250"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #tf #xlm-roberta #text-classification #multilingual #arxiv-2104.12250 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# twitter-XLM-roBERTa-base for Sentiment Analysis
This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and finetuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but it can be used for more languages (see paper for details).
- Paper: XLM-T: A Multilingual Language Model Toolkit for Twitter.
- Git Repo: XLM-T official repository.
This model has been integrated into the TweetNLP library.
## Example Pipeline
## Full classification example
Output:
### Reference
|
[
"# twitter-XLM-roBERTa-base for Sentiment Analysis\n\nThis is a multilingual XLM-roBERTa-base model trained on ~198M tweets and finetuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but it can be used for more languages (see paper for details).\n\n- Paper: XLM-T: A Multilingual Language Model Toolkit for Twitter. \n- Git Repo: XLM-T official repository.\n\nThis model has been integrated into the TweetNLP library.",
"## Example Pipeline",
"## Full classification example\n\n\n\nOutput:",
"### Reference"
] |
[
"TAGS\n#transformers #pytorch #tf #xlm-roberta #text-classification #multilingual #arxiv-2104.12250 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# twitter-XLM-roBERTa-base for Sentiment Analysis\n\nThis is a multilingual XLM-roBERTa-base model trained on ~198M tweets and finetuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but it can be used for more languages (see paper for details).\n\n- Paper: XLM-T: A Multilingual Language Model Toolkit for Twitter. \n- Git Repo: XLM-T official repository.\n\nThis model has been integrated into the TweetNLP library.",
"## Example Pipeline",
"## Full classification example\n\n\n\nOutput:",
"### Reference"
] |
fill-mask
|
transformers
|
# Twitter-XLM-Roberta-base
This is a XLM-Roberta-base model trained on ~198M multilingual tweets, described and evaluated in the [reference paper](https://arxiv.org/abs/2104.12250). To evaluate this and other LMs on Twitter-specific data, please refer to the [main repository](https://github.com/cardiffnlp/xlm-t). A usage example is provided below.
## Computing tweet similarity
```python
from collections import defaultdict
import numpy as np
from scipy.spatial.distance import cosine
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and model for this checkpoint
MODEL = "cardiffnlp/twitter-xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
def get_embedding(text):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().numpy()
features_mean = np.mean(features[0], axis=0)
return features_mean
query = "Acabo de pedir pollo frito 🐣" #spanish
tweets = ["We had a great time! ⚽️", # english
"We hebben een geweldige tijd gehad! ⛩", # dutch
"Nous avons passé un bon moment! 🎥", # french
"Ci siamo divertiti! 🍝"] # italian
d = defaultdict(int)
for tweet in tweets:
sim = 1-cosine(get_embedding(query),get_embedding(tweet))
d[tweet] = sim
print('Most similar to: ',query)
print('----------------------------------------')
for idx,x in enumerate(sorted(d.items(), key=lambda x:x[1], reverse=True)):
print(idx+1,x[0])
```
```
Most similar to: Acabo de pedir pollo frito 🐣
----------------------------------------
1 Ci siamo divertiti! 🍝
2 Nous avons passé un bon moment! 🎥
3 We had a great time! ⚽️
4 We hebben een geweldige tijd gehad! ⛩
```
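Since this checkpoint is a masked language model, it can also be queried directly with the `fill-mask` pipeline. The snippet below is a minimal sketch that is not part of the original card; the example prompt is taken from the widget examples in this card's metadata.
```python
from transformers import pipeline

MODEL = "cardiffnlp/twitter-xlm-roberta-base"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)

# Prompt taken from the widget examples of this model card
for prediction in fill_mask("Hasta <mask> 👋!"):
    print(f"{prediction['token_str']}\t{prediction['score']:.4f}")
```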
### BibTeX entry and citation info
Please cite the [reference paper](https://aclanthology.org/2022.lrec-1.27/) if you use this model.
```bibtex
@inproceedings{barbieri-etal-2022-xlm,
title = "{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond",
author = "Barbieri, Francesco and
Espinosa Anke, Luis and
Camacho-Collados, Jose",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.27",
pages = "258--266",
abstract = "Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signals. In this paper, we introduce XLM-T, a model to train and evaluate multilingual language models in Twitter. In this paper we provide: (1) a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020) model pre-trained on millions of tweets in over thirty languages, alongside starter code to subsequently fine-tune on a target task; and (2) a set of unified sentiment analysis Twitter datasets in eight different languages and a XLM-T model trained on this dataset.",
}
```
|
{"language": "multilingual", "widget": [{"text": "\ud83e\udd17\ud83e\udd17\ud83e\udd17<mask>"}, {"text": "\ud83d\udd25The goal of life is <mask> . \ud83d\udd25"}, {"text": "Il segreto della vita \u00e8 l\u2019<mask> . \u2764\ufe0f"}, {"text": "Hasta <mask> \ud83d\udc4b!"}]}
|
cardiffnlp/twitter-xlm-roberta-base
| null |
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"multilingual",
"arxiv:2104.12250",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.12250"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #tf #xlm-roberta #fill-mask #multilingual #arxiv-2104.12250 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Twitter-XLM-Roberta-base
This is a XLM-Roberta-base model trained on ~198M multilingual tweets, described and evaluated in the reference paper. To evaluate this and other LMs on Twitter-specific data, please refer to the main repository. A usage example is provided below.
## Computing tweet similarity
### BibTeX entry and citation info
Please cite the reference paper if you use this model.
'''bibtex
@inproceedings{barbieri-etal-2022-xlm,
title = "{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond",
author = "Barbieri, Francesco and
Espinosa Anke, Luis and
Camacho-Collados, Jose",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "URL
pages = "258--266",
abstract = "Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signals. In this paper, we introduce XLM-T, a model to train and evaluate multilingual language models in Twitter. In this paper we provide: (1) a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020) model pre-trained on millions of tweets in over thirty languages, alongside starter code to subsequently fine-tune on a target task; and (2) a set of unified sentiment analysis Twitter datasets in eight different languages and a XLM-T model trained on this dataset.",
}
|
[
"# Twitter-XLM-Roberta-base\nThis is a XLM-Roberta-base model trained on ~198M multilingual tweets, described and evaluated in the reference paper. To evaluate this and other LMs on Twitter-specific data, please refer to the main repository. A usage example is provided below.",
"## Computing tweet similarity",
"### BibTeX entry and citation info\n\nPlease cite the reference paper if you use this model.\n\n'''bibtex\n@inproceedings{barbieri-etal-2022-xlm,\n title = \"{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond\",\n author = \"Barbieri, Francesco and\n Espinosa Anke, Luis and\n Camacho-Collados, Jose\",\n booktitle = \"Proceedings of the Thirteenth Language Resources and Evaluation Conference\",\n month = jun,\n year = \"2022\",\n address = \"Marseille, France\",\n publisher = \"European Language Resources Association\",\n url = \"URL\n pages = \"258--266\",\n abstract = \"Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signals. In this paper, we introduce XLM-T, a model to train and evaluate multilingual language models in Twitter. In this paper we provide: (1) a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020) model pre-trained on millions of tweets in over thirty languages, alongside starter code to subsequently fine-tune on a target task; and (2) a set of unified sentiment analysis Twitter datasets in eight different languages and a XLM-T model trained on this dataset.\",\n}"
] |
[
"TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #multilingual #arxiv-2104.12250 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Twitter-XLM-Roberta-base\nThis is a XLM-Roberta-base model trained on ~198M multilingual tweets, described and evaluated in the reference paper. To evaluate this and other LMs on Twitter-specific data, please refer to the main repository. A usage example is provided below.",
"## Computing tweet similarity",
"### BibTeX entry and citation info\n\nPlease cite the reference paper if you use this model.\n\n'''bibtex\n@inproceedings{barbieri-etal-2022-xlm,\n title = \"{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond\",\n author = \"Barbieri, Francesco and\n Espinosa Anke, Luis and\n Camacho-Collados, Jose\",\n booktitle = \"Proceedings of the Thirteenth Language Resources and Evaluation Conference\",\n month = jun,\n year = \"2022\",\n address = \"Marseille, France\",\n publisher = \"European Language Resources Association\",\n url = \"URL\n pages = \"258--266\",\n abstract = \"Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signals. In this paper, we introduce XLM-T, a model to train and evaluate multilingual language models in Twitter. In this paper we provide: (1) a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020) model pre-trained on millions of tweets in over thirty languages, alongside starter code to subsequently fine-tune on a target task; and (2) a set of unified sentiment analysis Twitter datasets in eight different languages and a XLM-T model trained on this dataset.\",\n}"
] |
token-classification
|
transformers
|
Med Labs Cariai
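The card itself gives no usage details. As a generic, hedged sketch (standard Transformers API only, nothing confirmed by this card), the checkpoint could be loaded with the token-classification pipeline; the label set it predicts comes from the model's own config.
```python
from transformers import pipeline

# Generic sketch: the label scheme and intended input language are not documented in this card.
ner = pipeline("token-classification", model="cariai/medslabs")
print(ner("Hemoglobin 13.5 g/dL"))
```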
|
{}
|
cariai/medslabs
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
Med Labs Cariai
|
[] |
[
"TAGS\n#transformers #pytorch #jax #roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
reinforcement-learning
|
stable-baselines3
|
# TODO: Fill this model card
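Until the card is filled in, here is a hypothetical usage sketch. It assumes the repository hosts a standard stable-baselines3 PPO checkpoint for `LunarLander-v2` saved as a zip file; the `filename` below is a guess rather than something stated in this card, and the snippet uses the classic `gym` (<0.26) step API.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename -- adjust to the actual file hosted in the repository.
checkpoint = load_from_hub(
    repo_id="carlosaguayo/Simonini-ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```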
|
{"tags": ["deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"]}
|
carlosaguayo/Simonini-ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #deep-reinforcement-learning #reinforcement-learning #region-us
|
# TODO: Fill this model card
|
[
"# TODO: Fill this model card"
] |
[
"TAGS\n#stable-baselines3 #deep-reinforcement-learning #reinforcement-learning #region-us \n",
"# TODO: Fill this model card"
] |
image-classification
|
keras
|
# Classify Cats and Dogs
VGG16 fine tuned to classify cats and dogs
Notebook
https://www.kaggle.com/carlosaguayo/cats-vs-dogs-transfer-learning-pre-trained-vgg16
### How to use
Here is how to use this model to classify an image as a cat or dog:
```python
from skimage import io
import cv2
import matplotlib.pyplot as plt
from huggingface_hub import from_pretrained_keras
%matplotlib inline
ROWS, COLS = 150, 150
model = from_pretrained_keras("carlosaguayo/cats_vs_dogs")
img_url = 'https://upload.wikimedia.org/wikipedia/commons/0/0c/About_The_Dog.jpg'
# img_url = 'https://upload.wikimedia.org/wikipedia/commons/c/c7/Tabby_cat_with_blue_eyes-3336579.jpg'
img = io.imread(img_url)
img = cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)
img = img / 255.0
img = img.reshape(1,ROWS,COLS,3)
prediction = model.predict(img)[0][0]
if prediction >= 0.5:
print('I am {:.2%} sure this is a Cat'.format(prediction))
else:
print('I am {:.2%} sure this is a Dog'.format(1-prediction))
plt.imshow(img[0], 'Blues')
plt.axis("off")
plt.show()
```
|
{"tags": ["image-classification"], "widget": [{"src": "https://upload.wikimedia.org/wikipedia/commons/0/0c/About_The_Dog.jpg", "example_title": "Dog-1"}, {"src": "https://yt3.ggpht.com/ytc/AKedOLRvxGYSdEHqu0X4EYcJ2kq7BttRKBNpfwdHJf3FSg=s900-c-k-c0x00ffffff-no-rj", "example_title": "Dog-2"}, {"src": "https://upload.wikimedia.org/wikipedia/commons/c/c7/Tabby_cat_with_blue_eyes-3336579.jpg", "example_title": "Cat-1"}, {"src": "https://pixabay.com/get/g31cf3b945cf9b9144eb6c1ecf514b4db668875b75d0c615e0330aec74bef5edde11567ef4a6f5fdb61a828b8086a39d3a0e72fb326d78467786dcdde4e6fa23c5c4c309d0abc089a8663809c175aee22_1920.jpg", "example_title": "Cat-2"}]}
|
carlosaguayo/cats_vs_dogs
| null |
[
"keras",
"image-classification",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #image-classification #has_space #region-us
|
# Classify Cats and Dogs
VGG16 fine tuned to classify cats and dogs
Notebook
URL
### How to use
Here is how to use this model to classify an image as a cat or dog:
|
[
"# Classify Cats and Dogs\n\nVGG16 fine tuned to classify cats and dogs\n\nNotebook\n\nURL",
"### How to use\n\nHere is how to use this model to classify an image as a cat or dog:"
] |
[
"TAGS\n#keras #image-classification #has_space #region-us \n",
"# Classify Cats and Dogs\n\nVGG16 fine tuned to classify cats and dogs\n\nNotebook\n\nURL",
"### How to use\n\nHere is how to use this model to classify an image as a cat or dog:"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1689
- Accuracy: 0.9295
- F1: 0.9300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2853 | 1.0 | 250 | 0.1975 | 0.9235 | 0.9233 |
| 0.1568 | 2.0 | 500 | 0.1689 | 0.9295 | 0.9300 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
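As a usage sketch (not part of the auto-generated card), the fine-tuned checkpoint can be loaded with the standard text-classification pipeline. The label names returned depend on the `id2label` mapping saved with the model; the emotion dataset itself has six classes (sadness, joy, love, anger, fear, surprise).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="carlosaguayo/distilbert-base-uncased-finetuned-emotion",
)

# Label names come from the id2label mapping stored in the model config.
print(classifier("I can't wait to see you again!"))
```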
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9295, "name": "Accuracy"}, {"type": "f1", "value": 0.9299984897610097, "name": "F1"}]}]}]}
|
carlosaguayo/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1689
* Accuracy: 0.9295
* F1: 0.9300
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7197 | 0.54 | 500 | 1.4842 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
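As a usage sketch (not part of the auto-generated card), the checkpoint fine-tuned on SAMSum can summarize short dialogues through the standard summarization pipeline:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="carlosaguayo/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place?\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=60, min_length=5, do_sample=False))
```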
|
{"tags": ["generated_from_trainer"], "datasets": ["samsum"], "model-index": [{"name": "pegasus-samsum", "results": []}]}
|
carlosaguayo/pegasus-samsum
| null |
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #dataset-samsum #autotrain_compatible #endpoints_compatible #region-us
|
pegasus-samsum
==============
This model is a fine-tuned version of google/pegasus-cnn\_dailymail on the samsum dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4842
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #dataset-samsum #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Harry Potter Bot
|
{"tags": ["conversational"]}
|
cartyparty/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter Bot
|
[
"# Harry Potter Bot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter Bot"
] |
text-generation
|
transformers
|
# Iteration 1
|
{"tags": ["conversational"]}
|
cartyparty/DialoGPT-small-iteration1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Iteration 1
|
[
"# Iteration 1"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Iteration 1"
] |
text-generation
|
transformers
|
# inspired by greentext
|
{"tags": ["conversational"]}
|
cartyparty/DialoGPT-small-nerdherd
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# inspired by greentext
|
[
"# inspired by greentext"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# inspired by greentext"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-ner-tcp-ca
This model is a fine-tuned version of [cassandra-themis/camembert-base-juri](https://huggingface.co/cassandra-themis/camembert-base-juri) on the cassandra-themis/ner-tcp-ca full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
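As a usage sketch (not part of the auto-generated card), the checkpoint can be queried with the token-classification pipeline; the entity labels come from the model's own config, and the French court-decision excerpts shown in the widget examples are the kind of input it was trained on.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cassandra-themis/test_tcp_ca",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

# Short excerpt in the style of the widget examples above
text = "Monsieur John X..., né le 17 Mars 1973 à MARSEILLE, représenté par la SCP COHEN - GUEDJ"
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```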
|
{"tags": ["generated_from_trainer"], "datasets": ["cassandra-themis/ner-tcp-ca"], "widget": [{"text": "R\u00c9PUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nCOUR D'APPEL D'AIX EN PROVENCE\n\n\n\n10e Chambre\n\n\n\nARR\u00caT MIXTE\n\nDU 14 JUIN 2006\n\n\n\nNo/2006\n\n\n\n\n\nR\u00f4le No 99/09967\n\n\n\n\n\nJohn X...\n\nArlette Y... \u00e9pouse X...\n\nPatrick X...\n\n\n\n\n\nC/\n\n\n\nFONDS DE GARANTIE DES VICTIMES D'ACTES DE TERRORISME ET D'AUTRES INFRACTIONS\n\n\n\n\n\nD\u00e9cision d\u00e9f\u00e9r\u00e9e \u00e0 la Cour :\n\n\n\nD\u00e9cision rendue le 20 Avril 1999 par la Commission d'Indemnisation des Victimes d'Infractions P\u00e9nales pr\u00e8s le Tribunal de Grande Instance de MARSEILLE, enregistr\u00e9e\n\nau r\u00e9pertoire g\u00e9n\u00e9ral sous le no 98/00491.\n\n\n\n\n\nAPPELANTS\n\n\n\nMonsieur John X..., d\u00e9c\u00e9d\u00e9\n\nn\u00e9 le 17 Mars 1973 \u00e0 MARSEILLE (13000), demeurant ... - 13000 MARSEILLE\n\nrepr\u00e9sent\u00e9 par la SCP COHEN - GUEDJ, avou\u00e9s \u00e0 la Cour\n\n\n\nMadame Arlette Y... \u00e9pouse X...\n\nprise es qualit\u00e9 d'h\u00e9riti\u00e8re de John X..., d\u00e9c\u00e9d\u00e9 le 25/11/2001\n\nn\u00e9e le 18 Ao\u00fbt 1951 \u00e0 SAINT JEAN DE COLE (DORDOGNE), ... - 13012 MARSEILLE\n\nrepr\u00e9sent\u00e9e par la SCP COHEN - GUEDJ, avou\u00e9s \u00e0 la Cour,\n\nassist\u00e9e de la SELARL BAFFERT - FRUCTUS ET ASSOCIES, avocats au barreau de MARSEILLE\n\n\n\nMonsieur Patrick X...\n\npris en sa qualit\u00e9 d'h\u00e9ritier de John X..., d\u00e9c\u00e9d\u00e9 le 25/11/2001\n\nn\u00e9 le 12 Juin 1951 \u00e0 MARSEILLE (BOUCHES DU RH\u00d4NE), demeurant ... - 13012 MARSEILLE\n\nrepr\u00e9sent\u00e9 par la SCP COHEN - GUEDJ, avou\u00e9s \u00e0 la Cour,\n\nassist\u00e9 de la SELARL BAFFERT - FRUCTUS ET ASSOCIES, avocats au barreau de MARSEILLE\n\n\n\n\n\nINTIME\n\n\n\nFONDS DE GARANTIE DES VICTIMES D'ACTES DE TERRORISME ET D'AUTRES INFRACTIONS article L 422.1 du Code des Assurances, g\u00e9r\u00e9 par le Fonds de Garantie contre les Accidents de Circulation et de Chasse, dont le si\u00e8ge social est sis 64 rue Defrance 94300 VINCENNES, 39 bd Vincent Delpuech - les Bureaux du M\u00e9diterran\u00e9e - 13255 MARSEILLE\n\nrepr\u00e9sent\u00e9 par la SCP GIACOMETTI - DESOMBRE, avou\u00e9s \u00e0 la Cour,\n\nassist\u00e9 de Me Alain TUILLIER, avocat au barreau d'AIX EN PROVENCE\n\n\n\n\n\nCOMPOSITION DE LA COUR\n\n\n\nL'affaire a \u00e9t\u00e9 d\u00e9battue le 12 Avril 2006 en audience publique. 
Conform\u00e9ment \u00e0 l'article 785 du Nouveau Code de Proc\u00e9dure Civile, Mr RAJBAUT, Conseiller a fait un rapport oral de l'affaire \u00e0 l'audience avant les plaidoiries.\n\n\n\nLa Cour \u00e9tait compos\u00e9e de :\n\n\n\nMadame Elisabeth VIEUX, Pr\u00e9sidente\n\nMonsieur Benjamin RAJBAUT, Conseiller\n\nMadame Dominique KLOTZ, Conseiller\n\n\n\n\n\nqui en ont d\u00e9lib\u00e9r\u00e9\n\n\n\nGreffier lors des d\u00e9bats : Madame Genevi\u00e8ve JAUFFRES.\n\n\n\nLes parties ont \u00e9t\u00e9 avis\u00e9es que le prononc\u00e9 public de la d\u00e9cision aura lieu par mise \u00e0 disposition au greffe le 14 Juin 2006..\n\n\n\nMINIST\u00c8RE PUBLIC :\n\nAuquel l'affaire a \u00e9t\u00e9 r\u00e9guli\u00e8rement communiqu\u00e9e.\n\n", "example_title": "Exemple 1"}, {"text": "R\u00c9PUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nPhD / BLL\n\n\n\nNum\u00e9ro / 06\n\n\n\nCOUR D'APPEL DE PAU\n\n2\u00e8me CH-Section 1\n\n\n\nARR\u00caT DU 19 janvier 2006\n\n\n\nDossier : 04 / 03078\n\n\n\nNature affaire :\n\n\n\nAutres demandes relatives \u00e0 un bail d'habitation ou \u00e0 un bail professionnel\n\n\n\nAffaire :\n\n\n\nBerthe X... \u00e9pouse Y...\n\n\n\nC /\n\n\n\nDominique Z...,\n\nCorinne X...\n\n\n\nR\u00c9PUBLIQUE FRAN\u00c7AISE\n\n\n\nAU NOM DU PEUPLE FRAN\u00c7AIS\n\n\n\nA R R \u00ca T\n\n\n\nprononc\u00e9 par Monsieur GRANGER, conseiller,\n\nen vertu de l'article 452 du Nouveau Code de Proc\u00e9dure Civile,\n\n\n\nassist\u00e9 de Monsieur LASBIATES, Greffier,\n\n\n\n\u00e0 l'audience publique du 19 janvier 2006\n\ndate indiqu\u00e9e \u00e0 l'issue des d\u00e9bats.\n\n\n\n* * * * *\n\n\n\nAPRES D\u00c9BATS\n\n\n\n\u00e0 l'audience publique tenue le 24 Novembre 2005, devant :\n\n\n\nMonsieur DARRACQ, magistrat charg\u00e9 du rapport,\n\n\n\nassist\u00e9 de Monsieur LASBIATES, greffier pr\u00e9sent \u00e0 l'appel des causes,\n\n\n\nMonsieur DARRACQ, en application des articles 786 et 910 du Nouveau Code de Proc\u00e9dure Civile et \u00e0 d\u00e9faut d'opposition a tenu l'audience pour entendre les plaidoiries et en a rendu compte \u00e0 la Cour compos\u00e9e de :\n\n\n\nMonsieur PETRIAT, Conseiller faisant fonction de Pr\u00e9sident, par suite de l'emp\u00eachement l\u00e9gitime de tous les titulaires et des magistrats d\u00e9sign\u00e9s par ordonnance et se trouvant le magistrat du si\u00e8ge pr\u00e9sent le plus ancien dans l'ordre de nomination \u00e0 la Cour\n\n\n\nMonsieur GRANGER, Conseiller\n\nMonsieur DARRACQ, Vice-Pr\u00e9sident plac\u00e9, d\u00e9sign\u00e9 par ordonnance du 12 septembre 2005\n\n\n\nqui en ont d\u00e9lib\u00e9r\u00e9 conform\u00e9ment \u00e0 la loi.\n\n\n\ndans l'affaire opposant :\n\n\n\nAPPELANTE :\n\n\n\nMadame Berthe X... \u00e9pouse Y...\n\nn\u00e9e le 13 Juin 1942 \u00e0 ARCANGUES (64)\n\nde nationalit\u00e9 fran\u00e7aise\n\n...\n\n...\n\n12500 ESPALION\n\n\n\nrepr\u00e9sent\u00e9e par la S. C. P. LONGIN C. ET P., avou\u00e9s \u00e0 la Cour\n\nassist\u00e9e de Ma\u00eetre BLAZY-ANDRIEU, avocat au barreau de BAYONNE\n\n\n\nINTIMES :\n\n\n\nMonsieur Dominique Camille Z...\n\nn\u00e9 le 13 juin 1954 \u00e0 Chatou (78)\n\n...\n\n...\n\n64200 BIARRITZ\n\n\n\nMadame Corinne X...\n\nn\u00e9e le 3 juillet 1969 \u00e0 Bidart (64)\n\n...\n\n...\n\n64200 BIARRITZ\n\n\n\n(b\u00e9n\u00e9ficient d'une aide juridictionnelle Totale num\u00e9ro 2004 / 006320 du 24 / 02 / 2005 accord\u00e9e par le bureau d'aide juridictionnelle de PAU)\n\n\n\nrepr\u00e9sent\u00e9s par la S. C. P. F. PIAULT / M. 
LACRAMPE-CARRAZE, avou\u00e9s \u00e0 la Cour\n\nassist\u00e9s de Ma\u00eetre FOURGEAU, avocat au barreau de BAYONNE\n\n\n\nsur appel de la d\u00e9cision\n\nen date du 24 AOUT 2004\n\nrendue par le TRIBUNAL D'INSTANCE DE BIARRITZ", "example_title": "Exemple 2"}, {"text": "R\u00c9PUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nCOUR D'APPEL DE DOUAI\n\n\n\nTROISI\u00c8ME CHAMBRE\n\n\n\nARR\u00caT DU 26 / 01 / 2006\n\n\n\nBAUX RURAUX\n\n\n\nNo RG : 05 / 04854 jonction avec dossier RG No 05 / 04858\n\n\n\nTribunal paritaire des baux ruraux d'AVESNES SUR HELPE\n\ndu 27 Juillet 2005 jugements no 99 / 000010 et 04 / 000006\n\n\n\nAPPELANTE\n\nMadame Marie-No\u00eblle X... \u00e9pouse Y...\n\nDemeurant\n\n...\n\n59138 PONT SUR SAMBRE\n\n\n\nrepr\u00e9sent\u00e9e par Me STERLILN de la SCP JP STERLIN-C STERLIN, avocats au barreau d'AMIENS\n\n\n\nINTIM\u00c9S\n\nMonsieur Michel Z...\n\nDemeurant\n\n...\n\n59138 BACHANT\n\n\n\nrepr\u00e9sent\u00e9 par Me VILLESECHE de la SCP ROFFIAEN-LE FUR-VILLESECHE, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nMonsieur Avit X...\n\nDemeurant\n\n...\n\n59138 BACHANT\n\n\n\nrepr\u00e9sent\u00e9 par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nMadame Marie-Christine X... \u00e9pouse A...\n\nDemeurant\n\n...\n\n59750 FEIGNIES\n\n\n\nrepr\u00e9sent\u00e9e par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\n\n\nMadame Marie-Claire X... \u00e9pouse B...\n\nDemeurant\n\n...\n\n59550 PRISCHES\n\n\n\nrepr\u00e9sent\u00e9e par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\n\n\nMadame Marie-Antoinette X... \u00e9pouse C...\n\nDemeurant\n\n...\n\n59440 ST AUBIN\n\n\n\nrepr\u00e9sent\u00e9e par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nCOMPOSITION DE LA COUR LORS DES D\u00c9BATS ET DU D\u00c9LIB\u00c9R\u00c9\n\nMadame MERFELD, Pr\u00e9sident de chambre\n\nMadame CONVAIN, Conseiller\n\nMadame PAOLI, Conseiller\n\n---------------------\n\nGREFFIER LORS DES D\u00c9BATS : Madame GAMEZ\n\n", "example_title": "Exemple 3"}], "model-index": [{"name": "camembert-ner-tcp-ca", "results": []}]}
|
cassandra-themis/test_tcp_ca
| null |
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"generated_from_trainer",
"dataset:cassandra-themis/ner-tcp-ca",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #camembert #token-classification #generated_from_trainer #dataset-cassandra-themis/ner-tcp-ca #autotrain_compatible #endpoints_compatible #region-us
|
# camembert-ner-tcp-ca
This model is a fine-tuned version of cassandra-themis/camembert-base-juri on the cassandra-themis/ner-tcp-ca full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
[
"# camembert-ner-tcp-ca\n\nThis model is a fine-tuned version of cassandra-themis/camembert-base-juri on the cassandra-themis/ner-tcp-ca full dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 48\n- eval_batch_size: 48\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 30.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #camembert #token-classification #generated_from_trainer #dataset-cassandra-themis/ner-tcp-ca #autotrain_compatible #endpoints_compatible #region-us \n",
"# camembert-ner-tcp-ca\n\nThis model is a fine-tuned version of cassandra-themis/camembert-base-juri on the cassandra-themis/ner-tcp-ca full dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 48\n- eval_batch_size: 48\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 30.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
---
# afriberta_base
## Model description
AfriBERTa base is a pretrained multilingual language model with around 111 million parameters.
The model has 8 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.
The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_base")
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_base")
# we have to manually set the model max length because it is an imported sentencepiece model, which huggingface does not properly support right now
>>> tokenizer.model_max_length = 512
```
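Beyond fine-tuning, the pretrained checkpoint can also be queried directly for masked-token prediction. The snippet below is a minimal sketch that is not part of the original card; the Swahili prompt is purely illustrative.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="castorini/afriberta_base")

# Illustrative Swahili prompt: "Nairobi is the capital city of <mask>."
prompt = f"Nairobi ni mji mkuu wa {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(prompt):
    print(f"{prediction['token_str']}\t{prediction['score']:.4f}")
```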
#### Limitations and bias
- This model is possibly limited by its training data, which was mostly obtained from news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper]() or [repository](https://github.com/keleog/afriberta)
### BibTeX entry and citation info
```
@inproceedings{ogueji-etal-2021-small,
title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
author = "Ogueji, Kelechi and
Zhu, Yuxin and
Lin, Jimmy",
booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrl-1.11",
pages = "116--126",
}
```
|
{}
|
castorini/afriberta_base
| null |
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #xlm-roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
---
# afriberta_base
## Model description
AfriBERTa base is a pretrained multilingual language model with around 111 million parameters.
The model has 8 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.
The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
#### Limitations and bias
- This model is possibly limited by its training dataset which are majorly obtained from news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper]() or repository
### BibTeX entry and citation info
|
[
"# afriberta_base",
"## Model description\nAfriBERTa base is a pretrained multilingual language model with around 111 million parameters.\nThe model has 8 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.\nThe model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.\nThe model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.",
"## Intended uses & limitations",
"#### How to use\nYou can use this model with Transformers for any downstream task. \nFor example, assuming we want to finetune this model on a token classification task, we do the following:",
"#### Limitations and bias\n- This model is possibly limited by its training dataset which are majorly obtained from news articles from a specific span of time. Thus, it may not generalize well.\n- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.",
"## Training data\nThe model was trained on an aggregation of datasets from the BBC news website and Common Crawl.",
"## Training procedure\nFor information on training procedures, please refer to the AfriBERTa [paper]() or repository",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# afriberta_base",
"## Model description\nAfriBERTa base is a pretrained multilingual language model with around 111 million parameters.\nThe model has 8 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.\nThe model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.\nThe model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.",
"## Intended uses & limitations",
"#### How to use\nYou can use this model with Transformers for any downstream task. \nFor example, assuming we want to finetune this model on a token classification task, we do the following:",
"#### Limitations and bias\n- This model is possibly limited by its training dataset which are majorly obtained from news articles from a specific span of time. Thus, it may not generalize well.\n- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.",
"## Training data\nThe model was trained on an aggregation of datasets from the BBC news website and Common Crawl.",
"## Training procedure\nFor information on training procedures, please refer to the AfriBERTa [paper]() or repository",
"### BibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# afriberta_large
## Model description
AfriBERTa large is a pretrained multilingual language model with around 126 million parameters.
The model has 10 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.
The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_large")
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_large")
# we have to manually set the model max length because it is an imported sentencepiece model, which huggingface does not properly support right now
>>> tokenizer.model_max_length = 512
```
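As a quick sanity check of the pretrained masked-language-model head itself (a usage sketch we add here, not part of the original instructions), the fill-mask pipeline can be used directly; the Hausa prompt below is only an illustrative assumption:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline("fill-mask", model="castorini/afriberta_large")
>>> # XLM-R style checkpoints use "<mask>" as the mask token
>>> unmasker("Ina son <mask> sosai.")  # hypothetical Hausa prompt: "I like <mask> very much."
```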
#### Limitations and bias
- This model is possibly limited by its training dataset, which consists mainly of news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper]() or [repository](https://github.com/keleog/afriberta)
### BibTeX entry and citation info
```
@inproceedings{ogueji-etal-2021-small,
title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
author = "Ogueji, Kelechi and
Zhu, Yuxin and
Lin, Jimmy",
booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrl-1.11",
pages = "116--126",
}
```
|
{"language": ["om", "am", "rw", "rn", "ha", "ig", "so", "sw", "ti", "yo", "pcm", "multilingual"], "license": "mit", "datasets": ["castorini/afriberta-corpus"]}
|
castorini/afriberta_large
| null |
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"om",
"am",
"rw",
"rn",
"ha",
"ig",
"so",
"sw",
"ti",
"yo",
"pcm",
"multilingual",
"dataset:castorini/afriberta-corpus",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"om",
"am",
"rw",
"rn",
"ha",
"ig",
"so",
"sw",
"ti",
"yo",
"pcm",
"multilingual"
] |
TAGS
#transformers #pytorch #tf #xlm-roberta #fill-mask #om #am #rw #rn #ha #ig #so #sw #ti #yo #pcm #multilingual #dataset-castorini/afriberta-corpus #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# afriberta_large
## Model description
AfriBERTa large is a pretrained multilingual language model with around 126 million parameters.
The model has 10 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.
The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
#### Limitations and bias
- This model is possibly limited by its training dataset, which consists mainly of news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper]() or repository
### BibTeX entry and citation info
|
[
"# afriberta_large",
"## Model description\nAfriBERTa large is a pretrained multilingual language model with around 126 million parameters.\nThe model has 10 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.\nThe model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.\nThe model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.",
"## Intended uses & limitations",
"#### How to use\nYou can use this model with Transformers for any downstream task. \nFor example, assuming we want to finetune this model on a token classification task, we do the following:",
"#### Limitations and bias\n- This model is possibly limited by its training dataset which are majorly obtained from news articles from a specific span of time. Thus, it may not generalize well.\n- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.",
"## Training data\nThe model was trained on an aggregation of datasets from the BBC news website and Common Crawl.",
"## Training procedure\nFor information on training procedures, please refer to the AfriBERTa [paper]() or repository",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #om #am #rw #rn #ha #ig #so #sw #ti #yo #pcm #multilingual #dataset-castorini/afriberta-corpus #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# afriberta_large",
"## Model description\nAfriBERTa large is a pretrained multilingual language model with around 126 million parameters.\nThe model has 10 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.\nThe model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.\nThe model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.",
"## Intended uses & limitations",
"#### How to use\nYou can use this model with Transformers for any downstream task. \nFor example, assuming we want to finetune this model on a token classification task, we do the following:",
"#### Limitations and bias\n- This model is possibly limited by its training dataset which are majorly obtained from news articles from a specific span of time. Thus, it may not generalize well.\n- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.",
"## Training data\nThe model was trained on an aggregation of datasets from the BBC news website and Common Crawl.",
"## Training procedure\nFor information on training procedures, please refer to the AfriBERTa [paper]() or repository",
"### BibTeX entry and citation info"
] |
fill-mask
|
transformers
|
Hugging Face's logo
---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
---
# afriberta_small
## Model description
AfriBERTa small is a pretrained multilingual language model with around 97 million parameters.
The model has 4 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.
The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_small")
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_small")
# we have to manually set the model max length because it is an imported trained sentencepiece model, which huggingface does not properly support right now
>>> tokenizer.model_max_length = 512
```
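If you only need the pretrained masked-language-model weights rather than a finetuning head, they can be loaded with `AutoModelForMaskedLM` in the same way (a sketch we add here, not from the original instructions); the manual max-length workaround still applies:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("castorini/afriberta_small")
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_small")
>>> # same workaround as above for the imported sentencepiece tokenizer
>>> tokenizer.model_max_length = 512
```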
#### Limitations and bias
- This model is possibly limited by its training dataset, which consists mainly of news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper]() or [repository](https://github.com/keleog/afriberta)
### BibTeX entry and citation info
```
@inproceedings{ogueji-etal-2021-small,
title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
author = "Ogueji, Kelechi and
Zhu, Yuxin and
Lin, Jimmy",
booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrl-1.11",
pages = "116--126",
}
```
|
{}
|
castorini/afriberta_small
| null |
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #xlm-roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Hugging Face's logo
---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
---
# afriberta_small
## Model description
AfriBERTa small is a pretrained multilingual language model with around 97 million parameters.
The model has 4 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.
The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
#### Limitations and bias
- This model is possibly limited by its training dataset, which consists mainly of news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper]() or repository
### BibTeX entry and citation info
|
[
"# afriberta_small",
"## Model description\nAfriBERTa small is a pretrained multilingual language model with around 97 million parameters.\nThe model has 4 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.\nThe model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.\nThe model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.",
"## Intended uses & limitations",
"#### How to use\nYou can use this model with Transformers for any downstream task. \nFor example, assuming we want to finetune this model on a token classification task, we do the following:",
"#### Limitations and bias\n- This model is possibly limited by its training dataset which are majorly obtained from news articles from a specific span of time. Thus, it may not generalize well.\n- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.",
"## Training data\nThe model was trained on an aggregation of datasets from the BBC news website and Common Crawl.",
"## Training procedure\nFor information on training procedures, please refer to the AfriBERTa [paper]() or repository",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# afriberta_small",
"## Model description\nAfriBERTa small is a pretrained multilingual language model with around 97 million parameters.\nThe model has 4 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.\nThe model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.\nThe model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.",
"## Intended uses & limitations",
"#### How to use\nYou can use this model with Transformers for any downstream task. \nFor example, assuming we want to finetune this model on a token classification task, we do the following:",
"#### Limitations and bias\n- This model is possibly limited by its training dataset which are majorly obtained from news articles from a specific span of time. Thus, it may not generalize well.\n- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.",
"## Training data\nThe model was trained on an aggregation of datasets from the BBC news website and Common Crawl.",
"## Training procedure\nFor information on training procedures, please refer to the AfriBERTa [paper]() or repository",
"### BibTeX entry and citation info"
] |
null |
transformers
|
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
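If you just want to load the weights outside Pyserini, the checkpoint is stored in DPR format, so the Transformers DPR classes should work. The snippet below is a minimal sketch under that assumption (it also assumes the repo ships the standard DPR tokenizer files); for reproducing the published retrieval results, follow the Pyserini experiments above.

```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

tokenizer = DPRContextEncoderTokenizer.from_pretrained("castorini/ance-dpr-context-multi")
model = DPRContextEncoder.from_pretrained("castorini/ance-dpr-context-multi")

# Encode a passage into a single dense vector (the 768-d pooler output)
inputs = tokenizer("Dense retrieval encodes passages into vectors.", return_tensors="pt")
embedding = model(**inputs).pooler_output
```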
|
{}
|
castorini/ance-dpr-context-multi
| null |
[
"transformers",
"pytorch",
"dpr",
"arxiv:2007.00808",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.00808"
] |
[] |
TAGS
#transformers #pytorch #dpr #arxiv-2007.00808 #endpoints_compatible #region-us
|
This model is converted from the original ANCE repo and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval
For more details on how to use it, check our experiments in Pyserini
|
[] |
[
"TAGS\n#transformers #pytorch #dpr #arxiv-2007.00808 #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
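If you want to load the weights outside Pyserini, the checkpoint is stored in DPR format, so the Transformers DPR question-encoder classes should work. This is a minimal sketch under that assumption (it also assumes the repo ships the standard DPR tokenizer files); for the published retrieval results, follow the Pyserini experiments above.

```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("castorini/ance-dpr-question-multi")
model = DPRQuestionEncoder.from_pretrained("castorini/ance-dpr-question-multi")

# Encode a question into a single dense query vector
inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
query_embedding = model(**inputs).pooler_output
```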
|
{}
|
castorini/ance-dpr-question-multi
| null |
[
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"arxiv:2007.00808",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.00808"
] |
[] |
TAGS
#transformers #pytorch #dpr #feature-extraction #arxiv-2007.00808 #endpoints_compatible #has_space #region-us
|
This model is converted from the original ANCE repo and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval
For more details on how to use it, check our experiments in Pyserini
|
[] |
[
"TAGS\n#transformers #pytorch #dpr #feature-extraction #arxiv-2007.00808 #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
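The checkpoint is stored as a RoBERTa encoder, so it can be inspected with `AutoModel`. The sketch below is ours and only an approximation: the full ANCE pipeline applies its own pooling/projection on top of the encoder, so use the Pyserini experiments above for faithful FirstP results.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("castorini/ance-msmarco-doc-firstp")
model = AutoModel.from_pretrained("castorini/ance-msmarco-doc-firstp")

# FirstP: represent each document by its first passage only (truncate to 512 tokens)
inputs = tokenizer("Full document text ...", truncation=True, max_length=512, return_tensors="pt")
outputs = model(**inputs)
```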
|
{}
|
castorini/ance-msmarco-doc-firstp
| null |
[
"transformers",
"pytorch",
"roberta",
"arxiv:2007.00808",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.00808"
] |
[] |
TAGS
#transformers #pytorch #roberta #arxiv-2007.00808 #endpoints_compatible #region-us
|
This model is converted from the original ANCE repo and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval
For more details on how to use it, check our experiments in Pyserini
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #arxiv-2007.00808 #endpoints_compatible #region-us \n"
] |
null |
transformers
|
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
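Loading works the same way as for the FirstP variant; the difference is at ranking time. A rough MaxP-style sketch (ours, not official usage): encode each passage of a long document separately, and score the document by its best-scoring passage.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("castorini/ance-msmarco-doc-maxp")
model = AutoModel.from_pretrained("castorini/ance-msmarco-doc-maxp")

# MaxP: encode the passages of a document separately; the document score at ranking
# time is the maximum over its passage scores
passages = ["first passage of the document ...", "second passage of the document ..."]
inputs = tokenizer(passages, truncation=True, max_length=512, padding=True, return_tensors="pt")
outputs = model(**inputs)
```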
|
{}
|
castorini/ance-msmarco-doc-maxp
| null |
[
"transformers",
"pytorch",
"roberta",
"arxiv:2007.00808",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.00808"
] |
[] |
TAGS
#transformers #pytorch #roberta #arxiv-2007.00808 #endpoints_compatible #has_space #region-us
|
This model is converted from the original ANCE repo and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval
For more details on how to use it, check our experiments in Pyserini
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #arxiv-2007.00808 #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
# Model Card for ance-msmarco-passage
Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations.
# Model Details
## Model Description
Pyserini is primarily designed to provide effective, reproducible, and easy-to-use first-stage retrieval in a multi-stage ranking architecture
- **Developed by:** Castorini
- **Shared by [Optional]:** Hugging Face
- **Model type:** Information retrieval
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:** More information needed
- **Parent Model:** RoBERTa
- **Resources for more information:**
- [GitHub Repo](https://github.com/castorini/pyserini)
- [Associated Paper](https://dl.acm.org/doi/pdf/10.1145/3404835.3463238)
# Uses
## Direct Use
More information needed
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
More information needed
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The model creators note in the [associated Paper](https://dl.acm.org/doi/pdf/10.1145/3404835.3463238) that:
> bag-of-words ranking with BM25 (the default ranking model) on the MS MARCO passage corpus (comprising 8.8M passages)
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
For bag-of-words sparse retrieval, we have built in Anserini (written in Java) custom parsers and ingestion pipelines for common document formats used in IR research,
# Citation
**BibTeX:**
```bibtex
@INPROCEEDINGS{Lin_etal_SIGIR2021_Pyserini,
author = "Jimmy Lin and Xueguang Ma and Sheng-Chieh Lin and Jheng-Hong Yang and Ronak Pradeep and Rodrigo Nogueira",
title = "{Pyserini}: A {Python} Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations",
booktitle = "Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)",
year = 2021,
pages = "2356--2362",
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Castorini in collaboration with Ezi Ozoani and the Hugging Face team.
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AnceEncoder
tokenizer = AutoTokenizer.from_pretrained("castorini/ance-msmarco-passage")
model = AnceEncoder.from_pretrained("castorini/ance-msmarco-passage")
```
</details>
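For end-to-end dense retrieval, the same checkpoint can serve as the query encoder against a prebuilt ANCE index in Pyserini. The sketch below follows the Pyserini README; module paths and the prebuilt index name may differ between Pyserini releases.

```python
from pyserini.search.faiss import FaissSearcher, AnceQueryEncoder

# Encode queries with this checkpoint and search a prebuilt ANCE FAISS index
encoder = AnceQueryEncoder("castorini/ance-msmarco-passage")
searcher = FaissSearcher.from_prebuilt_index("msmarco-passage-ance-bf", encoder)

hits = searcher.search("what is a lobster roll")
for hit in hits[:3]:
    print(hit.docid, hit.score)
```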
|
{"language": ["en"]}
|
castorini/ance-msmarco-passage
| null |
[
"transformers",
"pytorch",
"roberta",
"en",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.09700"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #en #arxiv-1910.09700 #endpoints_compatible #has_space #region-us
|
# Model Card for ance-msmarco-passage
Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations.
# Model Details
## Model Description
Pyserini is primarily designed to provide effective, reproducible, and easy-to-use first-stage retrieval in a multi-stage ranking architecture
- Developed by: Castorini
- Shared by [Optional]: Hugging Face
- Model type: Information retrieval
- Language(s) (NLP): en
- License: More information needed
- Related Models: More information needed
- Parent Model: RoBERTa
- Resources for more information:
- GitHub Repo
- Associated Paper
# Uses
## Direct Use
More information needed
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
More information needed
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The model creators note in the associated Paper that:
> bag-of-words ranking with BM25 (the default ranking model) on the MS MARCO passage corpus (comprising 8.8M passages)
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
For bag-of-words sparse retrieval, we have built in Anserini (written in Java) custom parsers and ingestion pipelines for common document formats used in IR research,
BibTeX:
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Castorini in collaboration with Ezi Ozoani and the Hugging Face team.
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
</details>
|
[
"# Model Card for ance-msmarco-passage\n \n \nPyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations.",
"# Model Details",
"## Model Description\n \nPyserini is primarily designed to provide effective, reproducible, and easy-to-use first-stage retrieval in a multi-stage ranking architecture\n \n- Developed by: Castorini\n- Shared by [Optional]: Hugging Face\n- Model type: Information retrieval\n- Language(s) (NLP): en\n- License: More information needed\n- Related Models: More information needed\n - Parent Model: RoBERTa\n- Resources for more information: \n - GitHub Repo \n - Associated Paper",
"# Uses",
"## Direct Use\n \nMore information needed",
"## Downstream Use [Optional]\n \nMore information needed",
"## Out-of-Scope Use\n \nMore information needed",
"# Bias, Risks, and Limitations\n \n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nMore information needed",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\n \nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nThe model creators note in the associated Paper that:\n> bag-of-words ranking with BM25 (the default ranking model) on the MS MARCO passage corpus (comprising 8.8M passages)",
"### Factors\n \nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nMore information needed",
"### Software\n \nFor bag-of-words sparse retrieval, we have built in Anserini (written in Java) custom parsers and ingestion pipelines for common document formats used in IR research,\n \n \nBibTeX:",
"# Glossary [optional]\n \nMore information needed",
"# More Information [optional]\n \nMore information needed",
"# Model Card Authors [optional]\n \nCastorini in collaboration with Ezi Ozoani and the Hugging Face team.",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
[
"TAGS\n#transformers #pytorch #roberta #en #arxiv-1910.09700 #endpoints_compatible #has_space #region-us \n",
"# Model Card for ance-msmarco-passage\n \n \nPyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations.",
"# Model Details",
"## Model Description\n \nPyserini is primarily designed to provide effective, reproducible, and easy-to-use first-stage retrieval in a multi-stage ranking architecture\n \n- Developed by: Castorini\n- Shared by [Optional]: Hugging Face\n- Model type: Information retrieval\n- Language(s) (NLP): en\n- License: More information needed\n- Related Models: More information needed\n - Parent Model: RoBERTa\n- Resources for more information: \n - GitHub Repo \n - Associated Paper",
"# Uses",
"## Direct Use\n \nMore information needed",
"## Downstream Use [Optional]\n \nMore information needed",
"## Out-of-Scope Use\n \nMore information needed",
"# Bias, Risks, and Limitations\n \n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nMore information needed",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\n \nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nThe model creators note in the associated Paper that:\n> bag-of-words ranking with BM25 (the default ranking model) on the MS MARCO passage corpus (comprising 8.8M passages)",
"### Factors\n \nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nMore information needed",
"### Software\n \nFor bag-of-words sparse retrieval, we have built in Anserini (written in Java) custom parsers and ingestion pipelines for common document formats used in IR research,\n \n \nBibTeX:",
"# Glossary [optional]\n \nMore information needed",
"# More Information [optional]\n \nMore information needed",
"# Model Card Authors [optional]\n \nCastorini in collaboration with Ezi Ozoani and the Hugging Face team.",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
fill-mask
|
transformers
|
## About
Here we share a pretrained BERT model that is aware of math tokens. The math tokens are treated specially and tokenized using [pya0](https://github.com/approach0/pya0), which adds very limited new tokens for latex markup (total vocabulary is just 31,061).
This model is trained on 4 x 2 Tesla V100 with a total batch size of 64, using Math StackExchange data with 2.7 million sentence pairs trained for 7 epochs.
### Usage
Download and try it out
```sh
pip install pya0==0.3.2
wget https://vault.cs.uwaterloo.ca/s/gqstFZmWHCLGXe3/download -O ckpt.tar.gz
mkdir -p ckpt
tar xzf ckpt.tar.gz -C ckpt --strip-components=1
python test.py --test_file test.txt
```
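The checkpoint can also be queried directly from Python with the fill-mask pipeline; the snippet below is a small sketch we add here (it mirrors the hosted inference widget, with math tokens written in the pya0 style):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="castorini/azbert-base")
# math tokens are pre-tokenized in the pya0 style, e.g. "$x$", "$equal$"
print(unmasker("$x$ [MASK] $x$ $equal$ $2$ $x$"))
```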
### Test file format
Modify the test examples in `test.txt` to play with it.
The test file is tab-separated, the first column is additional positions you want to mask for the right-side sentence (useful for masking tokens in math markups). A zero means no additional mask positions.
### Example output

### Upload to huggingface
This repo is hosted on [Github](https://github.com/approach0/azbert), and only mirrored at [huggingface](https://huggingface.co/castorini/azbert-base).
To upload to huggingface, use the `upload2hgf.sh` script.
Before running this script, be sure to check:
* checkpoints for the model and tokenizer are created under the `./ckpt` folder
* model contains all the files needed: `config.json` and `pytorch_model.bin`
* tokenizer contains all the files needed: `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.txt` and `tokenizer.json`
* no `tokenizer_file` field in `tokenizer_config.json` (sometimes it is located locally at `~/.cache`)
* `git-lfs` is installed
* having git-remote named `hgf` reference to `https://huggingface.co/castorini/azbert-base`
|
{"language": "en", "license": "mit", "tags": ["azbert", "pretraining", "fill-mask"], "widget": [{"text": "$f$ $($ $x$ [MASK] $y$ $)$", "example_title": "mathy"}, {"text": "$x$ [MASK] $x$ $equal$ $2$ $x$", "example_title": "mathy"}, {"text": "Proof by [MASK] that $n$ $fact$ $gt$ $3$ $n$ for $n$ $gt$ $6$", "example_title": "mathy"}, {"text": "Proof by induction that $n$ [MASK] $gt$ $3$ $n$ for $n$ $gt$ $6$", "example_title": "mathy"}, {"text": "The goal of life is [MASK].", "example_title": "philosophical"}]}
|
castorini/azbert-base
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"pretraining",
"azbert",
"fill-mask",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #bert #pretraining #azbert #fill-mask #en #license-mit #endpoints_compatible #region-us
|
## About
Here we share a pretrained BERT model that is aware of math tokens. The math tokens are treated specially and tokenized using pya0, which adds very limited new tokens for latex markup (total vocabulary is just 31,061).
This model is trained on 4 x 2 Tesla V100 with a total batch size of 64, using Math StackExchange data with 2.7 million sentence pairs trained for 7 epochs.
### Usage
Download and try it out
### Test file format
Modify the test examples in 'URL' to play with it.
The test file is tab-separated, the first column is additional positions you want to mask for the right-side sentence (useful for masking tokens in math markups). A zero means no additional mask positions.
### Example output

* 'git-lfs' is installed
* having git-remote named 'hgf' reference to 'URL
|
[
"## About\nHere we share a pretrained BERT model that is aware of math tokens. The math tokens are treated specially and tokenized using pya0, which adds very limited new tokens for latex markup (total vocabulary is just 31,061).\n\nThis model is trained on 4 x 2 Tesla V100 with a total batch size of 64, using Math StackExchange data with 2.7 million sentence pairs trained for 7 epochs.",
"### Usage\nDownload and try it out",
"### Test file format\nModify the test examples in 'URL' to play with it.\n\nThe test file is tab-separated, the first column is additional positions you want to mask for the right-side sentence (useful for masking tokens in math markups). A zero means no additional mask positions.",
"### Example output\n\n* 'git-lfs' is installed\n* having git-remote named 'hgf' reference to 'URL"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #pretraining #azbert #fill-mask #en #license-mit #endpoints_compatible #region-us \n",
"## About\nHere we share a pretrained BERT model that is aware of math tokens. The math tokens are treated specially and tokenized using pya0, which adds very limited new tokens for latex markup (total vocabulary is just 31,061).\n\nThis model is trained on 4 x 2 Tesla V100 with a total batch size of 64, using Math StackExchange data with 2.7 million sentence pairs trained for 7 epochs.",
"### Usage\nDownload and try it out",
"### Test file format\nModify the test examples in 'URL' to play with it.\n\nThe test file is tab-separated, the first column is additional positions you want to mask for the right-side sentence (useful for masking tokens in math markups). A zero means no additional mask positions.",
"### Example output\n\n* 'git-lfs' is installed\n* having git-remote named 'hgf' reference to 'URL"
] |
null |
transformers
|
This model is converted from the original BPR [repo](https://github.com/studio-ousia/bpr) and fitted into Pyserini:
> Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882.
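The checkpoint is stored in DPR format, so one way to inspect the encoder (our assumption, not documented usage) is via the Transformers DPR classes. Note that BPR's learning-to-hash step, which binarizes these embeddings for efficient search, is not reproduced by this plain forward pass; for end-to-end use, see the Pyserini integration.

```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

tokenizer = DPRContextEncoderTokenizer.from_pretrained("castorini/bpr-nq-ctx-encoder")
model = DPRContextEncoder.from_pretrained("castorini/bpr-nq-ctx-encoder")

inputs = tokenizer("A Wikipedia passage about open-domain QA.", return_tensors="pt")
embedding = model(**inputs).pooler_output  # continuous embedding; BPR hashes this downstream
```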
|
{}
|
castorini/bpr-nq-ctx-encoder
| null |
[
"transformers",
"pytorch",
"dpr",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #dpr #endpoints_compatible #region-us
|
This model is converted from the original BPR repo and fitted into Pyserini:
> Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882.
|
[] |
[
"TAGS\n#transformers #pytorch #dpr #endpoints_compatible #region-us \n"
] |