| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1 to 900k) | metadata (stringlengths 2 to 438k) | id (stringlengths 5 to 122) | last_modified (null) | tags (listlengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25 to 25) | arxiv (listlengths 0 to 201) | languages (listlengths 0 to 1.83k) | tags_str (stringlengths 17 to 9.34k) | text_str (stringlengths 0 to 389k) | text_lists (listlengths 0 to 722) | processed_texts (listlengths 1 to 723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-te
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR66 - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2680
- Wer: 0.3467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows this list):
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
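
As a rough guide, the settings above map onto the standard Transformers `TrainingArguments` as in the hedged sketch below. This is a reconstruction, not the actual training script; the output directory is a placeholder, and the listed Adam betas/epsilon are the library defaults.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above (sketch only).
training_args = TrainingArguments(
    output_dir="./xls-r-300m-te",      # hypothetical output directory
    learning_rate=7.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,     # effective train batch size: 16 * 4 = 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=10.0,
    fp16=True,                         # "Native AMP" mixed-precision training
)
```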
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0304 | 4.81 | 500 | 1.5676 | 1.0554 |
| 1.5263 | 9.61 | 1000 | 0.4693 | 0.8023 |
| 1.5299 | 14.42 | 1500 | 0.4368 | 0.7311 |
| 1.5063 | 19.23 | 2000 | 0.4360 | 0.7302 |
| 1.455 | 24.04 | 2500 | 0.4213 | 0.6692 |
| 1.4755 | 28.84 | 3000 | 0.4329 | 0.5943 |
| 1.352 | 33.65 | 3500 | 0.4074 | 0.5765 |
| 1.3122 | 38.46 | 4000 | 0.3866 | 0.5630 |
| 1.2799 | 43.27 | 4500 | 0.3860 | 0.5480 |
| 1.212 | 48.08 | 5000 | 0.3590 | 0.5317 |
| 1.1645 | 52.88 | 5500 | 0.3283 | 0.4757 |
| 1.0854 | 57.69 | 6000 | 0.3162 | 0.4687 |
| 1.0292 | 62.5 | 6500 | 0.3126 | 0.4416 |
| 0.9607 | 67.31 | 7000 | 0.2990 | 0.4066 |
| 0.9156 | 72.12 | 7500 | 0.2870 | 0.4009 |
| 0.8329 | 76.92 | 8000 | 0.2791 | 0.3909 |
| 0.7979 | 81.73 | 8500 | 0.2770 | 0.3670 |
| 0.7144 | 86.54 | 9000 | 0.2841 | 0.3661 |
| 0.6997 | 91.35 | 9500 | 0.2721 | 0.3485 |
| 0.6568 | 96.15 | 10000 | 0.2681 | 0.3437 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
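
For reference, a minimal inference sketch for this checkpoint, assuming the standard Wav2Vec2 CTC interface and 16 kHz mono input (the audio path below is a placeholder):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("chmanoj/xls-r-300m-te")
model = Wav2Vec2ForCTC.from_pretrained("chmanoj/xls-r-300m-te")

# "sample.wav" is a placeholder path; the model expects 16 kHz mono audio.
speech, sampling_rate = torchaudio.load("sample.wav")
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```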
|
{"language": ["te"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "openslr_SLR66", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["openslr", "SLR66"], "metrics": ["wer"], "model-index": [{"name": "xls-r-300m-te", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Open SLR", "type": "openslr", "args": "SLR66"}, "metrics": [{"type": "wer", "value": 24.695121951219512, "name": "Test WER"}, {"type": "cer", "value": 4.861934182322532, "name": "Test CER"}]}]}]}
|
chmanoj/xls-r-300m-te
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"openslr_SLR66",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"te",
"dataset:openslr",
"dataset:SLR66",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"te"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #openslr_SLR66 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #te #dataset-openslr #dataset-SLR66 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the OPENSLR\_SLR66 - NA dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2680
* Wer: 0.3467
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 10.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #openslr_SLR66 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #te #dataset-openslr #dataset-SLR66 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8786
- Wer: 1.3460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
{"language": ["ab"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
chmanoj/xls-r-demo-test
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ab"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us
|
#
This model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8786
- Wer: 1.3460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
[
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 156.8786\n- Wer: 1.3460",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu113\n- Datasets 1.18.1.dev0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us \n",
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 156.8786\n- Wer: 1.3460",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu113\n- Datasets 1.18.1.dev0\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53 in Thai Language (Train with deepcut tokenizer)
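
The card gives no usage instructions, so the following is only an assumption based on the model's tags (wav2vec2, automatic-speech-recognition) and the usual XLSR recipe with 16 kHz input; the decoded text may reflect the deepcut word segmentation mentioned in the title.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Assumed standard Wav2Vec2 CTC interface; not documented in this card.
processor = Wav2Vec2Processor.from_pretrained("chompk/wav2vec2-large-xlsr-thai-tokenized")
model = Wav2Vec2ForCTC.from_pretrained("chompk/wav2vec2-large-xlsr-thai-tokenized")

speech, sr = torchaudio.load("thai_sample.wav")  # placeholder path, assumed 16 kHz mono
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```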
|
{"language": "th", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning"], "datasets": ["common_voice"]}
|
chompk/wav2vec2-large-xlsr-thai-tokenized
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning",
"th",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"th"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning #th #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53 in Thai Language (Train with deepcut tokenizer)
|
[
"# Wav2Vec2-Large-XLSR-53 in Thai Language (Train with deepcut tokenizer)"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning #th #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53 in Thai Language (Train with deepcut tokenizer)"
] |
text2text-generation
|
transformers
|
Test English-Dhivehi/Dhivehi-English NMT
Would need a lot more data to get accurate translations.
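
A hedged generation sketch, assuming the checkpoint loads through the standard mT5 seq2seq interface; whether a task prefix or a particular input format is expected is not documented here, so the prompt is only illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Standard mT5 seq2seq loading; the required input format is not documented in this card.
tokenizer = AutoTokenizer.from_pretrained("chopey/testmntdv")
model = AutoModelForSeq2SeqLM.from_pretrained("chopey/testmntdv")

inputs = tokenizer("How are you?", return_tensors="pt")  # illustrative English input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```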
|
{}
|
chopey/testmntdv
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Test English-Dhivehi/Dhivehi-English NMT
Would need a lot more data to get accurate translations.
|
[] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null |
These models were made for my course project in the NLP and AI special course at the University of Latvia during my first semester of study.
|
{}
|
chrisAS12/specseminars
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
These models were made for my course project in NLP and AI special course at the University of Latvia during my first semester of study.
|
[] |
[
"TAGS\n#region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Fon
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on [Fon (or Fongbe)](https://en.wikipedia.org/wiki/Fon_language) using the [Fon Dataset](https://github.com/laleye/pyFongbe/tree/master/data).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import os
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load test_dataset from the JSON files saved in the "test/" folder
for root, dirs, files in os.walk("test/"):
    test_dataset = load_dataset("json", data_files=[os.path.join(root, i) for i in files], split="train")

# Remove unnecessary chars
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”]'

def remove_special_characters(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch

test_dataset = test_dataset.map(remove_special_characters)

processor = Wav2Vec2Processor.from_pretrained("chrisjay/wav2vec2-large-xlsr-53-fon")
model = Wav2Vec2ForCTC.from_pretrained("chrisjay/wav2vec2-large-xlsr-53-fon")

# No need for resampling because the audio dataset is already at 16kHz
# resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array.squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on our unique Fon test data.
```python
import os
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load test_dataset from the JSON files saved in the "test/" folder
for root, dirs, files in os.walk("test/"):
    test_dataset = load_dataset("json", data_files=[os.path.join(root, i) for i in files], split="train")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”]'

def remove_special_characters(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch

test_dataset = test_dataset.map(remove_special_characters)
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("chrisjay/wav2vec2-large-xlsr-53-fon")
model = Wav2Vec2ForCTC.from_pretrained("chrisjay/wav2vec2-large-xlsr-53-fon")
model.to("cuda")

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array[0].numpy()
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["sentence"]
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Evaluation on the test dataset
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 14.97 %
## Training
The [Fon dataset](https://github.com/laleye/pyFongbe/tree/master/data) was split into `train`(8235 samples), `validation`(1107 samples), and `test`(1061 samples).
The script used for training can be found [here](https://colab.research.google.com/drive/11l6qhJCYnPTG1TQZ8f3EvKB9z12TQi4g?usp=sharing)
# Collaborators on this project
- Chris C. Emezue ([Twitter](https://twitter.com/ChrisEmezue))|(chris.emezue@gmail.com)
- Bonaventure F.P. Dossou (HuggingFace Username: [bonadossou](https://huggingface.co/bonadossou))|([Twitter](https://twitter.com/bonadossou))|(femipancrace.dossou@gmail.com)
## This is a joint project continuing our research on [OkwuGbé: End-to-End Speech Recognition for Fon and Igbo](https://arxiv.org/abs/2103.07762)
|
{"language": "fon", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["fon_dataset"], "metrics": ["wer"], "model-index": [{"name": "Fon XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "fon", "type": "fon_dataset", "args": "fon"}, "metrics": [{"type": "wer", "value": 14.97, "name": "Test WER"}]}]}]}
|
chrisjay/fonxlsr
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"fon",
"dataset:fon_dataset",
"arxiv:2103.07762",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.07762"
] |
[
"fon"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hf-asr-leaderboard #fon #dataset-fon_dataset #arxiv-2103.07762 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Fon
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Fon (or Fongbe) using the Fon Dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on our unique Fon test data.
Test Result: 14.97 %
## Training
The Fon dataset was split into 'train'(8235 samples), 'validation'(1107 samples), and 'test'(1061 samples).
The script used for training can be found here
# Collaborators on this project
- Chris C. Emezue (Twitter)|(URL@URL)
- Bonaventure F.P. Dossou (HuggingFace Username: bonadossou)|(Twitter)|(URL@URL)
## This is a joint project continuing our research on OkwuGbé: End-to-End Speech Recognition for Fon and Igbo
|
[
"# Wav2Vec2-Large-XLSR-53-Fon\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Fon (or Fongbe) using the Fon Dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on our unique Fon test data. \n\n\n\nTest Result: 14.97 %",
"## Training\n\nThe Fon dataset was split into 'train'(8235 samples), 'validation'(1107 samples), and 'test'(1061 samples).\n\nThe script used for training can be found here",
"# Collaborators on this project\n\n - Chris C. Emezue (Twitter)|(URL@URL)\n - Bonaventure F.P. Dossou (HuggingFace Username: bonadossou)|(Twitter)|(URL@URL)",
"## This is a joint project continuing our research on OkwuGbé: End-to-End Speech Recognition for Fon and Igbo"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hf-asr-leaderboard #fon #dataset-fon_dataset #arxiv-2103.07762 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Fon\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Fon (or Fongbe) using the Fon Dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on our unique Fon test data. \n\n\n\nTest Result: 14.97 %",
"## Training\n\nThe Fon dataset was split into 'train'(8235 samples), 'validation'(1107 samples), and 'test'(1061 samples).\n\nThe script used for training can be found here",
"# Collaborators on this project\n\n - Chris C. Emezue (Twitter)|(URL@URL)\n - Bonaventure F.P. Dossou (HuggingFace Username: bonadossou)|(Twitter)|(URL@URL)",
"## This is a joint project continuing our research on OkwuGbé: End-to-End Speech Recognition for Fon and Igbo"
] |
null | null |
# Interacting with the Masakhane Benchmark Models
I created this demo for easy interaction with the [benchmark models on Masakhane](https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks), which were trained with [JoeyNMT](https://github.com/chrisemezue/joeynmt) (my forked version).
To access the space click [here](https://huggingface.co/spaces/chrisjay/masakhane-benchmarks).
To include your language, all you need to do is:
1. Create a folder in the format *src-tgt/main* for your language pair, if it does not exist.
2. Inside the *main* folder put the following files (an example layout is sketched below):
   1. The model checkpoint. Rename it to `best.ckpt`.
   2. `config.yaml` file. This is the JoeyNMT config file which loads the model and its pre-processing parameters.
   3. `src_vocab.txt` file.
   4. `trg_vocab.txt` file.
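
For example, a hypothetical English-Fon pair (one of the pairs in the table below) might be laid out like this; the `en-fon` folder name is only an illustration of the *src-tgt* convention:

```
en-fon/
└── main/
    ├── best.ckpt      # the renamed JoeyNMT model checkpoint
    ├── config.yaml    # JoeyNMT config with model and pre-processing parameters
    ├── src_vocab.txt
    └── trg_vocab.txt
```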
The space currently supports these languages:
| source language | target language |
|:---------------:|:---------------:|
| English | Swahili |
| English | Afrikaans |
| English | Arabic |
| English | Urhobo |
| English | Ẹ̀dó |
| Efik | English |
| English | Hausa |
| English | Igbo |
| English | Fon |
| English | Twi |
| English | Dendi |
| English | Ẹ̀sán |
| English | Isoko |
| English | Kamba |
| English | Luo |
| English | Southern Ndebele |
| English | Tshivenda |
| Shona | English |
| Swahili | English |
| Yoruba | English |
TO DO:
1. Include more languages from the benchmark.
|
{"language": "african-languages", "license": "apache-2.0", "tags": ["african-languages", "machine-translation", "text"]}
|
chrisjay/masakhane_benchmarks
| null |
[
"african-languages",
"machine-translation",
"text",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"african-languages"
] |
TAGS
#african-languages #machine-translation #text #license-apache-2.0 #has_space #region-us
|
Interacting with the Masakhane Benchmark Models
===============================================
I created this demo for very easy interaction with the benchmark models on Masakhane which were trained with JoeyNMT(my forked version).
To access the space click here.
To include your language, all you need to do is:
1. Create a folder in the format *src-tgt/main* for your language pair, if it does not exist.
2. Inside the *main* folder put the following files:
1. model checkpoint. Rename it to 'URL'.
2. 'URL' file. This is the JoeyNMT config file which loads the model and its pre-processing parameters.
3. 'src\_vocab.txt' file.
4. 'trg\_vocab.txt' file.
The space currently supports these languages:
TO DO:
1. Include more languages from the benchmark.
|
[] |
[
"TAGS\n#african-languages #machine-translation #text #license-apache-2.0 #has_space #region-us \n"
] |
text-classification
|
spacy
|
Text statistics including readability and formality.
| Feature | Description |
| --- | --- |
| **Name** | `en_statistics` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `syllables`, `formality`, `readability` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `syllables`, `formality`, `readability` |
| **Vectors** | 684830 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[GloVe Common Crawl](https://nlp.stanford.edu/projects/glove/) (Jeffrey Pennington, Richard Socher, and Christopher D. Manning) |
| **License** | `MIT` |
| **Author** | [Chris Knowles](https://explosion.ai) |
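
A minimal loading sketch, assuming the `en_statistics` package has been installed so that `spacy.load` can resolve it by name; the exact extension attributes registered by the custom `formality` and `readability` components are not documented here, so the sketch only prints the standard annotations.

```python
import spacy

# Assumes the en_statistics package (v0.0.1, spaCy >=3.1.1,<3.2.0) is installed
# so that spacy.load can resolve it by name.
nlp = spacy.load("en_statistics")
doc = nlp("The quick brown fox jumps over the lazy dog.")

# The pipeline should expose the components listed in the card.
print(nlp.pipe_names)  # e.g. [..., 'syllables', 'formality', 'readability']

# Standard annotations from the tagger/parser/lemmatizer components:
for token in doc[:5]:
    print(token.text, token.tag_, token.dep_, token.lemma_)
```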
### Label Scheme
<details>
<summary>View label scheme (96 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, <code>``</code> |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`senter`** | `I`, `S` |
</details>
|
{"language": ["en"], "license": "mit", "tags": ["spacy", "text-classification"], "model-index": [{"name": "en_statistics", "results": []}]}
|
chrisknowles/en_statistics
| null |
[
"spacy",
"text-classification",
"en",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#spacy #text-classification #en #license-mit #region-us
|
Text statistics including readability and formality.
### Label Scheme
View label scheme (96 labels for 3 components)
|
[
"### Label Scheme\n\n\n\nView label scheme (96 labels for 3 components)"
] |
[
"TAGS\n#spacy #text-classification #en #license-mit #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (96 labels for 3 components)"
] |
token-classification
|
spacy
|
Check style on English text (currently detects passive constructions).
| Feature | Description |
| --- | --- |
| **Name** | `en_stylecheck` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner`, `stylecheck` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner`, `stylecheck` |
| **Vectors** | 684830 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (115 labels for 5 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, <code>``</code> |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`senter`** | `I`, `S` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
| **`entity_ruler`** | `PASSIVE` |
</details>
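
A hedged loading sketch for this package; that the PASSIVE spans produced by the entity ruler end up in `doc.ents` (the entity-ruler default) is an assumption rather than something this card states.

```python
import spacy

# Assumes the en_stylecheck package is installed so spacy.load can resolve it by name.
nlp = spacy.load("en_stylecheck")
doc = nlp("The report was written by the committee, and mistakes were made.")

# Entity rulers write to doc.ents by default, so PASSIVE spans are assumed to
# appear there alongside the regular NER labels from the label scheme above.
for ent in doc.ents:
    print(ent.text, ent.label_)

passive = [ent.text for ent in doc.ents if ent.label_ == "PASSIVE"]
print("Flagged passive constructions:", passive)
```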
|
{"language": ["en"], "license": "mit", "tags": ["spacy", "token-classification"], "model-index": [{"name": "en_stylecheck", "results": []}]}
|
chrisknowles/en_stylecheck
| null |
[
"spacy",
"token-classification",
"en",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#spacy #token-classification #en #license-mit #region-us
|
Check style on English text (currently passive text).
### Label Scheme
View label scheme (115 labels for 5 components)
|
[
"### Label Scheme\n\n\n\nView label scheme (115 labels for 5 components)"
] |
[
"TAGS\n#spacy #token-classification #en #license-mit #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (115 labels for 5 components)"
] |
text-generation
|
transformers
|
[DistilGPT2](https://huggingface.co/distilgpt2) English language model fine-tuned on mathematical proofs extracted from [arXiv.org](https://arxiv.org) LaTeX sources from 1992 to 2020.
Proofs have been cleaned up a bit. In particular, they use the following placeholders (a generation sketch follows the list):
* `CITE` for any citation
* `REF` for any reference
* `MATH` for any LaTeX mathematical formula
* `CASE:` for any `\item` or labeled subcase.
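
A minimal generation sketch using the standard Transformers text-generation pipeline; the prompt is one of the widget examples from this card's metadata, and the sampling settings are arbitrary.

```python
from transformers import pipeline

# Prompts should use the same placeholders as the training data (MATH, CITE, REF, CASE:).
generator = pipeline("text-generation", model="christopherastone/distilgpt2-proofs")
out = generator("Let MATH be given.", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```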
|
{"widget": [{"text": "Let MATH be given."}, {"text": "If MATH is a nonempty"}, {"text": "By the inductive hypothesis,"}]}
|
christopherastone/distilgpt2-proofs
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
DistilGPT2 English language model fine-tuned on mathematical proofs extracted from URL LaTeX sources from 1992 to 2020.
Proofs have been cleaned up a bit. In particular, they use
* 'CITE' for any citation
* 'REF' for any reference
* 'MATH' for any LaTeX mathematical formula
* 'CASE:' for any '\item' or labeled subcase.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-cola
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1729
- Accuracy: 0.9755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5119 | 1.0 | 625 | 0.2386 | 0.922 |
| 0.2536 | 2.0 | 1250 | 0.2055 | 0.949 |
| 0.1718 | 3.0 | 1875 | 0.1733 | 0.969 |
| 0.0562 | 4.0 | 2500 | 0.1661 | 0.974 |
| 0.0265 | 5.0 | 3125 | 0.1729 | 0.9755 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
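
For reference, a minimal inference sketch using the repository id this card is attached to; the returned label names come from the model config, which this card does not document, so inspect them yourself.

```python
from transformers import pipeline

# The id below is the repository this card belongs to; label meanings are not documented here.
classifier = pipeline("text-classification",
                      model="chrommium/bert-base-multilingual-cased-finetuned-news-headlines")
print(classifier("Central bank raises key interest rate to 8.5%"))
```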
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model_index": [{"name": "bert-base-multilingual-cased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9755}}]}]}
|
chrommium/bert-base-multilingual-cased-finetuned-news-headlines
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-multilingual-cased-finetuned-cola
===========================================
This model is a fine-tuned version of bert-base-multilingual-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1729
* Accuracy: 0.9755
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-headlines_X
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2535
- Accuracy: 0.952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 157 | 0.2759 | 0.912 |
| No log | 2.0 | 314 | 0.2538 | 0.936 |
| No log | 3.0 | 471 | 0.2556 | 0.945 |
| 0.1908 | 4.0 | 628 | 0.2601 | 0.95 |
| 0.1908 | 5.0 | 785 | 0.2535 | 0.952 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"]}
|
chrommium/rubert-base-cased-sentence-finetuned-headlines_X
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us
|
rubert-base-cased-sentence-finetuned-headlines\_X
=================================================
This model is a fine-tuned version of DeepPavlov/rubert-base-cased-sentence on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2535
* Accuracy: 0.952
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_news_sents
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9506
- Accuracy: 0.7224
- F1: 0.5137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 1.0045 | 0.6690 | 0.1388 |
| No log | 2.0 | 162 | 0.9574 | 0.6228 | 0.2980 |
| No log | 3.0 | 243 | 1.0259 | 0.6477 | 0.3208 |
| No log | 4.0 | 324 | 1.1262 | 0.6619 | 0.4033 |
| No log | 5.0 | 405 | 1.3377 | 0.6299 | 0.3909 |
| No log | 6.0 | 486 | 1.5716 | 0.6868 | 0.3624 |
| 0.6085 | 7.0 | 567 | 1.6286 | 0.6762 | 0.4130 |
| 0.6085 | 8.0 | 648 | 1.6450 | 0.6940 | 0.4775 |
| 0.6085 | 9.0 | 729 | 1.7108 | 0.7224 | 0.4920 |
| 0.6085 | 10.0 | 810 | 1.8792 | 0.7046 | 0.5028 |
| 0.6085 | 11.0 | 891 | 1.8670 | 0.7153 | 0.4992 |
| 0.6085 | 12.0 | 972 | 1.8856 | 0.7153 | 0.4934 |
| 0.0922 | 13.0 | 1053 | 1.9506 | 0.7224 | 0.5137 |
| 0.0922 | 14.0 | 1134 | 2.0363 | 0.7189 | 0.4761 |
| 0.0922 | 15.0 | 1215 | 2.0601 | 0.7224 | 0.5053 |
| 0.0922 | 16.0 | 1296 | 2.0813 | 0.7153 | 0.5038 |
| 0.0922 | 17.0 | 1377 | 2.0960 | 0.7189 | 0.5065 |
| 0.0922 | 18.0 | 1458 | 2.1060 | 0.7224 | 0.5098 |
| 0.0101 | 19.0 | 1539 | 2.1153 | 0.7260 | 0.5086 |
| 0.0101 | 20.0 | 1620 | 2.1187 | 0.7260 | 0.5086 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"]}
|
chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us
|
rubert-base-cased-sentence-finetuned-sent\_in\_news\_sents
==========================================================
This model is a fine-tuned version of DeepPavlov/rubert-base-cased-sentence on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9506
* Accuracy: 0.7224
* F1: 0.5137
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 14
* eval\_batch\_size: 14
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.10.3
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 14\n* eval\\_batch\\_size: 14\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.3\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 14\n* eval\\_batch\\_size: 14\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.3\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_ru
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3503
- Accuracy: 0.6884
- F1: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 441 | 0.7397 | 0.6630 | 0.6530 |
| 0.771 | 2.0 | 882 | 0.7143 | 0.6909 | 0.6905 |
| 0.5449 | 3.0 | 1323 | 0.8385 | 0.6897 | 0.6870 |
| 0.3795 | 4.0 | 1764 | 0.8851 | 0.6939 | 0.6914 |
| 0.3059 | 5.0 | 2205 | 1.0728 | 0.6933 | 0.6953 |
| 0.2673 | 6.0 | 2646 | 1.0673 | 0.7060 | 0.7020 |
| 0.2358 | 7.0 | 3087 | 1.5200 | 0.6830 | 0.6829 |
| 0.2069 | 8.0 | 3528 | 1.3439 | 0.7024 | 0.7016 |
| 0.2069 | 9.0 | 3969 | 1.3545 | 0.6830 | 0.6833 |
| 0.1724 | 10.0 | 4410 | 1.5591 | 0.6927 | 0.6902 |
| 0.1525 | 11.0 | 4851 | 1.6425 | 0.6818 | 0.6823 |
| 0.131 | 12.0 | 5292 | 1.8999 | 0.6836 | 0.6775 |
| 0.1253 | 13.0 | 5733 | 1.6959 | 0.6884 | 0.6877 |
| 0.1132 | 14.0 | 6174 | 1.9561 | 0.6776 | 0.6803 |
| 0.0951 | 15.0 | 6615 | 2.0356 | 0.6763 | 0.6754 |
| 0.1009 | 16.0 | 7056 | 1.7995 | 0.6842 | 0.6741 |
| 0.1009 | 17.0 | 7497 | 2.0638 | 0.6884 | 0.6811 |
| 0.0817 | 18.0 | 7938 | 2.1686 | 0.6884 | 0.6859 |
| 0.0691 | 19.0 | 8379 | 2.0874 | 0.6878 | 0.6889 |
| 0.0656 | 20.0 | 8820 | 2.1772 | 0.6854 | 0.6817 |
| 0.0652 | 21.0 | 9261 | 2.4018 | 0.6872 | 0.6896 |
| 0.0608 | 22.0 | 9702 | 2.2074 | 0.6770 | 0.6656 |
| 0.0677 | 23.0 | 10143 | 2.2101 | 0.6848 | 0.6793 |
| 0.0559 | 24.0 | 10584 | 2.2920 | 0.6848 | 0.6835 |
| 0.0524 | 25.0 | 11025 | 2.3503 | 0.6884 | 0.6875 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "rubert-base-cased-sentence-finetuned-sent_in_ru", "results": []}]}
|
chrommium/rubert-base-cased-sentence-finetuned-sent_in_ru
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
rubert-base-cased-sentence-finetuned-sent\_in\_ru
=================================================
This model is a fine-tuned version of DeepPavlov/rubert-base-cased-sentence on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3503
* Accuracy: 0.6884
* F1: 0.6875
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 15
* eval\_batch\_size: 15
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 25
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 15\n* eval\\_batch\\_size: 15\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 15\n* eval\\_batch\\_size: 15\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7056
- Accuracy: 0.7301
- F1: 0.5210
## Model examples
The model predicts a sentiment label for the entity masked as X in a news text. For example:
For 'Газпром отозвал лицензию у X, сообщает Финам' ('Gazprom revoked the license of X, Finam reports') the model will return the negative label -3.
For 'X отозвал лицензию у Сбербанка, сообщает Финам' ('X revoked the license of Sberbank, Finam reports') the model will return the neutral label 0.
For 'Газпром отозвал лицензию у Сбербанка, сообщает X' ('Gazprom revoked the license of Sberbank, X reports') the model will return the neutral label 0.
For 'X демонстрирует высокую прибыль, сообщает Финам' ('X shows high profit, Finam reports') the model will return the positive label 1.
## Simple example of News preprocessing for Russian before BERT
```
from natasha import (
Segmenter,
MorphVocab,
NewsEmbedding,
NewsMorphTagger,
NewsSyntaxParser,
NewsNERTagger,
PER,
NamesExtractor,
Doc
)
segmenter = Segmenter()
emb = NewsEmbedding()
morph_tagger = NewsMorphTagger(emb)
syntax_parser = NewsSyntaxParser(emb)
morph_vocab = MorphVocab()
### ----------------------------- key sentences block -----------------------------
def find_synax_tokens_with_order(doc, start, tokens, text_arr, full_str):
    ''' Finds all syntax tokens that correspond to a given set of plain tokens (found
    for a particular NER span by other functions).
    Returns a dictionary of the found syntax tokens (the key is a token identifier made of
    the sentence number and the token number within the sentence).
    Starts the search from the given position in the list of syntax tokens and additionally
    returns the stop position from which the search for the next NER span should continue.
    '''
found = []
in_str = False
str_candidate = ''
str_counter = 0
if len(text_arr) == 0:
return [], start
for i in range(start, len(doc.syntax.tokens)):
t = doc.syntax.tokens[i]
if in_str:
str_counter += 1
if str_counter < len(text_arr) and t.text == text_arr[str_counter]:
str_candidate += t.text
found.append(t)
if str_candidate == full_str:
return found, i+1
else:
in_str = False
str_candidate = ''
str_counter = 0
found = []
if t.text == text_arr[0]:
found.append(t)
str_candidate = t.text
if str_candidate == full_str:
return found, i+1
in_str = True
return [], len(doc.syntax.tokens)
def find_tokens_in_diap_with_order(doc, start_token, diap):
    ''' Finds all plain tokens (without syntax information) that fall into the given range.
    These ranges come from the NER markup.
    Returns the found tokens both as an array of tokens and as an array of strings.
    Starts the search from the given position and additionally returns the stop position.
    '''
found_tokens = []
found_text = []
full_str = ''
next_i = 0
for i in range(start_token, len(doc.tokens)):
t = doc.tokens[i]
if t.start > diap[-1]:
next_i = i
break
if t.start in diap:
found_tokens.append(t)
found_text.append(t.text)
full_str += t.text
return found_tokens, found_text, full_str, next_i
def add_found_arr_to_dict(found, dict_dest):
for synt in found:
dict_dest.update({synt.id: synt})
return dict_dest
def make_all_syntax_dict(doc):
all_syntax = {}
for synt in doc.syntax.tokens:
all_syntax.update({synt.id: synt})
return all_syntax
def is_consiquent(id_1, id_2):
    ''' Checks whether two tokens follow each other with no gap, based on their keys. '''
id_1_list = id_1.split('_')
id_2_list = id_2.split('_')
if id_1_list[0] != id_2_list[0]:
return False
return int(id_1_list[1]) + 1 == int(id_2_list[1])
def replace_found_to(found, x_str):
    ''' Replaces a sequence of NER tokens with a placeholder. '''
prev_id = '0_0'
for synt in found:
if is_consiquent(prev_id, synt.id):
synt.text = ''
else:
synt.text = x_str
prev_id = synt.id
def analyze_doc(text):
    ''' Runs Natasha to analyze the document. '''
doc = Doc(text)
doc.segment(segmenter)
doc.tag_morph(morph_tagger)
doc.parse_syntax(syntax_parser)
ner_tagger = NewsNERTagger(emb)
doc.tag_ner(ner_tagger)
return doc
def find_non_sym_syntax_short(entity_name, doc, add_X=False, x_str='X'):
    ''' Looks for the given entity in the text among all NER spans (possibly in a different grammatical form).
    entity_name - the entity we are looking for;
    doc - a document already preprocessed with Natasha;
    add_X - whether to replace the entity with a placeholder;
    x_str - the replacement text.
    Returns:
    all_found_syntax - a dictionary of all matching tokens that form the target entity, in which
    the NER span has been replaced with the placeholder if requested;
    all_syntax - a dictionary of all tokens.
    '''
all_found_syntax = {}
current_synt_number = 0
current_tok_number = 0
    # iterate over all detected NER spans
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
diap = range(span.start, span.stop)
        # build a dictionary of all syntax elements (key: id made of the sentence number and the position within the sentence)
all_syntax = make_all_syntax_dict(doc)
        # find all plain tokens inside the NER span
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc, current_tok_number,
diap)
        # from the found plain tokens, find all syntax tokens inside this NER span
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens, found_text,
full_str)
        # if the NER text matches the given entity, perform the replacement
if entity_name.find(span.normal) >= 0 or span.normal.find(entity_name) >= 0:
if add_X:
replace_found_to(found, x_str)
all_found_syntax = add_found_arr_to_dict(found, all_found_syntax)
return all_found_syntax, all_syntax
def key_sentences(all_found_syntax):
    ''' Finds the numbers of the sentences that contain the target NER. '''
key_sent_numb = {}
for synt in all_found_syntax.keys():
key_sent_numb.update({synt.split('_')[0]: 1})
return key_sent_numb
def openinig_punct(x):
opennings = ['«', '(']
return x in opennings
def key_sentences_str(entitiy_name, doc, add_X=False, x_str='X', return_all=True):
    ''' Builds the final text containing only the sentences in which the key entity occurs;
    if requested, that entity is replaced with the placeholder.
    '''
all_found_syntax, all_syntax = find_non_sym_syntax_short(entitiy_name, doc, add_X, x_str)
key_sent_numb = key_sentences(all_found_syntax)
str_ret = ''
for s in all_syntax.keys():
if (s.split('_')[0] in key_sent_numb.keys()) or (return_all):
to_add = all_syntax[s]
if s in all_found_syntax.keys():
to_add = all_found_syntax[s]
else:
if to_add.rel == 'punct' and not openinig_punct(to_add.text):
str_ret = str_ret.rstrip()
str_ret += to_add.text
if (not openinig_punct(to_add.text)) and (to_add.text != ''):
str_ret += ' '
return str_ret
### ----------------------------- key entities block -----------------------------
def find_synt(doc, synt_id):
for synt in doc.syntax.tokens:
if synt.id == synt_id:
return synt
return None
def is_subj(doc, synt, recursion_list=[]):
    ''' Reports whether the word is a subject or part of a compound subject. '''
if synt.rel == 'nsubj':
return True
if synt.rel == 'appos':
found_head = find_synt(doc, synt.head_id)
if found_head.id in recursion_list:
return False
return is_subj(doc, found_head, recursion_list + [synt.id])
return False
def find_subjects_in_syntax(doc):
    ''' Returns a dictionary that records, for each NER, whether it acts as
    the subject of its sentence.
    Keys are NER start positions; values indicate whether it was a subject (or appos).
    '''
found_subjects = {}
current_synt_number = 0
current_tok_number = 0
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
found_subjects.update({span.start: 0})
diap = range(span.start, span.stop)
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc,
current_tok_number,
diap)
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens,
found_text, full_str)
found_subjects.update({span.start: 0})
for synt in found:
if is_subj(doc, synt):
found_subjects.update({span.start: 1})
return found_subjects
def entity_weight(lst, c=1):
return c*lst[0]+lst[1]
def determine_subject(found_subjects, doc, new_agency_list, return_best=True, threshold=0.75):
    ''' Determines the key NER and the list of the most important NERs, based on how many
    times each of them occurs in the text overall and how many times as the subject. '''
objects_arr = []
objects_arr_ners = []
should_continue = False
for span in doc.spans:
should_continue = False
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
if span.normal in new_agency_list:
continue
for i in range(len(objects_arr)):
t, lst = objects_arr[i]
if t.find(span.normal) >= 0:
lst[0] += 1
lst[1] += found_subjects[span.start]
should_continue = True
break
if span.normal.find(t) >= 0:
objects_arr[i] = (span.normal, [lst[0]+1, lst[1]+found_subjects[span.start]])
should_continue = True
break
if should_continue:
continue
objects_arr.append((span.normal, [1, found_subjects[span.start]]))
objects_arr_ners.append(span.normal)
max_weight = 0
opt_ent = 0
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight < w:
max_weight = w
opt_ent = t
if not return_best:
return opt_ent, objects_arr_ners
bests = []
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight*threshold < w:
bests.append(t)
return opt_ent, bests
text = '''В офисах Сбера начали тестировать технологию помощи посетителям в экстренных ситуациях. «Зеленая кнопка» будет
в зонах круглосуточного обслуживания офисов банка в Воронеже, Санкт-Петербурге, Подольске, Пскове, Орле и Ярославле.
В них находятся стенды с сенсорными кнопками, обеспечивающие связь с операторами центра мониторинга службы безопасности
банка. Получив сигнал о помощи, оператор центра может подключиться к объекту по голосовой связи. С помощью камер
видеонаблюдения он оценит обстановку и при необходимости вызовет полицию или скорую помощь. «Зеленой кнопкой» можно
воспользоваться в нерабочее для отделения время, если возникла угроза жизни или здоровью. В остальных случаях помочь
клиентам готовы сотрудники отделения банка. «Одно из направлений нашей работы в области ESG и устойчивого развития
— это забота об обществе. И здоровье людей как высшая ценность является его основой. Поэтому задача банка в области
безопасности гораздо масштабнее, чем обеспечение только финансовой безопасности клиентов. Этот пилотный проект
приурочен к 180-летию Сбербанка: мы хотим, чтобы, приходя в банк, клиент чувствовал, что его жизнь и безопасность
— наша ценность», — отметил заместитель председателя правления Сбербанка Станислав Кузнецов.'''
doc = analyze_doc(text)
key_entity = determine_subject(find_subjects_in_syntax(doc), doc, [])[0]
text_for_model = key_sentences_str(key_entity, doc, add_X=True, x_str='X', return_all=False)
```
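With the preprocessing above, the resulting `text_for_model` can then be scored by this classifier. The following is a minimal illustrative sketch (not part of the original pipeline); it assumes the fine-tuned checkpoint is loaded by its repository id and that the raw `LABEL_*` ids map to the sentiment labels described in the model examples.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Illustrative sketch: load the fine-tuned classifier by its repository id (an assumption).
model_name = 'chrommium/sbert_large-finetuned-sent_in_news_sents'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

clf = pipeline('text-classification', model=model, tokenizer=tokenizer)
# text_for_model is produced by the preprocessing example above
# (only the key sentences, with the key entity replaced by 'X').
print(clf(text_for_model))
```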
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
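For illustration only (this is not taken from the original training script), the settings listed above roughly correspond to the following `TrainingArguments`; the output directory name is a placeholder.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir='sbert_large-finetuned-sent_in_news_sents',  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    lr_scheduler_type='linear',
    num_train_epochs=20,
)
```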
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 176 | 0.9504 | 0.6903 | 0.2215 |
| No log | 2.0 | 352 | 0.9065 | 0.7159 | 0.4760 |
| 0.8448 | 3.0 | 528 | 0.9687 | 0.7045 | 0.4774 |
| 0.8448 | 4.0 | 704 | 1.2436 | 0.7045 | 0.4686 |
| 0.8448 | 5.0 | 880 | 1.4809 | 0.7273 | 0.4630 |
| 0.2074 | 6.0 | 1056 | 1.5866 | 0.7330 | 0.5185 |
| 0.2074 | 7.0 | 1232 | 1.7056 | 0.7301 | 0.5210 |
| 0.2074 | 8.0 | 1408 | 1.6982 | 0.7415 | 0.5056 |
| 0.0514 | 9.0 | 1584 | 1.8088 | 0.7273 | 0.5203 |
| 0.0514 | 10.0 | 1760 | 1.9250 | 0.7102 | 0.4879 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "sbert_large-finetuned-sent_in_news_sents", "results": []}]}
|
chrommium/sbert_large-finetuned-sent_in_news_sents
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
sbert\_large-finetuned-sent\_in\_news\_sents
============================================
This model is a fine-tuned version of sberbank-ai/sbert\_large\_nlu\_ru on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7056
* Accuracy: 0.7301
* F1: 0.5210
Model examples
--------------
The model responds to the label X in a news text. For example:
For 'Газпром отозвал лицензию у X, сообщает Финам' the model will return negative label -3
For 'X отозвал лицензию у Сбербанка, сообщает Финам' the model will return neutral label 0
For 'Газпром отозвал лицензию у Сбербанка, сообщает X' the model will return neutral label 0
For 'X демонстрирует высокую прибыль, сообщает Финам' the model will return positive label 1
Simple example of News preprocessing for Russian before BERT
------------------------------------------------------------
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 6
* eval\_batch\_size: 6
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents_3lab
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9443
- Accuracy: 0.8580
- F1: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 264 | 0.6137 | 0.8608 | 0.3084 |
| 0.524 | 2.0 | 528 | 0.6563 | 0.8722 | 0.4861 |
| 0.524 | 3.0 | 792 | 0.7110 | 0.8494 | 0.4687 |
| 0.2225 | 4.0 | 1056 | 0.7323 | 0.8608 | 0.6015 |
| 0.2225 | 5.0 | 1320 | 0.9604 | 0.8551 | 0.6185 |
| 0.1037 | 6.0 | 1584 | 0.8801 | 0.8523 | 0.5535 |
| 0.1037 | 7.0 | 1848 | 0.9443 | 0.8580 | 0.6199 |
| 0.0479 | 8.0 | 2112 | 1.0048 | 0.8608 | 0.6168 |
| 0.0479 | 9.0 | 2376 | 0.9757 | 0.8551 | 0.6097 |
| 0.0353 | 10.0 | 2640 | 1.0743 | 0.8580 | 0.6071 |
| 0.0353 | 11.0 | 2904 | 1.1216 | 0.8580 | 0.6011 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "sbert_large-finetuned-sent_in_news_sents_3lab", "results": []}]}
|
chrommium/sbert_large-finetuned-sent_in_news_sents_3lab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
sbert\_large-finetuned-sent\_in\_news\_sents\_3lab
==================================================
This model is a fine-tuned version of sberbank-ai/sbert\_large\_nlu\_ru on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9443
* Accuracy: 0.8580
* F1: 0.6199
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 17
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 17",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 17",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-sent_in_news
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8872
- Accuracy: 0.7273
- F1: 0.5125
## Model description
The model is asymmetric: it responds to the label X in the news text.
Try the following examples:
a) Агентство X понизило рейтинг банка Fitch.
b) Агентство Fitch понизило рейтинг банка X.
a) Компания Финам показала рекордную прибыль, говорят аналитики компании X.
b) Компания X показала рекордную прибыль, говорят аналитики компании Финам.
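A minimal usage sketch (not from the original card) for trying the examples above; loading the checkpoint through the `text-classification` pipeline is an assumption, and the pipeline returns the model's raw label ids.

```python
from transformers import pipeline

# Hypothetical usage sketch for the asymmetric sentiment model described above.
clf = pipeline('text-classification', model='chrommium/xlm-roberta-large-finetuned-sent_in_news')

examples = [
    'Агентство X понизило рейтинг банка Fitch.',
    'Агентство Fitch понизило рейтинг банка X.',
]
for sentence, prediction in zip(examples, clf(examples)):
    print(sentence, '->', prediction)
```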
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 106 | 1.2526 | 0.6108 | 0.1508 |
| No log | 2.0 | 212 | 1.1553 | 0.6648 | 0.1141 |
| No log | 3.0 | 318 | 1.1150 | 0.6591 | 0.1247 |
| No log | 4.0 | 424 | 1.0007 | 0.6705 | 0.1383 |
| 1.1323 | 5.0 | 530 | 0.9267 | 0.6733 | 0.2027 |
| 1.1323 | 6.0 | 636 | 1.0869 | 0.6335 | 0.4084 |
| 1.1323 | 7.0 | 742 | 1.1224 | 0.6932 | 0.4586 |
| 1.1323 | 8.0 | 848 | 1.2535 | 0.6307 | 0.3424 |
| 1.1323 | 9.0 | 954 | 1.4288 | 0.6932 | 0.4881 |
| 0.5252 | 10.0 | 1060 | 1.5856 | 0.6932 | 0.4739 |
| 0.5252 | 11.0 | 1166 | 1.7101 | 0.6733 | 0.4530 |
| 0.5252 | 12.0 | 1272 | 1.7330 | 0.6903 | 0.4750 |
| 0.5252 | 13.0 | 1378 | 1.8872 | 0.7273 | 0.5125 |
| 0.5252 | 14.0 | 1484 | 1.8797 | 0.7301 | 0.5033 |
| 0.1252 | 15.0 | 1590 | 1.9339 | 0.7330 | 0.5024 |
| 0.1252 | 16.0 | 1696 | 1.9632 | 0.7301 | 0.4967 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "xlm-roberta-large-finetuned-sent_in_news", "results": []}]}
|
chrommium/xlm-roberta-large-finetuned-sent_in_news
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
xlm-roberta-large-finetuned-sent\_in\_news
==========================================
This model is a fine-tuned version of xlm-roberta-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8872
* Accuracy: 0.7273
* F1: 0.5125
Model description
-----------------
The model is asymmetric: it responds to the label X in the news text.
Try the following examples:
a) Агентство X понизило рейтинг банка Fitch.
b) Агентство Fitch понизило рейтинг банка X.
a) Компания Финам показала рекордную прибыль, говорят аналитики компании X.
b) Компания X показала рекордную прибыль, говорят аналитики компании Финам.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 10
* eval\_batch\_size: 10
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 16
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 16",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 16",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
[blenderbot-400M-distill](https://huggingface.co/facebook/blenderbot-400M-distill) fine-tuned on the [ESConv dataset](https://github.com/thu-coai/Emotional-Support-Conversation). Usage example:
```python
import torch
from transformers import AutoTokenizer
from transformers.models.blenderbot import BlenderbotTokenizer, BlenderbotForConditionalGeneration
def _norm(x):
return ' '.join(x.strip().split())
tokenizer = BlenderbotTokenizer.from_pretrained('thu-coai/blenderbot-400M-esconv')
model = BlenderbotForConditionalGeneration.from_pretrained('thu-coai/blenderbot-400M-esconv')
model.eval()
utterances = [
"I am having a lot of anxiety about quitting my current job. It is too stressful but pays well",
"What makes your job stressful for you?",
"I have to deal with many people in hard financial situations and it is upsetting",
"Do you help your clients to make it to a better financial situation?",
"I do, but often they are not going to get back to what they want. Many people are going to lose their home when safeguards are lifted",
]
input_sequence = ' '.join([' ' + e for e in utterances]) + tokenizer.eos_token # add space prefix and separate utterances with two spaces
input_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(input_sequence))[-128:]
input_ids = torch.LongTensor([input_ids])
model_output = model.generate(input_ids, num_beams=1, do_sample=True, top_p=0.9, num_return_sequences=5, return_dict=False)
generation = tokenizer.batch_decode(model_output, skip_special_tokens=True)
generation = [_norm(e) for e in generation]
print(generation)
utterances.append(generation[0]) # for future loop
```
Please kindly cite the [original paper](https://arxiv.org/abs/2106.01144) if you use this model:
```bib
@inproceedings{liu-etal-2021-towards,
title={Towards Emotional Support Dialog Systems},
author={Liu, Siyang and
Zheng, Chujie and
Demasi, Orianna and
Sabour, Sahand and
Li, Yu and
Yu, Zhou and
Jiang, Yong and
Huang, Minlie},
booktitle={Proceedings of the 59th annual meeting of the Association for Computational Linguistics},
year={2021}
}
```
|
{"language": ["en"], "tags": ["pytorch", "coai"], "pipeline_tag": "conversational"}
|
thu-coai/blenderbot-400M-esconv
| null |
[
"transformers",
"pytorch",
"safetensors",
"blenderbot",
"text2text-generation",
"coai",
"conversational",
"en",
"arxiv:2106.01144",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.01144"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #blenderbot #text2text-generation #coai #conversational #en #arxiv-2106.01144 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
blenderbot-400M-distill fine-tuned on the ESConv dataset. Usage example:
Please kindly cite the original paper if you use this model:
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #blenderbot #text2text-generation #coai #conversational #en #arxiv-2106.01144 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
## EnDR-BERT
EnDR-BERT - Multilingual, Cased, pretrained on the English collection of consumer comments on drug administration from [2]. Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was used for initialization, and all the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: https://yadi.sk/d/-PTn0xhk1PqvgQ
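A minimal loading sketch (not part of the original card), assuming the checkpoint is available on the Hugging Face Hub as `cimm-kzn/endr-bert` and is used only to extract contextual embeddings:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: the weights are published under this repository id.
tokenizer = AutoTokenizer.from_pretrained('cimm-kzn/endr-bert')
model = AutoModel.from_pretrained('cimm-kzn/endr-bert')

inputs = tokenizer('This drug gave me a terrible headache.', return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```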
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020.
preprint: https://arxiv.org/abs/2004.03659
```
@article{10.1093/bioinformatics/btaa675,
author = {Tutubalina, Elena and Alimova, Ilseyar and Miftahutdinov, Zulfat and Sakhovskiy, Andrey and Malykh, Valentin and Nikolenko, Sergey},
title = "{The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews}",
journal = {Bioinformatics},
year = {2020},
month = {07},
issn = {1367-4803},
doi = {10.1093/bioinformatics/btaa675},
url = {https://doi.org/10.1093/bioinformatics/btaa675},
note = {btaa675},
eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btaa675/33539752/btaa675.pdf},
}
```
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.
[link to paper](https://www.researchgate.net/profile/Elena_Tutubalina/publication/323751823_Using_semantic_analysis_of_texts_for_the_identification_of_drugs_with_similar_therapeutic_effects/links/5bf7cfc3299bf1a0202cbc1f/Using-semantic-analysis-of-texts-for-the-identification-of-drugs-with-similar-therapeutic-effects.pdf)
```
@article{tutubalina2017using,
title={Using semantic analysis of texts for the identification of drugs with similar therapeutic effects},
author={Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE},
journal={Russian Chemical Bulletin},
volume={66},
number={11},
pages={2180--2189},
year={2017},
publisher={Springer}
}
```
|
{"language": ["ru", "en"], "tags": ["bio", "med", "biomedical"]}
|
cimm-kzn/endr-bert
| null |
[
"transformers",
"pytorch",
"bio",
"med",
"biomedical",
"ru",
"en",
"arxiv:2004.03659",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2004.03659"
] |
[
"ru",
"en"
] |
TAGS
#transformers #pytorch #bio #med #biomedical #ru #en #arxiv-2004.03659 #endpoints_compatible #region-us
|
## EnDR-BERT
EnDR-BERT - Multilingual, Cased, pretrained on the English collection of consumer comments on drug administration from [2]. Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was used for initialization, and all the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: URL
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020.
preprint: URL
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.
link to paper
|
[
"## EnDR-BERT\n\n EnDR-BERT - Multilingual, Cased, which pretrained on the english collection of consumer comments on drug administration from [2]. Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was for used for initialization and all the parameters are the same as in Multi-BERT. Training details are described in our paper. \\\n link: URL\n\n \n ## Citing & Authors\n\n If you find this repository helpful, feel free to cite our publication:\n\n [1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020. \n\n preprint: URL\n \n [2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.\n link to paper"
] |
[
"TAGS\n#transformers #pytorch #bio #med #biomedical #ru #en #arxiv-2004.03659 #endpoints_compatible #region-us \n",
"## EnDR-BERT\n\n EnDR-BERT - Multilingual, Cased, which pretrained on the english collection of consumer comments on drug administration from [2]. Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was for used for initialization and all the parameters are the same as in Multi-BERT. Training details are described in our paper. \\\n link: URL\n\n \n ## Citing & Authors\n\n If you find this repository helpful, feel free to cite our publication:\n\n [1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020. \n\n preprint: URL\n \n [2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.\n link to paper"
] |
null |
transformers
|
## EnRuDR-BERT
EnRuDR-BERT - Multilingual, Cased, pretrained on the raw part of the RuDReC corpus (1.4M reviews) and the English collection of consumer comments on drug administration from [2]. Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was used for initialization; the vocabulary of Russian subtokens and the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: https://yadi.sk/d/-PTn0xhk1PqvgQ
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020.
preprint: https://arxiv.org/abs/2004.03659
```
@article{10.1093/bioinformatics/btaa675,
author = {Tutubalina, Elena and Alimova, Ilseyar and Miftahutdinov, Zulfat and Sakhovskiy, Andrey and Malykh, Valentin and Nikolenko, Sergey},
title = "{The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews}",
journal = {Bioinformatics},
year = {2020},
month = {07},
issn = {1367-4803},
doi = {10.1093/bioinformatics/btaa675},
url = {https://doi.org/10.1093/bioinformatics/btaa675},
note = {btaa675},
eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btaa675/33539752/btaa675.pdf},
}
```
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.
[link to paper](https://www.researchgate.net/profile/Elena_Tutubalina/publication/323751823_Using_semantic_analysis_of_texts_for_the_identification_of_drugs_with_similar_therapeutic_effects/links/5bf7cfc3299bf1a0202cbc1f/Using-semantic-analysis-of-texts-for-the-identification-of-drugs-with-similar-therapeutic-effects.pdf)
```
@article{tutubalina2017using,
title={Using semantic analysis of texts for the identification of drugs with similar therapeutic effects},
author={Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE},
journal={Russian Chemical Bulletin},
volume={66},
number={11},
pages={2180--2189},
year={2017},
publisher={Springer}
}
```
|
{"language": ["ru", "en"], "tags": ["bio", "med", "biomedical"]}
|
cimm-kzn/enrudr-bert
| null |
[
"transformers",
"pytorch",
"bio",
"med",
"biomedical",
"ru",
"en",
"arxiv:2004.03659",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2004.03659"
] |
[
"ru",
"en"
] |
TAGS
#transformers #pytorch #bio #med #biomedical #ru #en #arxiv-2004.03659 #endpoints_compatible #region-us
|
## EnRuDR-BERT
EnRuDR-BERT - Multilingual, Cased, pretrained on the raw part of the RuDReC corpus (1.4M reviews) and the English collection of consumer comments on drug administration from [2]. Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was used for initialization; the vocabulary of Russian subtokens and the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: URL
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020.
preprint: URL
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.
link to paper
|
[
"## EnRuDR-BERT\n\nEnRuDR-BERT - Multilingual, Cased, which pretrained on the raw part of the RuDReC corpus (1.4M reviews) and english collection of consumer comments on drug administration from [2]. Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was for used for initialization; vocabulary of Russian subtokens and parameters are the same as in Multi-BERT. Training details are described in our paper. \\\n link: URL",
"## Citing & Authors\n\nIf you find this repository helpful, feel free to cite our publication:\n\n[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020. \n \n preprint: URL\n\n[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.\n link to paper"
] |
[
"TAGS\n#transformers #pytorch #bio #med #biomedical #ru #en #arxiv-2004.03659 #endpoints_compatible #region-us \n",
"## EnRuDR-BERT\n\nEnRuDR-BERT - Multilingual, Cased, which pretrained on the raw part of the RuDReC corpus (1.4M reviews) and english collection of consumer comments on drug administration from [2]. Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was for used for initialization; vocabulary of Russian subtokens and parameters are the same as in Multi-BERT. Training details are described in our paper. \\\n link: URL",
"## Citing & Authors\n\nIf you find this repository helpful, feel free to cite our publication:\n\n[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020. \n \n preprint: URL\n\n[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.\n link to paper"
] |
null |
transformers
|
## RuDR-BERT
RuDR-BERT - Multilingual, Cased, pretrained on the raw part of the RuDReC corpus (1.4M reviews). Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was used for initialization; the vocabulary of Russian subtokens and the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: https://yadi.sk/d/-PTn0xhk1PqvgQ
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.
preprint: https://arxiv.org/abs/2004.03659
```
@article{10.1093/bioinformatics/btaa675,
author = {Tutubalina, Elena and Alimova, Ilseyar and Miftahutdinov, Zulfat and Sakhovskiy, Andrey and Malykh, Valentin and Nikolenko, Sergey},
title = "{The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews}",
journal = {Bioinformatics},
year = {2020},
month = {07},
issn = {1367-4803},
doi = {10.1093/bioinformatics/btaa675},
url = {https://doi.org/10.1093/bioinformatics/btaa675},
note = {btaa675},
eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btaa675/33539752/btaa675.pdf},
}
```
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.
[link to paper](https://www.researchgate.net/profile/Elena_Tutubalina/publication/323751823_Using_semantic_analysis_of_texts_for_the_identification_of_drugs_with_similar_therapeutic_effects/links/5bf7cfc3299bf1a0202cbc1f/Using-semantic-analysis-of-texts-for-the-identification-of-drugs-with-similar-therapeutic-effects.pdf)
```
@article{tutubalina2017using,
title={Using semantic analysis of texts for the identification of drugs with similar therapeutic effects},
author={Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE},
journal={Russian Chemical Bulletin},
volume={66},
number={11},
pages={2180--2189},
year={2017},
publisher={Springer}
}
```
|
{"language": ["ru"], "tags": ["bio", "med", "biomedical"]}
|
cimm-kzn/rudr-bert
| null |
[
"transformers",
"pytorch",
"bio",
"med",
"biomedical",
"ru",
"arxiv:2004.03659",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2004.03659"
] |
[
"ru"
] |
TAGS
#transformers #pytorch #bio #med #biomedical #ru #arxiv-2004.03659 #endpoints_compatible #region-us
|
## RuDR-BERT
RuDR-BERT - Multilingual, Cased, pretrained on the raw part of the RuDReC corpus (1.4M reviews). Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was used for initialization; the vocabulary of Russian subtokens and the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: URL
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.
preprint: URL
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.
link to paper
|
[
"## RuDR-BERT\n\nRuDR-BERT - Multilingual, Cased, which pretrained on the raw part of the RuDReC corpus (1.4M reviews). Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was for used for initialization; vocabulary of Russian subtokens and parameters are the same as in Multi-BERT. Training details are described in our paper. \\\n link: URL",
"## Citing & Authors\n\nIf you find this repository helpful, feel free to cite our publication:\n\n[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews. \n \n preprint: URL\n\n[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.\n link to paper"
] |
[
"TAGS\n#transformers #pytorch #bio #med #biomedical #ru #arxiv-2004.03659 #endpoints_compatible #region-us \n",
"## RuDR-BERT\n\nRuDR-BERT - Multilingual, Cased, which pretrained on the raw part of the RuDReC corpus (1.4M reviews). Pre-training was based on the original BERT code provided by Google. In particular, Multi-BERT was for used for initialization; vocabulary of Russian subtokens and parameters are the same as in Multi-BERT. Training details are described in our paper. \\\n link: URL",
"## Citing & Authors\n\nIf you find this repository helpful, feel free to cite our publication:\n\n[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews. \n \n preprint: URL\n\n[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.\n link to paper"
] |
null | null |
End-2-End with English
|
{}
|
cjcu/End2End-asr
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
End-2-End with English
|
[] |
[
"TAGS\n#region-us \n"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta_base-finetuned-tydiqa
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
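A minimal usage sketch (not part of the original card): the model targets Swahili (sw) extractive question answering, so the hypothetical example below only illustrates the call pattern.

```python
from transformers import pipeline

# Hypothetical example; real inputs should be Swahili question/context pairs as in TyDi QA.
qa = pipeline('question-answering', model='cjrowe/afriberta_base-finetuned-tydiqa')

result = qa(
    question='Nairobi ni mji mkuu wa nchi gani?',
    context='Nairobi ni mji mkuu na mji mkubwa zaidi wa Kenya.',
)
print(result['answer'], result['score'])
```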
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 192 | 2.1359 |
| No log | 2.0 | 384 | 2.3409 |
| 0.8353 | 3.0 | 576 | 2.3728 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"language": ["sw"], "tags": ["generated_from_trainer"], "datasets": ["tydiqa"], "model-index": [{"name": "afriberta_base-finetuned-tydiqa", "results": []}]}
|
cjrowe/afriberta_base-finetuned-tydiqa
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"sw",
"dataset:tydiqa",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sw"
] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #question-answering #generated_from_trainer #sw #dataset-tydiqa #endpoints_compatible #region-us
|
afriberta\_base-finetuned-tydiqa
================================
This model is a fine-tuned version of castorini/afriberta\_base on the tydiqa dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3728
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #question-answering #generated_from_trainer #sw #dataset-tydiqa #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlu_sherlock_model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -947, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
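For illustration only (not from the original training script), an optimizer with this configuration could be rebuilt with `transformers.create_optimizer` (TensorFlow); the step counts below are inferred from the config above (decay_steps = num_train_steps - warmup_steps = -947).

```python
from transformers import create_optimizer

# Hypothetical reconstruction: AdamWeightDecay with a 1000-step warmup and
# polynomial (power=1.0, i.e. linear) decay, weight decay rate 0.01.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=53,      # inferred: 1000 + (-947)
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```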
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "nlu_sherlock_model", "results": []}]}
|
ckenlam/nlu_sherlock_model
| null |
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #roberta #fill-mask #generated_from_keras_callback #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# nlu_sherlock_model
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -947, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# nlu_sherlock_model\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -947, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #roberta #fill-mask #generated_from_keras_callback #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# nlu_sherlock_model\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -947, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlu_sherlock_model_20220220
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -955, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "nlu_sherlock_model_20220220", "results": []}]}
|
ckenlam/nlu_sherlock_model_20220220
| null |
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #roberta #fill-mask #generated_from_keras_callback #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# nlu_sherlock_model_20220220
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -955, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# nlu_sherlock_model_20220220\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -955, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #roberta #fill-mask #generated_from_keras_callback #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# nlu_sherlock_model_20220220\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -955, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ner')
```
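As an additional sketch (not part of the original card), the NER checkpoint can also be wrapped in a token-classification pipeline; the aggregation setting below is an assumption.

```
from transformers import BertTokenizerFast, AutoModelForTokenClassification, pipeline

# Hypothetical NER usage sketch; the tokenizer still comes from 'bert-base-chinese'.
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-base-chinese-ner')

ner = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy='simple')
print(ner('中央研究院資訊科學研究所位於台北市。'))
```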
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "albert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/albert-base-chinese-ner
| null |
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributors
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP ALBERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CKIP ALBERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
token-classification
|
transformers
|
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-pos')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
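As a small aside that is not official usage, the part-of-speech tag inventory shipped with this checkpoint can be inspected through the standard config fields before running anything:
```
from transformers import AutoModelForTokenClassification

# Load the POS checkpoint with its token-classification head and list its label set.
model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-base-chinese-pos')
print(len(model.config.id2label))                   # number of POS labels
print(sorted(model.config.id2label.values())[:10])  # a few label names
```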
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "albert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/albert-base-chinese-pos
| null |
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP ALBERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CKIP ALBERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
token-classification
|
transformers
|
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ws')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
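To make the word-segmentation output concrete, here is a rough sketch (not from the original card) that stitches the per-character predictions back into words, assuming the checkpoint's B/I labelling where B opens a new word:
```
from transformers import BertTokenizerFast, AutoModelForTokenClassification, pipeline

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-base-chinese-ws')
ws = pipeline('token-classification', model=model, tokenizer=tokenizer)

words, current = [], ''
for item in ws('今天天氣真好'):
    # 'B' marks the first character of a word, 'I' a continuation (assumed label scheme).
    if item['entity'] == 'B' and current:
        words.append(current)
        current = ''
    current += item['word']
if current:
    words.append(current)
print(words)
```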
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "albert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/albert-base-chinese-ws
| null |
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP ALBERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CKIP ALBERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
fill-mask
|
transformers
|
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
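Since this checkpoint keeps the masked-LM head, a small fill-mask sketch is possible (an illustration added here, not part of the original card):
```
from transformers import BertTokenizerFast, AutoModelForMaskedLM, pipeline

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForMaskedLM.from_pretrained('ckiplab/albert-base-chinese')

# Rank candidate characters for the masked position.
fill = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for candidate in fill('今天天氣真[MASK]。'):
    print(candidate['token_str'], round(candidate['score'], 3))
```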
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "lm-head", "albert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/albert-base-chinese
| null |
[
"transformers",
"pytorch",
"albert",
"fill-mask",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #fill-mask #lm-head #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP ALBERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #albert #fill-mask #lm-head #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CKIP ALBERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
token-classification
|
transformers
|
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese-ner')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "albert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/albert-tiny-chinese-ner
| null |
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP ALBERT Tiny Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CKIP ALBERT Tiny Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
token-classification
|
transformers
|
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese-pos')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "albert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/albert-tiny-chinese-pos
| null |
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP ALBERT Tiny Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CKIP ALBERT Tiny Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
token-classification
|
transformers
|
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese-ws')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "albert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/albert-tiny-chinese-ws
| null |
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP ALBERT Tiny Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #albert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CKIP ALBERT Tiny Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
fill-mask
|
transformers
|
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "lm-head", "albert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/albert-tiny-chinese
| null |
[
"transformers",
"pytorch",
"albert",
"fill-mask",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #fill-mask #lm-head #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP ALBERT Tiny Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #albert #fill-mask #lm-head #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CKIP ALBERT Tiny Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
token-classification
|
transformers
|
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-base-chinese-ner')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "bert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/bert-base-chinese-ner
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #bert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP BERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CKIP BERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
token-classification
|
transformers
|
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-base-chinese-pos')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "bert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/bert-base-chinese-pos
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #bert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP BERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CKIP BERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
token-classification
|
transformers
|
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-base-chinese-ws')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "token-classification", "bert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/bert-base-chinese-ws
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #bert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP BERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #token-classification #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CKIP BERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
fill-mask
|
transformers
|
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-base-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "lm-head", "bert", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/bert-base-chinese
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #lm-head #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP BERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #lm-head #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CKIP BERT Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
text-generation
|
transformers
|
# CKIP GPT2 Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/gpt2-base-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
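For completeness, here is a minimal text-generation sketch (not from the original card; the prompt and the decoding settings are arbitrary):
```
from transformers import BertTokenizerFast, AutoModelForCausalLM

# The GPT-2 checkpoint is paired with the Chinese BERT tokenizer, as recommended above.
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForCausalLM.from_pretrained('ckiplab/gpt2-base-chinese')

inputs = tokenizer('今天天氣真好,', return_tensors='pt')
outputs = model.generate(inputs['input_ids'], max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```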
|
{"language": ["zh"], "license": "gpl-3.0", "tags": ["pytorch", "lm-head", "gpt2", "zh"], "thumbnail": "https://ckip.iis.sinica.edu.tw/files/ckip_logo.png"}
|
ckiplab/gpt2-base-chinese
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #lm-head #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# CKIP GPT2 Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- URL
## Contributers
- Mu Yang at CKIP (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
For full usage and more information, please refer to URL
有關完整使用方法及其他資訊,請參見 URL 。
|
[
"# CKIP GPT2 Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #lm-head #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# CKIP GPT2 Base Chinese\n\nThis project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).\n\n這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。",
"## Homepage\n\n- URL",
"## Contributers\n\n- Mu Yang at CKIP (Author & Maintainer)",
"## Usage\n\nPlease use BertTokenizerFast as tokenizer instead of AutoTokenizer.\n\n請使用 BertTokenizerFast 而非 AutoTokenizer。\n\n\n\nFor full usage and more information, please refer to URL\n\n有關完整使用方法及其他資訊,請參見 URL 。"
] |
fill-mask
|
transformers
|
# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.
The vocabulary size is 6144.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
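As a usage illustration that the card itself omits, the checkpoint can be exercised through the fill-mask pipeline. This is a sketch under the assumption that the fugashi and unidic-lite packages mentioned above are installed; the repository id is the one listed for this record, and the example sentence comes from the widget entry in its metadata.
```
from transformers import pipeline

# The tokenizer loaded here is the Japanese BERT tokenizer, which needs fugashi and unidic-lite.
fill = pipeline('fill-mask', model='tohoku-nlp/bert-base-japanese-char-v2')
for candidate in fill('東北大学で[MASK]の研究をしています。'):
    print(candidate['token_str'], round(candidate['score'], 3))
```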
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
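To make the whole-word-masking step concrete, here is a toy illustration (not taken from the actual pretraining code) of masking every character-level token belonging to one MeCab word at once:
```
import random

# Character tokens grouped by the MeCab word they came from (a made-up example).
words = [['東', '北'], ['大', '学'], ['で'], ['研', '究']]

# Whole word masking: pick one word and mask all of its tokens together.
target = random.randrange(len(words))
masked = [['[MASK]'] * len(toks) if i == target else toks for i, toks in enumerate(words)]
print([tok for toks in masked for tok in toks])
```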
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"], "widget": [{"text": "\u6771\u5317\u5927\u5b66\u3067[MASK]\u306e\u7814\u7a76\u3092\u3057\u3066\u3044\u307e\u3059\u3002"}]}
|
tohoku-nlp/bert-base-japanese-char-v2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)
This is a BERT model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at cl-tohoku/bert-japanese.
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.
The vocabulary size is 6144.
We used 'fugashi' and 'unidic-lite' packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.
## Acknowledgments
This model is trained with Cloud TPUs provided by TensorFlow Research Cloud program.
|
[
"# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by character-level tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe models are trained on the Japanese version of Wikipedia.\nThe training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.\n\nThe generated corpus files are 4.0GB in total, containing approximately 30M sentences.\nWe used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.\nThe vocabulary size is 6144.\n\nWe used 'fugashi' and 'unidic-lite' packages for the tokenization.",
"## Training\n\nThe models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\nFor training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.\n\nFor training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.\nThe training took about 5 days to finish.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nThis model is trained with Cloud TPUs provided by TensorFlow Research Cloud program."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by character-level tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe models are trained on the Japanese version of Wikipedia.\nThe training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.\n\nThe generated corpus files are 4.0GB in total, containing approximately 30M sentences.\nWe used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.\nThe vocabulary size is 6144.\n\nWe used 'fugashi' and 'unidic-lite' packages for the tokenization.",
"## Training\n\nThe models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\nFor training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.\n\nFor training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.\nThe training took about 5 days to finish.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nThis model is trained with Cloud TPUs provided by TensorFlow Research Cloud program."
] |
fill-mask
|
transformers
|
# BERT base Japanese (character tokenization, whole word masking enabled)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into characters.
The vocabulary size is 4000.
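To see the character-level behaviour described here, a quick tokenization sketch follows (not part of the original card; it assumes the fugashi and ipadic packages that current transformers versions need for the MeCab word tokenizer, and it uses the repository id listed for this record):
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('tohoku-nlp/bert-base-japanese-char-whole-word-masking')

# Words are found with MeCab first, then each word is split into single characters.
print(tokenizer.tokenize('仙台は「杜の都」と呼ばれている。'))
```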
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For the training of the MLM (masked language modeling) objective, we introduced the **Whole Word Masking** in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"], "widget": [{"text": "\u4ed9\u53f0\u306f\u300c[MASK]\u306e\u90fd\u300d\u3068\u547c\u3070\u308c\u3066\u3044\u308b\u3002"}]}
|
tohoku-nlp/bert-base-japanese-char-whole-word-masking
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT base Japanese (character tokenization, whole word masking enabled)
This is a BERT model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at cl-tohoku/bert-japanese.
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into characters.
The vocabulary size is 4000.
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For the training of the MLM (masked language modeling) objective, we introduced the Whole Word Masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
## Licenses
The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.
## Acknowledgments
For training models, we used Cloud TPUs provided by TensorFlow Research Cloud program.
|
[
"# BERT base Japanese (character tokenization, whole word masking enabled)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe model is trained on Japanese Wikipedia as of September 1, 2019.\nTo generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.\nThe text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into characters.\nThe vocabulary size is 4000.",
"## Training\n\nThe model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\n\nFor the training of the MLM (masked language modeling) objective, we introduced the Whole Word Masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nFor training models, we used Cloud TPUs provided by TensorFlow Research Cloud program."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT base Japanese (character tokenization, whole word masking enabled)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe model is trained on Japanese Wikipedia as of September 1, 2019.\nTo generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.\nThe text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into characters.\nThe vocabulary size is 4000.",
"## Training\n\nThe model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\n\nFor the training of the MLM (masked language modeling) objective, we introduced the Whole Word Masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nFor training models, we used Cloud TPUs provided by TensorFlow Research Cloud program."
] |
fill-mask
|
transformers
|
# BERT base Japanese (character tokenization)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into characters.
The vocabulary size is 4000.
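For illustration, here is a minimal fill-mask sketch (an added example, not part of the original card). It assumes the `transformers` library plus a MeCab binding such as `fugashi` with the `ipadic` dictionary package are installed, and uses the widget sentence of this repository.
```python
from transformers import pipeline

# Character-level Japanese BERT; the tokenizer relies on MeCab (e.g. fugashi + ipadic).
fill_mask = pipeline("fill-mask", model="tohoku-nlp/bert-base-japanese-char")

for prediction in fill_mask("仙台は「[MASK]の都」と呼ばれている。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```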
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"], "widget": [{"text": "\u4ed9\u53f0\u306f\u300c[MASK]\u306e\u90fd\u300d\u3068\u547c\u3070\u308c\u3066\u3044\u308b\u3002"}]}
|
tohoku-nlp/bert-base-japanese-char
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT base Japanese (character tokenization)
This is a BERT model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.
The codes for the pretraining are available at cl-tohoku/bert-japanese.
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into characters.
The vocabulary size is 4000.
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
## Licenses
The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.
## Acknowledgments
For training models, we used Cloud TPUs provided by TensorFlow Research Cloud program.
|
[
"# BERT base Japanese (character tokenization)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe model is trained on Japanese Wikipedia as of September 1, 2019.\nTo generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.\nThe text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into characters.\nThe vocabulary size is 4000.",
"## Training\n\nThe model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nFor training models, we used Cloud TPUs provided by TensorFlow Research Cloud program."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT base Japanese (character tokenization)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe model is trained on Japanese Wikipedia as of September 1, 2019.\nTo generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.\nThe text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into characters.\nThe vocabulary size is 4000.",
"## Training\n\nThe model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nFor training models, we used Cloud TPUs provided by TensorFlow Research Cloud program."
] |
fill-mask
|
transformers
|
# BERT base Japanese (unidic-lite with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
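As a rough, added illustration of this two-stage tokenization (assuming the packages above are installed; the example sentence is arbitrary), the snippet below prints the resulting subword pieces:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tohoku-nlp/bert-base-japanese-v2")
# MeCab (Unidic 2.1.2 via unidic-lite) word segmentation, then WordPiece subwords.
print(tokenizer.tokenize("東北大学で自然言語処理の研究をしています。"))
```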
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"], "widget": [{"text": "\u6771\u5317\u5927\u5b66\u3067[MASK]\u306e\u7814\u7a76\u3092\u3057\u3066\u3044\u307e\u3059\u3002"}]}
|
tohoku-nlp/bert-base-japanese-v2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# BERT base Japanese (unidic-lite with whole word masking, jawiki-20200831)
This is a BERT model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at cl-tohoku/bert-japanese.
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
We used 'fugashi' and 'unidic-lite' packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.
## Acknowledgments
This model is trained with Cloud TPUs provided by TensorFlow Research Cloud program.
|
[
"# BERT base Japanese (unidic-lite with whole word masking, jawiki-20200831)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by the WordPiece subword tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe models are trained on the Japanese version of Wikipedia.\nThe training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.\n\nThe generated corpus files are 4.0GB in total, containing approximately 30M sentences.\nWe used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.\nThe vocabulary size is 32768.\n\nWe used 'fugashi' and 'unidic-lite' packages for the tokenization.",
"## Training\n\nThe models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\nFor training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.\n\nFor training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.\nThe training took about 5 days to finish.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nThis model is trained with Cloud TPUs provided by TensorFlow Research Cloud program."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# BERT base Japanese (unidic-lite with whole word masking, jawiki-20200831)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by the WordPiece subword tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe models are trained on the Japanese version of Wikipedia.\nThe training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.\n\nThe generated corpus files are 4.0GB in total, containing approximately 30M sentences.\nWe used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.\nThe vocabulary size is 32768.\n\nWe used 'fugashi' and 'unidic-lite' packages for the tokenization.",
"## Training\n\nThe models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\nFor training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.\n\nFor training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.\nThe training took about 5 days to finish.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nThis model is trained with Cloud TPUs provided by TensorFlow Research Cloud program."
] |
fill-mask
|
transformers
|
# BERT base Japanese (IPA dictionary, whole word masking enabled)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32000.
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For the training of the MLM (masked language modeling) objective, we introduced the **Whole Word Masking** in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
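For intuition only, the toy sketch below (an added illustration with a hypothetical helper, not the actual pretraining code) shows the grouping step: once a MeCab word is selected for masking, all of its WordPiece pieces are replaced by the mask token together.
```python
import random

def whole_word_mask(mecab_words, tokenizer, mask_prob=0.15):
    """Toy whole-word masking: every subword piece of a selected word is masked at once."""
    pieces_out = []
    for word in mecab_words:
        pieces = tokenizer.tokenize(word)  # WordPiece pieces of one MeCab word
        if random.random() < mask_prob:
            pieces = [tokenizer.mask_token] * len(pieces)  # mask the whole word
        pieces_out.extend(pieces)
    return pieces_out
```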
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"], "widget": [{"text": "\u6771\u5317\u5927\u5b66\u3067[MASK]\u306e\u7814\u7a76\u3092\u3057\u3066\u3044\u307e\u3059\u3002"}]}
|
tohoku-nlp/bert-base-japanese-whole-word-masking
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT base Japanese (IPA dictionary, whole word masking enabled)
This is a BERT model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at cl-tohoku/bert-japanese.
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32000.
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For the training of the MLM (masked language modeling) objective, we introduced the Whole Word Masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
## Licenses
The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.
## Acknowledgments
For training models, we used Cloud TPUs provided by TensorFlow Research Cloud program.
|
[
"# BERT base Japanese (IPA dictionary, whole word masking enabled)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe model is trained on Japanese Wikipedia as of September 1, 2019.\nTo generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.\nThe text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.\nThe vocabulary size is 32000.",
"## Training\n\nThe model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\n\nFor the training of the MLM (masked language modeling) objective, we introduced the Whole Word Masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nFor training models, we used Cloud TPUs provided by TensorFlow Research Cloud program."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT base Japanese (IPA dictionary, whole word masking enabled)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe model is trained on Japanese Wikipedia as of September 1, 2019.\nTo generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.\nThe text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.\nThe vocabulary size is 32000.",
"## Training\n\nThe model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\n\nFor the training of the MLM (masked language modeling) objective, we introduced the Whole Word Masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nFor training models, we used Cloud TPUs provided by TensorFlow Research Cloud program."
] |
fill-mask
|
transformers
|
# BERT base Japanese (IPA dictionary)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32000.
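As an added, hedged check (assuming `transformers`, `fugashi`, and `ipadic` are installed), the stated vocabulary size can be read directly from the tokenizer:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tohoku-nlp/bert-base-japanese")
print(tokenizer.vocab_size)  # 32000 according to this card
```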
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"], "widget": [{"text": "\u6771\u5317\u5927\u5b66\u3067[MASK]\u306e\u7814\u7a76\u3092\u3057\u3066\u3044\u307e\u3059\u3002"}]}
|
tohoku-nlp/bert-base-japanese
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT base Japanese (IPA dictionary)
This is a BERT model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.
The codes for the pretraining are available at cl-tohoku/bert-japanese.
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32000.
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
## Licenses
The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.
## Acknowledgments
For training models, we used Cloud TPUs provided by TensorFlow Research Cloud program.
|
[
"# BERT base Japanese (IPA dictionary)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe model is trained on Japanese Wikipedia as of September 1, 2019.\nTo generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.\nThe text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.\nThe vocabulary size is 32000.",
"## Training\n\nThe model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nFor training models, we used Cloud TPUs provided by TensorFlow Research Cloud program."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT base Japanese (IPA dictionary)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.",
"## Training Data\n\nThe model is trained on Japanese Wikipedia as of September 1, 2019.\nTo generate the training corpus, WikiExtractor is used to extract plain texts from a dump file of Wikipedia articles.\nThe text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.\nThe vocabulary size is 32000.",
"## Training\n\nThe model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nFor training models, we used Cloud TPUs provided by TensorFlow Research Cloud program."
] |
fill-mask
|
transformers
|
# BERT large Japanese (character-level tokenization with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.
The vocabulary size is 6144.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"], "widget": [{"text": "\u6771\u5317\u5927\u5b66\u3067[MASK]\u306e\u7814\u7a76\u3092\u3057\u3066\u3044\u307e\u3059\u3002"}]}
|
tohoku-nlp/bert-large-japanese-char
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT large Japanese (character-level tokenization with whole word masking, jawiki-20200831)
This is a BERT model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at cl-tohoku/bert-japanese.
## Model architecture
The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.
The vocabulary size is 6144.
We used 'fugashi' and 'unidic-lite' packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.
## Acknowledgments
This model is trained with Cloud TPUs provided by TensorFlow Research Cloud program.
|
[
"# BERT large Japanese (character-level tokenization with whole word masking, jawiki-20200831)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by character-level tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.",
"## Training Data\n\nThe models are trained on the Japanese version of Wikipedia.\nThe training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.\n\nThe generated corpus files are 4.0GB in total, containing approximately 30M sentences.\nWe used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.\nThe vocabulary size is 6144.\n\nWe used 'fugashi' and 'unidic-lite' packages for the tokenization.",
"## Training\n\nThe models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\nFor training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.\n\nFor training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.\nThe training took about 5 days to finish.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nThis model is trained with Cloud TPUs provided by TensorFlow Research Cloud program."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT large Japanese (character-level tokenization with whole word masking, jawiki-20200831)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by character-level tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.",
"## Training Data\n\nThe models are trained on the Japanese version of Wikipedia.\nThe training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.\n\nThe generated corpus files are 4.0GB in total, containing approximately 30M sentences.\nWe used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.\nThe vocabulary size is 6144.\n\nWe used 'fugashi' and 'unidic-lite' packages for the tokenization.",
"## Training\n\nThe models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\nFor training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.\n\nFor training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.\nThe training took about 5 days to finish.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nThis model is trained with Cloud TPUs provided by TensorFlow Research Cloud program."
] |
fill-mask
|
transformers
|
# BERT large Japanese (unidic-lite with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
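As an added usage sketch (assuming `torch`, `transformers`, `fugashi`, and `unidic-lite` are installed), the snippet below prints the top-5 candidates for the masked widget sentence:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "tohoku-nlp/bert-large-japanese"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

inputs = tokenizer("東北大学で[MASK]の研究をしています。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Position of the [MASK] token and its top-5 predicted vocabulary entries.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
top5 = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))
```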
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"], "widget": [{"text": "\u6771\u5317\u5927\u5b66\u3067[MASK]\u306e\u7814\u7a76\u3092\u3057\u3066\u3044\u307e\u3059\u3002"}]}
|
tohoku-nlp/bert-large-japanese
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT large Japanese (unidic-lite with whole word masking, jawiki-20200831)
This is a BERT model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at cl-tohoku/bert-japanese.
## Model architecture
The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
We used 'fugashi' and 'unidic-lite' packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.
## Acknowledgments
This model is trained with Cloud TPUs provided by TensorFlow Research Cloud program.
|
[
"# BERT large Japanese (unidic-lite with whole word masking, jawiki-20200831)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by the WordPiece subword tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.",
"## Training Data\n\nThe models are trained on the Japanese version of Wikipedia.\nThe training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.\n\nThe generated corpus files are 4.0GB in total, containing approximately 30M sentences.\nWe used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.\nThe vocabulary size is 32768.\n\nWe used 'fugashi' and 'unidic-lite' packages for the tokenization.",
"## Training\n\nThe models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\nFor training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.\n\nFor training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.\nThe training took about 5 days to finish.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nThis model is trained with Cloud TPUs provided by TensorFlow Research Cloud program."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT large Japanese (unidic-lite with whole word masking, jawiki-20200831)\n\nThis is a BERT model pretrained on texts in the Japanese language.\n\nThis version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in unidic-lite package), followed by the WordPiece subword tokenization.\nAdditionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.\n\nThe codes for the pretraining are available at cl-tohoku/bert-japanese.",
"## Model architecture\n\nThe model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.",
"## Training Data\n\nThe models are trained on the Japanese version of Wikipedia.\nThe training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.\n\nThe generated corpus files are 4.0GB in total, containing approximately 30M sentences.\nWe used the MeCab morphological parser with mecab-ipadic-NEologd dictionary to split texts into sentences.",
"## Tokenization\n\nThe texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.\nThe vocabulary size is 32768.\n\nWe used 'fugashi' and 'unidic-lite' packages for the tokenization.",
"## Training\n\nThe models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.\nFor training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.\n\nFor training of each model, we used a v3-8 instance of Cloud TPUs provided by TensorFlow Research Cloud program.\nThe training took about 5 days to finish.",
"## Licenses\n\nThe pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 3.0.",
"## Acknowledgments\n\nThis model is trained with Cloud TPUs provided by TensorFlow Research Cloud program."
] |
text-generation
|
transformers
|
# A somewhat positive chatbot
|
{"tags": ["conversational"]}
|
clairesb/kindness_bot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# A somewhat positive chatbot
|
[
"# A somewhat positive chatbot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# A somewhat positive chatbot"
] |
text-generation
|
transformers
|
# Affirmation Bot
|
{"tags": ["conversational"]}
|
clairesb/kindness_bot_repo
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Affirmation Bot
|
[
"# Affirmation Bot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Affirmation Bot"
] |
text-classification
|
transformers
|
# Multi-lingual sentiment prediction trained from COVID19-related tweets
Repository: [https://github.com/clampert/multilingual-sentiment-analysis/](https://github.com/clampert/multilingual-sentiment-analysis/)
The model was trained on a large-scale dataset (18,437,530 examples) of
multi-lingual tweets collected between March 2020
and November 2021 using Twitter’s Streaming API with varying
COVID19-related keywords. Labels were auto-generated based on
the presence of positive and negative emoticons. For details
on the dataset, see our IEEE BigData 2021 publication.
Base model is [sentence-transformers/stsb-xlm-r-multilingual](https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual).
It was finetuned for sequence classification with `positive`
and `negative` labels for two epochs (48 hours on 8xP100 GPUs).
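A minimal usage sketch (an added example assuming the `transformers` library; the sample sentences are taken from this repository's widget):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="clampert/multilingual-sentiment-covid19")

print(classifier("I am very happy."))               # English example from the widget
print(classifier("Heute bin ich schlecht drauf."))  # German example from the widget
```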
## Citation
If you use our model in your work, please cite:
```
@inproceedings{lampert2021overcoming,
title={Overcoming Rare-Language Discrimination in Multi-Lingual Sentiment Analysis},
author={Jasmin Lampert and Christoph H. Lampert},
booktitle={IEEE International Conference on Big Data (BigData)},
year={2021},
note={Special Session: Machine Learning on Big Data},
}
```
Enjoy!
|
{"language": "multilingual", "license": "apache-2.0", "tags": ["sentiment-analysis", "multilingual"], "pipeline_tag": "text-classification", "widget": [{"text": "I am very happy.", "example_title": "English"}, {"text": "Heute bin ich schlecht drauf.", "example_title": "Deutsch"}, {"text": "Quel cauchemard!", "example_title": "Francais"}, {"text": "\u0e09\u0e31\u0e19\u0e23\u0e31\u0e01\u0e24\u0e14\u0e39\u0e43\u0e1a\u0e44\u0e21\u0e49\u0e1c\u0e25\u0e34", "example_title": "\u0e20\u0e32\u0e29\u0e32\u0e44\u0e17\u0e22"}]}
|
clampert/multilingual-sentiment-covid19
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"sentiment-analysis",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #xlm-roberta #text-classification #sentiment-analysis #multilingual #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Multi-lingual sentiment prediction trained from COVID19-related tweets
Repository: URL
The model was trained on a large-scale dataset (18,437,530 examples) of
multi-lingual tweets collected between March 2020
and November 2021 using Twitter’s Streaming API with varying
COVID19-related keywords. Labels were auto-generated based on
the presence of positive and negative emoticons. For details
on the dataset, see our IEEE BigData 2021 publication.
Base model is sentence-transformers/stsb-xlm-r-multilingual.
It was finetuned for sequence classification with 'positive'
and 'negative' labels for two epochs (48 hours on 8xP100 GPUs).
If you use our model in your work, please cite:
Enjoy!
|
[
"# Multi-lingual sentiment prediction trained from COVID19-related tweets\n\nRepository: URL\n\nModel trained on a large-scale (18437530 examples) dataset of \nmulti-lingual tweets that was collected between March 2020 \nand November 2021 using Twitter’s Streaming API with varying\nCOVID19-related keywords. Labels were auto-general based on \nthe presence of positive and negative emoticons. For details\non the dataset, see our IEEE BigData 2021 publication. \n\nBase model is sentence-transformers/stsb-xlm-r-multilingual.\nIt was finetuned for sequence classification with 'positive' \nand 'negative' labels for two epochs (48 hours on 8xP100 GPUs). \n\nIf you use our model your work, please cite:\n\n\n\nEnjoy!"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #sentiment-analysis #multilingual #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Multi-lingual sentiment prediction trained from COVID19-related tweets\n\nRepository: URL\n\nModel trained on a large-scale (18437530 examples) dataset of \nmulti-lingual tweets that was collected between March 2020 \nand November 2021 using Twitter’s Streaming API with varying\nCOVID19-related keywords. Labels were auto-general based on \nthe presence of positive and negative emoticons. For details\non the dataset, see our IEEE BigData 2021 publication. \n\nBase model is sentence-transformers/stsb-xlm-r-multilingual.\nIt was finetuned for sequence classification with 'positive' \nand 'negative' labels for two epochs (48 hours on 8xP100 GPUs). \n\nIf you use our model your work, please cite:\n\n\n\nEnjoy!"
] |
null | null |
# KGR10 FastText Polish word embeddings
Distributional language model (both textual and binary) for Polish (word embeddings) trained on the KGR10 corpus (over 4 billion words) using FastText with the following variants (all possible combinations):
- dimension: 100, 300
- method: skipgram, cbow
- tool: FastText, Magnitude
- source text: plain, plain.lower, plain.lemma, plain.lemma.lower
## Models
In the repository you can find 4 selected models that were examined in the paper (see Citation).
The model that performed best is the default model/config (see `default_config.json`).
## Usage
To use these embedding models easily, you need to install [embeddings](https://github.com/CLARIN-PL/embeddings):
```bash
pip install clarinpl-embeddings
```
### Utilising the default model (the easiest way)
Word embedding:
```python
from embeddings.embedding.auto_flair import AutoFlairWordEmbedding
from flair.data import Sentence
sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.")
embedding = AutoFlairWordEmbedding.from_hub("clarin-pl/fastText-kgr10")
embedding.embed([sentence])
for token in sentence:
print(token)
print(token.embedding)
```
Document embedding (averaged over words):
```python
from embeddings.embedding.auto_flair import AutoFlairDocumentEmbedding
from flair.data import Sentence
sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.")
embedding = AutoFlairDocumentEmbedding.from_hub("clarin-pl/fastText-kgr10")
embedding.embed([sentence])
print(sentence.embedding)
```
### Customisable way
Word embedding:
```python
from embeddings.embedding.static.embedding import AutoStaticWordEmbedding
from embeddings.embedding.static.fasttext import KGR10FastTextConfig
from flair.data import Sentence
config = KGR10FastTextConfig(method='cbow', dimension=100)
embedding = AutoStaticWordEmbedding.from_config(config)
sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.")
embedding.embed([sentence])
for token in sentence:
print(token)
print(token.embedding)
```
Document embedding (averaged over words):
```python
from embeddings.embedding.static.embedding import AutoStaticDocumentEmbedding
from embeddings.embedding.static.fasttext import KGR10FastTextConfig
from flair.data import Sentence
config = KGR10FastTextConfig(method='cbow', dimension=100)
embedding = AutoStaticDocumentEmbedding.from_config(config)
sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.")
embedding.embed([sentence])
print(sentence.embedding)
```
## Citation
The link below leads to the NextCloud directory with all variants of embeddings. If you use it, please cite the following article:
```
@article{kocon2018embeddings,
author = {Koco\'{n}, Jan and Gawor, Micha{\l}},
title = {Evaluating {KGR10} {P}olish word embeddings in the recognition of temporal
expressions using {BiLSTM-CRF}},
journal = {Schedae Informaticae},
volume = {27},
year = {2018},
url = {http://www.ejournals.eu/Schedae-Informaticae/2018/Volume-27/art/13931/},
doi = {10.4467/20838476SI.18.008.10413}
}
```
|
{"language": "pl", "tags": ["fastText"], "datasets": ["kgr10"]}
|
clarin-pl/fastText-kgr10
| null |
[
"fastText",
"pl",
"dataset:kgr10",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pl"
] |
TAGS
#fastText #pl #dataset-kgr10 #region-us
|
# KGR10 FastText Polish word embeddings
Distributional language model (both textual and binary) for Polish (word embeddings) trained on the KGR10 corpus (over 4 billion words) using FastText with the following variants (all possible combinations):
- dimension: 100, 300
- method: skipgram, cbow
- tool: FastText, Magnitude
- source text: plain, URL, URL, URL
## Models
In the repository you can find 4 selected models that were examined in the paper (see Citation).
The model that performed best is the default model/config (see 'default_config.json').
## Usage
To use these embedding models easily, it is required to install embeddings.
### Utilising the default model (the easiest way)
Word embedding:
Document embedding (averaged over words):
### Customisable way
Word embedding:
Document embedding (averaged over words):
The link below leads to the NextCloud directory with all variants of embeddings. If you use it, please cite the following article:
|
[
"# KGR10 FastText Polish word embeddings\n\nDistributional language model (both textual and binary) for Polish (word embeddings) trained on KGR10 corpus (over 4 billion of words) using Fasttext with the following variants (all possible combinations):\n- dimension: 100, 300\n- method: skipgram, cbow\n- tool: FastText, Magnitude\n- source text: plain, URL, URL, URL",
"## Models\n\nIn the repository you can find 4 selected models, that were examined in the paper (see Citation). \nA model that performed the best is the default model/config (see 'default_config.json').",
"## Usage\n\nTo use these embedding models easily, it is required to install embeddings.",
"### Utilising the default model (the easiest way)\n\nWord embedding:\n\n\n\nDocument embedding (averaged over words):",
"### Customisable way\n\nWord embedding:\n\n\n\nDocument embedding (averaged over words):\n\n\n\n\nThe link below leads to the NextCloud directory with all variants of embeddings. If you use it, please cite the following article:"
] |
[
"TAGS\n#fastText #pl #dataset-kgr10 #region-us \n",
"# KGR10 FastText Polish word embeddings\n\nDistributional language model (both textual and binary) for Polish (word embeddings) trained on KGR10 corpus (over 4 billion of words) using Fasttext with the following variants (all possible combinations):\n- dimension: 100, 300\n- method: skipgram, cbow\n- tool: FastText, Magnitude\n- source text: plain, URL, URL, URL",
"## Models\n\nIn the repository you can find 4 selected models, that were examined in the paper (see Citation). \nA model that performed the best is the default model/config (see 'default_config.json').",
"## Usage\n\nTo use these embedding models easily, it is required to install embeddings.",
"### Utilising the default model (the easiest way)\n\nWord embedding:\n\n\n\nDocument embedding (averaged over words):",
"### Customisable way\n\nWord embedding:\n\n\n\nDocument embedding (averaged over words):\n\n\n\n\nThe link below leads to the NextCloud directory with all variants of embeddings. If you use it, please cite the following article:"
] |
fill-mask
|
transformers
|
# Work in Progress Polish RoBERTa
The model has been trained for about 5% of the target training time. We will publish new increments as they are trained.
The model is pre-trained on the KGR10 corpora.
More about the model at [CLARIN-dspace](https://huggingface.co/clarin/roberta-polish-v1).
## Usage
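A minimal fill-mask sketch is given below. It assumes the checkpoint exposes a standard RoBERTa masked-LM head and the usual `<mask>` token (adjust the mask token if the tokenizer defines a different one); the example sentence is only illustrative.
```python
from transformers import pipeline

# Hedged usage sketch for the work-in-progress checkpoint.
unmasker = pipeline("fill-mask", model="clarin-pl/roberta-polish-kgr10")
for prediction in unmasker("Warszawa to stolica <mask>."):
    print(prediction["token_str"], prediction["score"])
```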
## Huggingface model hub
## Acknowledgments
[CLARIN-PL and CLARIN-BIZ project](https://clarin-pl.eu/)
|
{}
|
clarin-pl/roberta-polish-kgr10
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Work in Progress Polish RoBERTa
The model has been trained for about 5% of the target training time. We will publish new increments as they are trained.
The model is pre-trained on the KGR10 corpora.
More about the model at CLARIN-dspace
## Usage
## Huggingface model hub
## Acknowledgments
CLARIN-PL and CLARIN-BIZ project
|
[
"# Work in Progress Polish RoBERTa \n\nThe model has been trained for about 5% time of the target. We will publish new increments as they will be trained. \n\nThe model pre-trained on KGR10 corpora.\n\nMore about model at CLARIN-dspace",
"## Usage",
"## Huggingface model hub",
"## Acknowledgments\n\nCLARIN-PL and CLARIN-BIZ project"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Work in Progress Polish RoBERTa \n\nThe model has been trained for about 5% time of the target. We will publish new increments as they will be trained. \n\nThe model pre-trained on KGR10 corpora.\n\nMore about model at CLARIN-dspace",
"## Usage",
"## Huggingface model hub",
"## Acknowledgments\n\nCLARIN-PL and CLARIN-BIZ project"
] |
null | null |
# KGR10 word2vec Polish word embeddings
Distributional language models for Polish trained on the KGR10 corpora.
## Models
In the repository you can find the two models that were selected after evaluation (see table below).
The model that performed best is the default model/config (see `default_config.json`).
| method | dimension | hs | mwe | |
|---|---|---|---|---|
| cbow | 300 | false | true | <-- default |
| skipgram | 300 | true | true | |
## Usage
To use these embedding models easily, you need to install [embeddings](https://github.com/CLARIN-PL/embeddings):
```bash
pip install clarinpl-embeddings
```
### Utilising the default model (the easiest way)
Word embedding:
```python
from embeddings.embedding.auto_flair import AutoFlairWordEmbedding
from flair.data import Sentence
sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.")
embedding = AutoFlairWordEmbedding.from_hub("clarin-pl/word2vec-kgr10")
embedding.embed([sentence])
for token in sentence:
print(token)
print(token.embedding)
```
Document embedding (averaged over words):
```python
from embeddings.embedding.auto_flair import AutoFlairDocumentEmbedding
from flair.data import Sentence
sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.")
embedding = AutoFlairDocumentEmbedding.from_hub("clarin-pl/word2vec-kgr10")
embedding.embed([sentence])
print(sentence.embedding)
```
### Customisable way
Word embedding:
```python
from embeddings.embedding.static.embedding import AutoStaticWordEmbedding
from embeddings.embedding.static.word2vec import KGR10Word2VecConfig
from flair.data import Sentence
config = KGR10Word2VecConfig(method='skipgram', hs=False)
embedding = AutoStaticWordEmbedding.from_config(config)
sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.")
embedding.embed([sentence])
for token in sentence:
print(token)
print(token.embedding)
```
Document embedding (averaged over words):
```python
from embeddings.embedding.static.embedding import AutoStaticDocumentEmbedding
from embeddings.embedding.static.word2vec import KGR10Word2VecConfig
from flair.data import Sentence
config = KGR10Word2VecConfig(method='skipgram', hs=False)
embedding = AutoStaticDocumentEmbedding.from_config(config)
sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.")
embedding.embed([sentence])
print(sentence.embedding)
```
## Citation
```
Piasecki, Maciej; Janz, Arkadiusz; Kaszewski, Dominik; et al., 2017, Word Embeddings for Polish, CLARIN-PL digital repository, http://hdl.handle.net/11321/442.
```
or
```
@misc{11321/442,
title = {Word Embeddings for Polish},
author = {Piasecki, Maciej and Janz, Arkadiusz and Kaszewski, Dominik and Czachor, Gabriela},
url = {http://hdl.handle.net/11321/442},
note = {{CLARIN}-{PL} digital repository},
copyright = {{GNU} {GPL3}},
year = {2017}
}
```
|
{"language": "pl", "tags": ["word2vec"], "datasets": ["KGR10"]}
|
clarin-pl/word2vec-kgr10
| null |
[
"word2vec",
"pl",
"dataset:KGR10",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pl"
] |
TAGS
#word2vec #pl #dataset-KGR10 #has_space #region-us
|
KGR10 word2vec Polish word embeddings
=====================================
Distributional language models for Polish trained on the KGR10 corpora.
Models
------
In the repository you can find the two models that were selected after evaluation (see table below).
The model that performed best is the default model/config (see 'default\_config.json').
Usage
-----
To use these embedding models easily, it is required to install embeddings.
### Utilising the default model (the easiest way)
Word embedding:
Document embedding (averaged over words):
### Customisable way
Word embedding:
Document embedding (averaged over words):
or
|
[
"### Utilising the default model (the easiest way)\n\n\nWord embedding:\n\n\nDocument embedding (averaged over words):",
"### Customisable way\n\n\nWord embedding:\n\n\nDocument embedding (averaged over words):\n\n\nor"
] |
[
"TAGS\n#word2vec #pl #dataset-KGR10 #has_space #region-us \n",
"### Utilising the default model (the easiest way)\n\n\nWord embedding:\n\n\nDocument embedding (averaged over words):",
"### Customisable way\n\n\nWord embedding:\n\n\nDocument embedding (averaged over words):\n\n\nor"
] |
text-classification
|
transformers
|
# bcms-bertic-frenk-hate
Text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433), which comprises LGBT and migrant hate speech. Only the Croatian subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
```python
model_args = {
"num_train_epochs": 12,
"learning_rate": 1e-5,
"train_batch_size": 74}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed post festum.
| model | average accuracy | average macro F1 |
|----------------------------|------------------|------------------|
| bcms-bertic-frenk-hate | 0.8313 | 0.8219 |
| EMBEDDIA/crosloengual-bert | 0.8054 | 0.796 |
| xlm-roberta-base | 0.7175 | 0.7049 |
| fasttext | 0.771 | 0.754 |
From the recorded accuracies and macro F1 scores, p-values were also calculated:
Comparison with `crosloengual-bert`:
| test | accuracy p-value | macro F1 p-value |
|----------------|------------------|------------------|
| Wilcoxon | 0.00781 | 0.00781 |
| Mann-Whitney | 0.00108 | 0.00108 |
| Student t-test | 2.43e-10 | 1.27e-10 |
Comparison with `xlm-roberta-base`:
| test | accuracy p-value | macro F1 p-value |
|----------------|------------------|------------------|
| Wilcoxon | 0.00781 | 0.00781 |
| Mann-Whitney | 0.00107 | 0.00108 |
| Student t-test | 4.83e-11 | 5.61e-11 |
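For reference, significance tests of this kind can be computed with `scipy.stats` along the lines of the sketch below (not necessarily the authors' exact procedure); the per-run scores shown are hypothetical placeholders, not the recorded values.
```python
from scipy.stats import mannwhitneyu, ttest_ind, wilcoxon

# Hypothetical per-run accuracies for the two best-performing systems
# (six fine-tuning sessions each); substitute the recorded values.
bertic_runs = [0.833, 0.829, 0.835, 0.830, 0.832, 0.829]
csebert_runs = [0.806, 0.804, 0.807, 0.805, 0.803, 0.807]

print(wilcoxon(bertic_runs, csebert_runs))                               # paired, non-parametric
print(mannwhitneyu(bertic_runs, csebert_runs, alternative="two-sided"))  # Mann-Whitney U
print(ttest_ind(bertic_runs, csebert_runs))                              # Student t-test
```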
## Use examples
```python
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
"bert", "5roop/bcms-bertic-frenk-hate", use_cuda=True,
)
predictions, logit_output = model.predict(['Ne odbacujem da će RH primiti još migranata iz Afganistana, no neće biti novog vala',
"Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni' "])
predictions
### Output:
### array([0, 0])
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
and the dataset used for fine-tuning:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
|
{"language": "hr", "license": "cc-by-sa-4.0", "tags": ["text-classification", "hate-speech"], "widget": [{"text": "Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni'."}]}
|
classla/bcms-bertic-frenk-hate
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"hate-speech",
"hr",
"arxiv:1906.02045",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1906.02045"
] |
[
"hr"
] |
TAGS
#transformers #pytorch #safetensors #bert #text-classification #hate-speech #hr #arxiv-1906.02045 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
bcms-bertic-frenk-hate
======================
Text classification model based on 'classla/bcms-bertic' and fine-tuned on the FRENK dataset, which comprises LGBT and migrant hate speech. Only the Croatian subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).
Fine-tuning hyperparameters
---------------------------
Fine-tuning was performed with 'simpletransformers'. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
Performance
-----------
The same pipeline was run with two other transformer models and 'fasttext' for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and post festum analyzed.
model: bcms-bertic-frenk-hate, average accuracy: 0.8313, average macro F1: 0.8219
model: EMBEDDIA/crosloengual-bert, average accuracy: 0.8054, average macro F1: 0.796
model: xlm-roberta-base, average accuracy: 0.7175, average macro F1: 0.7049
model: fasttext, average accuracy: 0.771, average macro F1: 0.754
From recorded accuracies and macro F1 scores p-values were also calculated:
Comparison with 'crosloengual-bert':
test: Wilcoxon, accuracy p-value: 0.00781, macro F1 p-value: 0.00781
test: Mann-Whitney, accuracy p-value: 0.00108, macro F1 p-value: 0.00108
test: Student t-test, accuracy p-value: 2.43e-10, macro F1 p-value: 1.27e-10
Comparison with 'xlm-roberta-base':
test: Wilcoxon, accuracy p-value: 0.00781, macro F1 p-value: 0.00781
test: Mann-Whitney, accuracy p-value: 0.00107, macro F1 p-value: 0.00108
test: Student t-test, accuracy p-value: 4.83e-11, macro F1 p-value: 5.61e-11
Use examples
------------
If you use the model, please cite the following paper on which the original model is based:
and the dataset used for fine-tuning:
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #hate-speech #hr #arxiv-1906.02045 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# BERTić* [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian
* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).
This is the smaller generator of the main [discriminator model](https://huggingface.co/classla/bcms-bertic), useful if you want to continue pre-training the discriminator model.
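A minimal loading sketch, assuming the checkpoint is a standard ELECTRA generator with a masked-LM head (as the `electra`/`masked-lm` tags indicate); the example sentence is the widget example from this card.
```python
from transformers import AutoTokenizer, ElectraForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("classla/bcms-bertic-generator")
model = ElectraForMaskedLM.from_pretrained("classla/bcms-bertic-generator")

inputs = tokenizer("Zovem se Marko i radim u [MASK].", return_tensors="pt")
outputs = model(**inputs)  # masked-LM logits over the vocabulary for each token
```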
If you use the model, please cite the following paper:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
|
{"language": ["hr", "bs", "sr", "cnr", "hbs"], "license": "apache-2.0", "tags": ["masked-lm"], "widget": [{"text": "Zovem se Marko i radim u [MASK]."}]}
|
classla/bcms-bertic-generator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"masked-lm",
"hr",
"bs",
"sr",
"cnr",
"hbs",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hr",
"bs",
"sr",
"cnr",
"hbs"
] |
TAGS
#transformers #pytorch #electra #pretraining #masked-lm #hr #bs #sr #cnr #hbs #license-apache-2.0 #endpoints_compatible #region-us
|
# BERTić* [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian
* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).
This is the smaller generator of the main discriminator model, useful if you want to continue pre-training the discriminator model.
If you use the model, please cite the following paper:
|
[
"# BERTić* [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian\n\n* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).\n\nThis is the smaller generator of the main discriminator model, useful if you want to continue pre-training the discriminator model.\n\nIf you use the model, please cite the following paper:"
] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #masked-lm #hr #bs #sr #cnr #hbs #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BERTić* [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian\n\n* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).\n\nThis is the smaller generator of the main discriminator model, useful if you want to continue pre-training the discriminator model.\n\nIf you use the model, please cite the following paper:"
] |
token-classification
|
transformers
|
# The [BERTić](https://huggingface.co/classla/bcms-bertic)* [bert-ich] /bɜrtitʃ/ model fine-tuned for the task of named entity recognition in Bosnian, Croatian, Montenegrin and Serbian (BCMS)
* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).
This is a fine-tuned version of the [BERTić](https://huggingface.co/classla/bcms-bertic) model for the task of named entity recognition (PER, LOC, ORG, MISC). The fine-tuning was performed on the following datasets:
- the [hr500k](http://hdl.handle.net/11356/1183) dataset, 500 thousand tokens in size, standard Croatian
- the [SETimes.SR](http://hdl.handle.net/11356/1200) dataset, 87 thousand tokens in size, standard Serbian
- the [ReLDI-hr](http://hdl.handle.net/11356/1241) dataset, 89 thousand tokens in size, Internet (Twitter) Croatian
- the [ReLDI-sr](http://hdl.handle.net/11356/1240) dataset, 92 thousand tokens in size, Internet (Twitter) Serbian
The data was augmented with missing diacritics, and the standard data was additionally over-represented. The F1 obtained on dev data (train and test were merged into train) is 91.38. For a more detailed per-dataset evaluation of the BERTić model on the NER task, have a look at the [main model page](https://huggingface.co/classla/bcms-bertic).
If you use this fine-tuned model, please cite the following paper:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
When running the model in `simpletransformers`, the order of labels has to be set as well.
```python
from simpletransformers.ner import NERModel, NERArgs
model_args = NERArgs()
model_args.labels_list = ['B-LOC','B-MISC','B-ORG','B-PER','I-LOC','I-MISC','I-ORG','I-PER','O']
model = NERModel('electra', 'classla/bcms-bertic-ner', args=model_args)
```
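Putting it together, a hedged end-to-end prediction sketch (the input sentence is the widget example from this card; `predictions` holds one list of per-token `{word: label}` dictionaries per input):
```python
from simpletransformers.ner import NERModel, NERArgs

model_args = NERArgs()
model_args.labels_list = ['B-LOC','B-MISC','B-ORG','B-PER','I-LOC','I-MISC','I-ORG','I-PER','O']
model = NERModel('electra', 'classla/bcms-bertic-ner', args=model_args)

# Hedged prediction sketch; the printed output shape is illustrative.
predictions, raw_outputs = model.predict(["Zovem se Marko i živim u Zagrebu."])
print(predictions)  # e.g. [[{'Zovem': 'O'}, ..., {'Zagrebu': 'B-LOC'}]]
```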
|
{"language": ["hr", "bs", "sr", "cnr", "hbs"], "license": "apache-2.0", "widget": [{"text": "Zovem se Marko i \u017eivim u Zagrebu. Studirao sam u Beogradu na Filozofskom fakultetu. Obo\u017eavam album Moanin."}]}
|
classla/bcms-bertic-ner
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"token-classification",
"hr",
"bs",
"sr",
"cnr",
"hbs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hr",
"bs",
"sr",
"cnr",
"hbs"
] |
TAGS
#transformers #pytorch #safetensors #electra #token-classification #hr #bs #sr #cnr #hbs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# The BERTić* [bert-ich] /bɜrtitʃ/ model fine-tuned for the task of named entity recognition in Bosnian, Croatian, Montenegrin and Serbian (BCMS)
* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).
This is a fine-tuned version of the BERTić model for the task of named entity recognition (PER, LOC, ORG, MISC). The fine-tuning was performed on the following datasets:
- the hr500k dataset, 500 thousand tokens in size, standard Croatian
- the SETimes.SR dataset, 87 thousand tokens in size, standard Serbian
- the ReLDI-hr dataset, 89 thousand tokens in size, Internet (Twitter) Croatian
- the ReLDI-sr dataset, 92 thousand tokens in size, Internet (Twitter) Serbian
The data was augmented with missing diacritics and standard data was additionally over-represented. The F1 obtained on dev data (train and test was merged into train) is 91.38. For a more detailed per-dataset evaluation of the BERTić model on the NER task have a look at the main model page.
If you use this fine-tuned model, please cite the following paper:
When running the model in 'simpletransformers', the order of labels has to be set as well.
|
[
"# The BERTić* [bert-ich] /bɜrtitʃ/ model fine-tuned for the task of named entity recognition in Bosnian, Croatian, Montenegrin and Serbian (BCMS)\n\n* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).\n\nThis is a fine-tuned version of the BERTić model for the task of named entity recognition (PER, LOC, ORG, MISC). The fine-tuning was performed on the following datasets:\n\n- the hr500k dataset, 500 thousand tokens in size, standard Croatian\n- the SETimes.SR dataset, 87 thousand tokens in size, standard Serbian\n- the ReLDI-hr dataset, 89 thousand tokens in size, Internet (Twitter) Croatian\n- the ReLDI-sr dataset, 92 thousand tokens in size, Internet (Twitter) Serbian\n\nThe data was augmented with missing diacritics and standard data was additionally over-represented. The F1 obtained on dev data (train and test was merged into train) is 91.38. For a more detailed per-dataset evaluation of the BERTić model on the NER task have a look at the main model page.\n\nIf you use this fine-tuned model, please cite the following paper:\n\n\n\nWhen running the model in 'simpletransformers', the order of labels has to be set as well."
] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #token-classification #hr #bs #sr #cnr #hbs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# The BERTić* [bert-ich] /bɜrtitʃ/ model fine-tuned for the task of named entity recognition in Bosnian, Croatian, Montenegrin and Serbian (BCMS)\n\n* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).\n\nThis is a fine-tuned version of the BERTić model for the task of named entity recognition (PER, LOC, ORG, MISC). The fine-tuning was performed on the following datasets:\n\n- the hr500k dataset, 500 thousand tokens in size, standard Croatian\n- the SETimes.SR dataset, 87 thousand tokens in size, standard Serbian\n- the ReLDI-hr dataset, 89 thousand tokens in size, Internet (Twitter) Croatian\n- the ReLDI-sr dataset, 92 thousand tokens in size, Internet (Twitter) Serbian\n\nThe data was augmented with missing diacritics and standard data was additionally over-represented. The F1 obtained on dev data (train and test was merged into train) is 91.38. For a more detailed per-dataset evaluation of the BERTić model on the NER task have a look at the main model page.\n\nIf you use this fine-tuned model, please cite the following paper:\n\n\n\nWhen running the model in 'simpletransformers', the order of labels has to be set as well."
] |
null |
transformers
|
# BERTić* [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian
* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).
This Electra model was trained on more than 8 billion tokens of Bosnian, Croatian, Montenegrin and Serbian text.
***new*** We have published a version of this model fine-tuned on the named entity recognition task ([bcms-bertic-ner](https://huggingface.co/classla/bcms-bertic-ner)) and on the hate speech detection task ([bcms-bertic-frenk-hate](https://huggingface.co/classla/bcms-bertic-frenk-hate)).
If you use the model, please cite the following paper:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
## Benchmarking
Comparing this model to [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) and [CroSloEngual BERT](https://huggingface.co/EMBEDDIA/crosloengual-bert) on the tasks of (1) part-of-speech tagging, (2) named entity recognition, (3) geolocation prediction, and (4) commonsense causal reasoning, shows the BERTić model to be superior to the other two.
### Part-of-speech tagging
Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (* p<=0.05, ** p<=0.01, *** p<=0.001, ***** p<=0.0001).
Dataset | Language | Variety | CLASSLA | mBERT | cseBERT | BERTić
---|---|---|---|---|---|---
hr500k | Croatian | standard | 93.87 | 94.60 | 95.74 | **95.81*****
reldi-hr | Croatian | internet non-standard | - | 88.87 | 91.63 | **92.28*****
SETimes.SR | Serbian | standard | 95.00 | 95.50 | **96.41** | 96.31
reldi-sr | Serbian | internet non-standard | - | 91.26 | 93.54 | **93.90*****
### Named entity recognition
Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (* p<=0.05, ** p<=0.01, *** p<=0.001, ***** p<=0.0001).
Dataset | Language | Variety | CLASSLA | mBERT | cseBERT | BERTić
---|---|---|---|---|---|---
hr500k | Croatian | standard | 80.13 | 85.67 | 88.98 | **89.21******
reldi-hr | Croatian | internet non-standard | - | 76.06 | 81.38 | **83.05******
SETimes.SR | Serbian | standard | 84.64 | **92.41** | 92.28 | 92.02
reldi-sr | Serbian | internet non-standard | - | 81.29 | 82.76 | **87.92******
### Geolocation prediction
The dataset comes from the VarDial 2020 evaluation campaign's shared task on [Social Media variety Geolocation prediction](https://sites.google.com/view/vardial2020/evaluation-campaign). The task is to predict the latitude and longitude of a tweet given its text.
Evaluation metrics are median and mean of distance between gold and predicted geolocations (lower is better). No statistical significance is computed due to large test set (39,723 instances). Centroid baseline predicts each text to be created in the centroid of the training dataset.
System | Median | Mean
---|---|---
centroid | 107.10 | 145.72
mBERT | 42.25 | 82.05
cseBERT | 40.76 | 81.88
BERTić | **37.96** | **79.30**
### Choice Of Plausible Alternatives
The dataset is a translation of the [COPA dataset](https://people.ict.usc.edu/~gordon/copa.html) into Croatian ([link to the dataset](http://hdl.handle.net/11356/1404)).
Evaluation metric is accuracy. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (* p<=0.05, ** p<=0.01, *** p<=0.001, ***** p<=0.0001).
System | Accuracy
---|---
random | 50.00
mBERT | 54.12
cseBERT | 61.80
BERTić | **65.76****
|
{"language": ["hr", "bs", "sr", "cnr", "hbs"], "license": "apache-2.0"}
|
classla/bcms-bertic
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"hr",
"bs",
"sr",
"cnr",
"hbs",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hr",
"bs",
"sr",
"cnr",
"hbs"
] |
TAGS
#transformers #pytorch #electra #pretraining #hr #bs #sr #cnr #hbs #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
BERTić\* [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian
===========================================================================================================
\* The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).
This Electra model was trained on more than 8 billion tokens of Bosnian, Croatian, Montenegrin and Serbian text.
\*new\* We have published a version of this model fine-tuned on the named entity recognition task (bcms-bertic-ner) and on the hate speech detection task (bcms-bertic-frenk-hate).
If you use the model, please cite the following paper:
Benchmarking
------------
Comparing this model to multilingual BERT and CroSloEngual BERT on the tasks of (1) part-of-speech tagging, (2) named entity recognition, (3) geolocation prediction, and (4) commonsense causal reasoning, shows the BERTić model to be superior to the other two.
### Part-of-speech tagging
Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\* p<=0.05, \*\* p<=0.01, \*\*\* p<=0.001, \*\*\*\*\* p<=0.0001).
### Named entity recognition
Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\* p<=0.05, \*\* p<=0.01, \*\*\* p<=0.001, \*\*\*\*\* p<=0.0001).
### Geolocation prediction
The dataset comes from the VarDial 2020 evaluation campaign's shared task on Social Media variety Geolocation prediction. The task is to predict the latitude and longitude of a tweet given its text.
Evaluation metrics are median and mean of distance between gold and predicted geolocations (lower is better). No statistical significance is computed due to large test set (39,723 instances). Centroid baseline predicts each text to be created in the centroid of the training dataset.
System: centroid, Median: 107.10, Mean: 145.72
System: mBERT, Median: 42.25, Mean: 82.05
System: cseBERT, Median: 40.76, Mean: 81.88
System: BERTić, Median: 37.96, Mean: 79.30
### Choice Of Plausible Alternatives
The dataset is a translation of the COPA dataset into Croatian (link to the dataset).
Evaluation metric is accuracy. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\* p<=0.05, \*\* p<=0.01, \*\*\* p<=0.001, \*\*\*\*\* p<=0.0001).
|
[
"### Part-of-speech tagging\n\n\nEvaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\\* p<=0.05, \\*\\* p<=0.01, \\*\\*\\* p<=0.001, \\*\\*\\*\\*\\* p<=0.0001).",
"### Named entity recognition\n\n\nEvaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\\* p<=0.05, \\*\\* p<=0.01, \\*\\*\\* p<=0.001, \\*\\*\\*\\*\\* p<=0.0001).",
"### Geolocation prediction\n\n\nThe dataset comes from the VarDial 2020 evaluation campaign's shared task on Social Media variety Geolocation prediction. The task is to predict the latitude and longitude of a tweet given its text.\n\n\nEvaluation metrics are median and mean of distance between gold and predicted geolocations (lower is better). No statistical significance is computed due to large test set (39,723 instances). Centroid baseline predicts each text to be created in the centroid of the training dataset.\n\n\nSystem: centroid, Median: 107.10, Mean: 145.72\nSystem: mBERT, Median: 42.25, Mean: 82.05\nSystem: cseBERT, Median: 40.76, Mean: 81.88\nSystem: BERTić, Median: 37.96, Mean: 79.30",
"### Choice Of Plausible Alternatives\n\n\nThe dataset is a translation of the COPA dataset into Croatian (link to the dataset).\n\n\nEvaluation metric is accuracy. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\\* p<=0.05, \\*\\* p<=0.01, \\*\\*\\* p<=0.001, \\*\\*\\*\\*\\* p<=0.0001)."
] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #hr #bs #sr #cnr #hbs #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### Part-of-speech tagging\n\n\nEvaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\\* p<=0.05, \\*\\* p<=0.01, \\*\\*\\* p<=0.001, \\*\\*\\*\\*\\* p<=0.0001).",
"### Named entity recognition\n\n\nEvaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\\* p<=0.05, \\*\\* p<=0.01, \\*\\*\\* p<=0.001, \\*\\*\\*\\*\\* p<=0.0001).",
"### Geolocation prediction\n\n\nThe dataset comes from the VarDial 2020 evaluation campaign's shared task on Social Media variety Geolocation prediction. The task is to predict the latitude and longitude of a tweet given its text.\n\n\nEvaluation metrics are median and mean of distance between gold and predicted geolocations (lower is better). No statistical significance is computed due to large test set (39,723 instances). Centroid baseline predicts each text to be created in the centroid of the training dataset.\n\n\nSystem: centroid, Median: 107.10, Mean: 145.72\nSystem: mBERT, Median: 42.25, Mean: 82.05\nSystem: cseBERT, Median: 40.76, Mean: 81.88\nSystem: BERTić, Median: 37.96, Mean: 79.30",
"### Choice Of Plausible Alternatives\n\n\nThe dataset is a translation of the COPA dataset into Croatian (link to the dataset).\n\n\nEvaluation metric is accuracy. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (\\* p<=0.05, \\*\\* p<=0.01, \\*\\*\\* p<=0.001, \\*\\*\\*\\*\\* p<=0.0001)."
] |
text-classification
|
transformers
|
# roberta-base-frenk-hate
Text classification model based on [`roberta-base`](https://huggingface.co/roberta-base) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433), which comprises LGBT and migrant hate speech. Only the English subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
```python
model_args = {
"num_train_epochs": 6,
"learning_rate": 3e-6,
"train_batch_size": 69}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed post festum.
| model | average accuracy | average macro F1|
|---|---|---|
|roberta-base-frenk-hate|0.7915|0.7785|
|xlm-roberta-large |0.7904|0.77876|
|xlm-roberta-base |0.7577|0.7402|
|fasttext|0.725 |0.707 |
From the recorded accuracies and macro F1 scores, p-values were also calculated:
Comparison with `xlm-roberta-base`:
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.00781|0.00781|
|Mann-Whitney U-test|0.00108|0.00108|
|Student t-test | 1.35e-08 | 1.05e-07|
Comparison with `xlm-roberta-large` yielded inconclusive results: `roberta-base` has an average accuracy of 0.7915, while `xlm-roberta-large` has an average accuracy of 0.7904. If macro F1 scores are compared, `roberta-base` actually has a lower average than `xlm-roberta-large`: 0.77852 vs. 0.77876, respectively. The same statistical tests were performed with the premise that `roberta-base` has the greater metrics, and the results are given below.
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.188|0.406|
|Mann-Whitney|0.375|0.649|
|Student t-test | 0.681| 0.934|
With the reversed premise (i.e., that `xlm-roberta-large` has the greater statistics), the Wilcoxon p-value for macro F1 scores reaches 0.656, the Mann-Whitney p-value is 0.399, and the Student p-value of course stays the same. It was therefore concluded that the performance of the two models is not statistically significantly different from one another.
## Use examples
```python
from simpletransformers.classification import ClassificationModel
model_args = {
"num_train_epochs": 6,
"learning_rate": 3e-6,
"train_batch_size": 69}
model = ClassificationModel(
"roberta", "5roop/roberta-base-frenk-hate", use_cuda=True,
args=model_args
)
predictions, logit_output = model.predict(["Build the wall",
"Build the wall of trust"]
)
predictions
### Output:
### array([1, 0])
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
and the dataset used for fine-tuning:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
|
{"language": "en", "license": "cc-by-sa-4.0", "tags": ["text-classification", "hate-speech"], "widget": [{"text": "Gay is okay."}]}
|
classla/roberta-base-frenk-hate
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"hate-speech",
"en",
"arxiv:1907.11692",
"arxiv:1906.02045",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"1906.02045"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #text-classification #hate-speech #en #arxiv-1907.11692 #arxiv-1906.02045 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
roberta-base-frenk-hate
=======================
Text classification model based on 'roberta-base' and fine-tuned on the FRENK dataset, which comprises LGBT and migrant hate speech. Only the English subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).
Fine-tuning hyperparameters
---------------------------
Fine-tuning was performed with 'simpletransformers'. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
Performance
-----------
The same pipeline was run with two other transformer models and 'fasttext' for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and post festum analyzed.
model: roberta-base-frenk-hate, average accuracy: 0.7915, average macro F1: 0.7785
model: xlm-roberta-large, average accuracy: 0.7904, average macro F1: 0.77876
model: xlm-roberta-base, average accuracy: 0.7577, average macro F1: 0.7402
model: fasttext, average accuracy: 0.725, average macro F1: 0.707
From recorded accuracies and macro F1 scores p-values were also calculated:
Comparison with 'xlm-roberta-base':
test: Wilcoxon, accuracy p-value: 0.00781, macro F1 p-value: 0.00781
test: Mann-Whitney U-test, accuracy p-value: 0.00108, macro F1 p-value: 0.00108
test: Student t-test, accuracy p-value: 1.35e-08, macro F1 p-value: 1.05e-07
Comparison with 'xlm-roberta-large' yielded inconclusive results. 'roberta-base' has average accuracy 0.7915, while 'xlm-roberta-large' has average accuracy of 0.7904. If macro F1 scores were to be compared, 'roberta-base' actually has lower average than 'xlm-roberta-large': 0.77852 vs 0.77876 respectively. The same statistical tests were performed with the premise that 'roberta-base' has greater metrics, and the results are given below.
test: Wilcoxon, accuracy p-value: 0.188, macro F1 p-value: 0.406
test: Mann-Whitney, accuracy p-value: 0.375, macro F1 p-value: 0.649
test: Student t-test, accuracy p-value: 0.681, macro F1 p-value: 0.934
With the reversed premise (i.e., that 'xlm-roberta-large' has the greater statistics), the Wilcoxon p-value for macro F1 scores reaches 0.656, the Mann-Whitney p-value is 0.399, and the Student p-value of course stays the same. It was therefore concluded that the performance of the two models is not statistically significantly different from one another.
Use examples
------------
If you use the model, please cite the following paper on which the original model is based:
and the dataset used for fine-tuning:
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #hate-speech #en #arxiv-1907.11692 #arxiv-1906.02045 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-classification
|
transformers
|
Text classification model based on `EMBEDDIA/sloberta` and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433), which comprises LGBT and migrant hate speech. Only the Slovenian subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
```python
model_args = {
"num_train_epochs": 14,
"learning_rate": 1e-5,
"train_batch_size": 21,
}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed post festum.
| model | average accuracy | average macro F1|
|---|---|---|
|sloberta-frenk-hate|0.7785|0.7764|
|EMBEDDIA/crosloengual-bert |0.7616|0.7585|
|xlm-roberta-base |0.686|0.6827|
|fasttext|0.709 |0.701 |
From the recorded accuracies and macro F1 scores, p-values were also calculated:
Comparison with `crosloengual-bert`:
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.00781|0.00781|
|Mann-Whitney U test|0.00163|0.00108|
|Student t-test |0.000101|3.95e-05|
Comparison with `xlm-roberta-base`:
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.00781|0.00781|
|Mann-Whitney U test|0.00108|0.00108|
|Student t-test |9.46e-11|6.94e-11|
## Use examples
```python
from simpletransformers.classification import ClassificationModel
model_args = {
"num_train_epochs": 6,
"learning_rate": 3e-6,
"train_batch_size": 69}
model = ClassificationModel(
"camembert", "5roop/sloberta-frenk-hate", use_cuda=True,
args=model_args
)
predictions, logit_output = model.predict(["Silva, ti si grda in neprijazna", "Naša hiša ima dimnik"])
predictions
### Output:
### array([1, 0])
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
and the dataset used for fine-tuning:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
|
{"language": "sl", "license": "cc-by-sa-4.0", "tags": ["text-classification", "hate-speech"], "widget": [{"text": "Silva, ti si grda in neprijazna"}]}
|
classla/sloberta-frenk-hate
| null |
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"text-classification",
"hate-speech",
"sl",
"arxiv:1907.11692",
"arxiv:1906.02045",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"1906.02045"
] |
[
"sl"
] |
TAGS
#transformers #pytorch #safetensors #camembert #text-classification #hate-speech #sl #arxiv-1907.11692 #arxiv-1906.02045 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
Text classification model based on 'EMBEDDIA/sloberta' and fine-tuned on the FRENK dataset, which comprises LGBT and migrant hate speech. Only the Slovenian subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).
Fine-tuning hyperparameters
---------------------------
Fine-tuning was performed with 'simpletransformers'. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
Performance
-----------
The same pipeline was run with two other transformer models and 'fasttext' for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and post festum analyzed.
model: sloberta-frenk-hate, average accuracy: 0.7785, average macro F1: 0.7764
model: EMBEDDIA/crosloengual-bert, average accuracy: 0.7616, average macro F1: 0.7585
model: xlm-roberta-base, average accuracy: 0.686, average macro F1: 0.6827
model: fasttext, average accuracy: 0.709, average macro F1: 0.701
From recorded accuracies and macro F1 scores p-values were also calculated:
Comparison with 'crosloengual-bert':
test: Wilcoxon, accuracy p-value: 0.00781, macro F1 p-value: 0.00781
test: Mann-Whitney U test, accuracy p-value: 0.00163, macro F1 p-value: 0.00108
test: Student t-test, accuracy p-value: 0.000101, macro F1 p-value: 3.95e-05
Comparison with 'xlm-roberta-base':
test: Wilcoxon, accuracy p-value: 0.00781, macro F1 p-value: 0.00781
test: Mann-Whitney U test, accuracy p-value: 0.00108, macro F1 p-value: 0.00108
test: Student t-test, accuracy p-value: 9.46e-11, macro F1 p-value: 6.94e-11
Use examples
------------
If you use the model, please cite the following paper on which the original model is based:
and the dataset used for fine-tuning:
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #camembert #text-classification #hate-speech #sl #arxiv-1907.11692 #arxiv-1906.02045 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-xls-r-parlaspeech-hr
This model for Croatian ASR is based on the [facebook/wav2vec2-xls-r-300m model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) and was fine-tuned with 300 hours of recordings and transcripts from the ASR Croatian parliament dataset [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494).
If you use this model, please cite the following paper:
Nikola Ljubešić, Danijel Koržinek, Peter Rupnik, Ivo-Pavao Jazbec. ParlaSpeech-HR -- a freely available ASR dataset for Croatian bootstrapped from the ParlaMint corpus. http://www.lrec-conf.org/proceedings/lrec2022/workshops/ParlaCLARINIII/pdf/2022.parlaclariniii-1.16.pdf
## Metrics
Evaluation is performed on the dev and test portions of the [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494) dataset.
|split|CER|WER|
|---|---|---|
|dev|0.0335|0.1046|
|test|0.0234|0.0761|
There are multiple models available, and in terms of CER and WER, the best-performing model is [wav2vec2-large-slavic-parlaspeech-hr-lm](https://huggingface.co/classla/wav2vec2-large-slavic-parlaspeech-hr-lm).
## Usage in `transformers`
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
import os
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained(
"classla/wav2vec2-xls-r-parlaspeech-hr")
model = Wav2Vec2ForCTC.from_pretrained("classla/wav2vec2-xls-r-parlaspeech-hr")
# download the example wav files:
os.system("wget https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav")
# read the wav file
speech, sample_rate = sf.read("00020570a.flac.wav")
input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values.to(device)
# remove the raw wav file
os.system("rm 00020570a.flac.wav")
# retrieve logits (no gradients needed for inference)
with torch.no_grad():
    logits = model.to(device)(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()
# transcription: 'veliki broj poslovnih subjekata posluje sa minusom velik dio'
```
## Training hyperparameters
In fine-tuning, the following arguments were used:
| arg | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 16 |
| `gradient_accumulation_steps` | 4 |
| `num_train_epochs` | 8 |
| `learning_rate` | 3e-4 |
| `warmup_steps` | 500 |
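Expressed as `transformers` `TrainingArguments`, these values correspond roughly to the following sketch; arguments not listed in the table (such as `output_dir`) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-parlaspeech-hr",  # assumed output path
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    num_train_epochs=8,
    learning_rate=3e-4,
    warmup_steps=500,
)
```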
|
{"language": "hr", "tags": ["audio", "automatic-speech-recognition", "parlaspeech"], "datasets": ["parlaspeech-hr"], "widget": [{"example_title": "example 1", "src": "https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/1800.m4a"}, {"example_title": "example 2", "src": "https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020578b.flac.wav"}]}
|
classla/wav2vec2-xls-r-parlaspeech-hr
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"parlaspeech",
"hr",
"dataset:parlaspeech-hr",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hr"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #parlaspeech #hr #dataset-parlaspeech-hr #endpoints_compatible #region-us
|
wav2vec2-xls-r-parlaspeech-hr
=============================
This model for Croatian ASR is based on the facebook/wav2vec2-xls-r-300m model and was fine-tuned with 300 hours of recordings and transcripts from the ASR Croatian parliament dataset ParlaSpeech-HR v1.0.
If you use this model, please cite the following paper:
Nikola Ljubešić, Danijel Koržinek, Peter Rupnik, Ivo-Pavao Jazbec. ParlaSpeech-HR -- a freely available ASR dataset for Croatian bootstrapped from the ParlaMint corpus. URL
Metrics
-------
Evaluation is performed on the dev and test portions of the ParlaSpeech-HR v1.0 dataset.
split: dev, CER: 0.0335, WER: 0.1046
split: test, CER: 0.0234, WER: 0.0761
There are multiple models available, and in terms of CER and WER, the best-performing model is wav2vec2-large-slavic-parlaspeech-hr-lm.
Usage in 'transformers'
-----------------------
Training hyperparameters
------------------------
In fine-tuning, the following arguments were used:
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #parlaspeech #hr #dataset-parlaspeech-hr #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# hiccupBot medium GPT
|
{"tags": ["conversational"]}
|
clayfox/DialoGPT-medium-Hiccup
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# hiccupBot medium GPT
|
[
"# hiccupBot medium GPT"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# hiccupBot medium GPT"
] |
text-generation
|
transformers
|
# HiccupBot DialoGPT Model
|
{"tags": ["conversational"]}
|
clayfox/DialoGPT-small-Hiccup
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# HiccupBot DialoGPT Model
|
[
"# HiccupBot DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# HiccupBot DialoGPT Model"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2101779
## Validation Metrics
- Loss: 0.282466858625412
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/clem/autonlp-test3-2101779
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("clem/autonlp-test3-2101779", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("clem/autonlp-test3-2101779", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["clem/autonlp-data-test3"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
clem/autonlp-test3-2101779
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:clem/autonlp-data-test3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-clem/autonlp-data-test3 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2101779
## Validation Metrics
- Loss: 0.282466858625412
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 2101779",
"## Validation Metrics\n\n- Loss: 0.282466858625412\n- Accuracy: 1.0\n- Precision: 1.0\n- Recall: 1.0\n- AUC: 1.0\n- F1: 1.0",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-clem/autonlp-data-test3 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 2101779",
"## Validation Metrics\n\n- Loss: 0.282466858625412\n- Accuracy: 1.0\n- Precision: 1.0\n- Recall: 1.0\n- AUC: 1.0\n- F1: 1.0",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2101782
## Validation Metrics
- Loss: 0.015991805121302605
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/clem/autonlp-test3-2101782
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("clem/autonlp-test3-2101782", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("clem/autonlp-test3-2101782", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["clem/autonlp-data-test3"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
clem/autonlp-test3-2101782
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:clem/autonlp-data-test3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-clem/autonlp-data-test3 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2101782
## Validation Metrics
- Loss: 0.015991805121302605
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 2101782",
"## Validation Metrics\n\n- Loss: 0.015991805121302605\n- Accuracy: 1.0\n- Precision: 1.0\n- Recall: 1.0\n- AUC: 1.0\n- F1: 1.0",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-clem/autonlp-data-test3 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 2101782",
"## Validation Metrics\n\n- Loss: 0.015991805121302605\n- Accuracy: 1.0\n- Precision: 1.0\n- Recall: 1.0\n- AUC: 1.0\n- F1: 1.0",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification Urgent/Not Urgent
## Validation Metrics
- Loss: 0.08956164121627808
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/clem/autonlp-test3-2101787
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("clem/autonlp-test3-2101787", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("clem/autonlp-test3-2101787", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["clem/autonlp-data-test3"], "widget": [{"text": "this can wait"}]}
|
clem/autonlp-test3-2101787
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:clem/autonlp-data-test3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-clem/autonlp-data-test3 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification Urgent/Not Urgent
## Validation Metrics
- Loss: 0.08956164121627808
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification Urgent/Not Urgent",
"## Validation Metrics\n\n- Loss: 0.08956164121627808\n- Accuracy: 1.0\n- Precision: 1.0\n- Recall: 1.0\n- AUC: 1.0\n- F1: 1.0",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-clem/autonlp-data-test3 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification Urgent/Not Urgent",
"## Validation Metrics\n\n- Loss: 0.08956164121627808\n- Accuracy: 1.0\n- Precision: 1.0\n- Recall: 1.0\n- AUC: 1.0\n- F1: 1.0",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# Model Card for distilroberta-base-climate-commitment
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into paragraphs being about climate commitments and actions and paragraphs not being about climate commitments and actions.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as starting point, the distilroberta-base-climate-commitment model is fine-tuned on our [climatebert/climate_commitments_actions](https://huggingface.co/climatebert/climate_commitments_actions) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_commitments_actions"
model_name = "climatebert/distilroberta-base-climate-commitment"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["climatebert/climate_commitments_actions"], "metrics": ["accuracy"]}
|
climatebert/distilroberta-base-climate-commitment
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:climatebert/climate_commitments_actions",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #text-classification #en #dataset-climatebert/climate_commitments_actions #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for distilroberta-base-climate-commitment
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into paragraphs being about climate commitments and actions and paragraphs not being about climate commitments and actions.
Using the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-commitment model is fine-tuned on our climatebert/climate_commitments_actions dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
|
[
"# Model Card for distilroberta-base-climate-commitment",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into paragraphs being about climate commitments and actions and paragraphs not being about climate commitments and actions.\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-commitment model is fine-tuned on our climatebert/climate_commitments_actions dataset.\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #en #dataset-climatebert/climate_commitments_actions #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for distilroberta-base-climate-commitment",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into paragraphs being about climate commitments and actions and paragraphs not being about climate commitments and actions.\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-commitment model is fine-tuned on our climatebert/climate_commitments_actions dataset.\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
fill-mask
|
transformers
|
# Model Card for distilroberta-base-climate-d-s
## Model Description
This is the ClimateBERT language model based on the DIV-SELECT and SIM-SELECT sample selection strategy.
*Note: We generally recommend choosing the [distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model over this language model (unless you have good reasons not to).*
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
## Climate performance model card
| distilroberta-base-climate-d-s | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building block tool following Jin et al. (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |
## Citation Information
```bibtex
@inproceedings{wkbl2022climatebert,
title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
year={2022},
doi={https://doi.org/10.48550/arXiv.2212.13631},
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["climate"]}
|
climatebert/distilroberta-base-climate-d-s
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"climate",
"en",
"arxiv:2110.12010",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.12010"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #climate #en #arxiv-2110.12010 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Model Card for distilroberta-base-climate-d-s
=============================================
Model Description
-----------------
This is the ClimateBERT language model based on the DIV-SELECT and SIM-SELECT sample selection strategy.
*Note: We generally recommend choosing the distilroberta-base-climate-f language model over this language model (unless you have good reasons not to).*
Using the DistilRoBERTa model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our language model research paper.
Climate performance model card
------------------------------
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #climate #en #arxiv-2110.12010 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# Model Card for distilroberta-base-climate-d
## Model Description
This is the ClimateBERT language model based on the DIV-SELECT sample selection strategy.
*Note: We generally recommend choosing the [distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model over this language model (unless you have good reasons not to).*
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
## Climate performance model card
| distilroberta-base-climate-d | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building block tool following Jin et al. (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |
## Citation Information
```bibtex
@inproceedings{wkbl2022climatebert,
title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
year={2022},
doi={https://doi.org/10.48550/arXiv.2212.13631},
}
```
|
{"language": "en", "license": "apache-2.0"}
|
climatebert/distilroberta-base-climate-d
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"arxiv:2110.12010",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.12010"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #en #arxiv-2110.12010 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Model Card for distilroberta-base-climate-d
===========================================
Model Description
-----------------
This is the ClimateBERT language model based on the DIV-SELECT sample selection strategy.
*Note: We generally recommend choosing the distilroberta-base-climate-f language model over this language model (unless you have good reasons not to).*
Using the DistilRoBERTa model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our language model research paper.
Climate performance model card
------------------------------
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #en #arxiv-2110.12010 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
# Model Card for distilroberta-base-climate-detector
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for detecting climate-related paragraphs.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as starting point, the distilroberta-base-climate-detector model is fine-tuned on our [climatebert/climate_detection](https://huggingface.co/climatebert/climate_detection) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_detection"
model_name = "climatebert/distilroberta-base-climate-detector"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["climatebert/climate_detection"], "metrics": ["accuracy"]}
|
climatebert/distilroberta-base-climate-detector
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:climatebert/climate_detection",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #text-classification #en #dataset-climatebert/climate_detection #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Model Card for distilroberta-base-climate-detector
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for detecting climate-related paragraphs.
Using the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-detector model is fine-tuned on our climatebert/climate_detection dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
|
[
"# Model Card for distilroberta-base-climate-detector",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for detecting climate-related paragraphs.\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-detector model is fine-tuned on our climatebert/climate_detection dataset.\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #en #dataset-climatebert/climate_detection #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Model Card for distilroberta-base-climate-detector",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for detecting climate-related paragraphs.\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-detector model is fine-tuned on our climatebert/climate_detection dataset.\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
fill-mask
|
transformers
|
# Model Card for distilroberta-base-climate-f
## Model Description
This is the ClimateBERT language model based on the FULL-SELECT sample selection strategy.
*Note: We generally recommend choosing this language model over those based on the other sample selection strategies (unless you have good reasons not to). This is also the only language model we will update from time to time.*
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
*Update September 2, 2022: Now additionally pre-trained on an even larger text corpus, comprising >2M paragraphs. If you are looking for the language model before the update (i.e. for reproducibility), just use an older commit like [6be4fbd](https://huggingface.co/climatebert/distilroberta-base-climate-f/tree/6be4fbd3fedfd78ccb3c730c1f166947fbc940ba).*
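The language model can be queried directly for masked-token prediction, for example with the `fill-mask` pipeline; the example sentence below is illustrative.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="climatebert/distilroberta-base-climate-f")
print(fill_mask("Our company aims to reduce its <mask> emissions by 2030."))
```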
## Climate performance model card
| distilroberta-base-climate-f | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building block tool following Jin et al. (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |
## Citation Information
```bibtex
@inproceedings{wkbl2022climatebert,
title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
year={2022},
doi={https://doi.org/10.48550/arXiv.2212.13631},
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["climate"]}
|
climatebert/distilroberta-base-climate-f
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"climate",
"en",
"arxiv:2110.12010",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.12010"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #climate #en #arxiv-2110.12010 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Model Card for distilroberta-base-climate-f
===========================================
Model Description
-----------------
This is the ClimateBERT language model based on the FULL-SELECT sample selection strategy.
*Note: We generally recommend choosing this language model over those based on the other sample selection strategies (unless you have good reasons not to). This is also the only language model we will update from time to time.*
Using the DistilRoBERTa model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our language model research paper.
*Update September 2, 2022: Now additionally pre-trained on an even larger text corpus, comprising >2M paragraphs. If you are looking for the language model before the update (i.e. for reproducibility), just use an older commit like 6be4fbd.*
Climate performance model card
------------------------------
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #climate #en #arxiv-2110.12010 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# Model Card for distilroberta-base-climate-s
## Model Description
This is the ClimateBERT language model based on the SIM-SELECT sample selection strategy.
*Note: We generally recommend choosing the [distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model over this language model (unless you have good reasons not to).*
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
## Climate performance model card
| distilroberta-base-climate-s | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building block tool following Jin et al. (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |
## Citation Information
```bibtex
@inproceedings{wkbl2022climatebert,
title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
year={2022},
doi={https://doi.org/10.48550/arXiv.2212.13631},
}
```
|
{"language": "en", "license": "apache-2.0"}
|
climatebert/distilroberta-base-climate-s
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"en",
"arxiv:2110.12010",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.12010"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #en #arxiv-2110.12010 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Model Card for distilroberta-base-climate-s
===========================================
Model Description
-----------------
This is the ClimateBERT language model based on the SIM-SELECT sample selection strategy.
*Note: We generally recommend choosing the distilroberta-base-climate-f language model over this language model (unless you have good reasons not to).*
Using the DistilRoBERTa model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our language model research paper.
Climate performance model card
------------------------------
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #en #arxiv-2110.12010 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
# Model Card for distilroberta-base-climate-sentiment
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the climate-related sentiment classes opportunity, neutral, or risk.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as starting point, the distilroberta-base-climate-sentiment model is fine-tuned on our [climatebert/climate_sentiment](https://huggingface.co/climatebert/climate_sentiment) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_sentiment"
model_name = "climatebert/distilroberta-base-climate-sentiment"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["climatebert/climate_sentiment"], "metrics": ["accuracy"]}
|
climatebert/distilroberta-base-climate-sentiment
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:climatebert/climate_sentiment",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #text-classification #en #dataset-climatebert/climate_sentiment #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for distilroberta-base-climate-sentiment
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the climate-related sentiment classes opportunity, neutral, or risk.
Using the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-sentiment model is fine-tuned on our climatebert/climate_sentiment dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
|
[
"# Model Card for distilroberta-base-climate-sentiment",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the climate-related sentiment classes opportunity, neutral, or risk.\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-sentiment model is fine-tuned on our climatebert/climate_sentiment dataset.\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #en #dataset-climatebert/climate_sentiment #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for distilroberta-base-climate-sentiment",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the climate-related sentiment classes opportunity, neutral, or risk.\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-sentiment model is fine-tuned on our climatebert/climate_sentiment dataset.\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
text-classification
|
transformers
|
# Model Card for distilroberta-base-climate-specificity
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into specific and non-specific paragraphs.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as starting point, the distilroberta-base-climate-specificity model is fine-tuned on our [climatebert/climate_specificity](https://huggingface.co/climatebert/climate_specificity) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_specificity"
model_name = "climatebert/distilroberta-base-climate-specificity"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["climate"], "datasets": ["climatebert/climate_specificity"], "metrics": ["accuracy"]}
|
climatebert/distilroberta-base-climate-specificity
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"climate",
"en",
"dataset:climatebert/climate_specificity",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #text-classification #climate #en #dataset-climatebert/climate_specificity #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Model Card for distilroberta-base-climate-specificity
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into specific and non-specific paragraphs.
Using the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-specificity model is fine-tuned on our climatebert/climate_specificity dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
|
[
"# Model Card for distilroberta-base-climate-specificity",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into specific and non-specific paragraphs.\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-specificity model is fine-tuned on our climatebert/climate_specificity dataset.\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #climate #en #dataset-climatebert/climate_specificity #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Model Card for distilroberta-base-climate-specificity",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into specific and non-specific paragraphs.\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-specificity model is fine-tuned on our climatebert/climate_specificity dataset.\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
text-classification
|
transformers
|
# Model Card for distilroberta-base-climate-tcfd
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the four TCFD recommendation categories ([fsb-tcfd.org](https://www.fsb-tcfd.org)).
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as starting point, the distilroberta-base-climate-tcfd model is fine-tuned on our [climatebert/tcfd_recommendations](https://huggingface.co/climatebert/tcfd_recommendations) dataset using only the four recommendation categories (i.e., we remove the non-climate-related class from the dataset).
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/tcfd_recommendations"
model_name = "climatebert/distilroberta-base-climate-tcfd"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["climate"], "datasets": ["climatebert/tcfd_recommendations"], "metrics": ["accuracy"]}
|
climatebert/distilroberta-base-climate-tcfd
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"climate",
"en",
"dataset:climatebert/tcfd_recommendations",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #text-classification #climate #en #dataset-climatebert/tcfd_recommendations #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for distilroberta-base-climate-tcfd
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the four TCFD recommendation categories (URL).
Using the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-tcfd model is fine-tuned on our climatebert/tcfd_recommendations dataset using only the four recommendation categories (i.e., we remove the non-climate-related class from the dataset).
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
|
[
"# Model Card for distilroberta-base-climate-tcfd",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the four TCFD recommendation categories (URL).\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-tcfd model is fine-tuned on our climatebert/tcfd_recommendations dataset using only the four recommendation categories (i.e., we remove the non-climate-related class from the dataset).\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #climate #en #dataset-climatebert/tcfd_recommendations #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for distilroberta-base-climate-tcfd",
"## Model Description\n\nThis is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the four TCFD recommendation categories (URL).\n\nUsing the climatebert/distilroberta-base-climate-f language model as starting point, the distilroberta-base-climate-tcfd model is fine-tuned on our climatebert/tcfd_recommendations dataset using only the four recommendation categories (i.e., we remove the non-climate-related class from the dataset).\n\n*Note: This model is trained on paragraphs. It may not perform well on sentences.*",
"## How to Get Started With the Model\n\nYou can use the model with a pipeline for text classification:"
] |
null |
transformers
|
# CLIP-Italian
CLIP Italian is a CLIP-like Model for Italian. The CLIP model (Contrastive Language–Image Pre-training) was developed by researchers at OpenAI and is able to efficiently learn visual concepts from natural language supervision.
We fine-tuned a competitive Italian CLIP model with only ~1.4 million Italian image-text pairs. This model is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Training Data
We considered three main sources of data:
- [WIT](https://github.com/google-research-datasets/wit)
- [MSCOCO-IT](https://github.com/crux82/mscoco-it)
- [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/)
## Training Procedure
Preprocessing, hardware used, hyperparameters...
## Evaluation Performance
## Limitations
## Usage
## Team members
- Federico Bianchi ([vinid](https://huggingface.co/vinid))
- Raphael Pisoni ([4rtemi5](https://huggingface.co/4rtemi5))
- Giuseppe Attanasio ([g8a9](https://huggingface.co/g8a9))
- Silvia Terragni ([silviatti](https://huggingface.co/silviatti))
- Dario Balestri ([D3Reo](https://huggingface.co/D3Reo))
- Gabriele Sarti ([gsarti](https://huggingface.co/gsarti))
- Sri Lakshmi ([srisweet](https://huggingface.co/srisweet))
## Useful links
- [CLIP Blog post](https://openai.com/blog/clip/)
- [CLIP paper](https://arxiv.org/abs/2103.00020)
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md)
- [Community Week channel](https://discord.com/channels/858019234139602994/859711887520038933)
- [Hybrid CLIP example scripts](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects/hybrid_clip)
- [Model Repository](https://huggingface.co/clip-italian/clip-italian-final/)
|
{"language": "it", "tags": ["italian", "bert", "vit", "vision"], "datasets": ["wit", "ctl/conceptualCaptions", "mscoco-it"]}
|
clip-italian/clip-italian-final
| null |
[
"transformers",
"jax",
"hybrid-clip",
"italian",
"bert",
"vit",
"vision",
"it",
"dataset:wit",
"dataset:ctl/conceptualCaptions",
"dataset:mscoco-it",
"arxiv:2103.00020",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.00020"
] |
[
"it"
] |
TAGS
#transformers #jax #hybrid-clip #italian #bert #vit #vision #it #dataset-wit #dataset-ctl/conceptualCaptions #dataset-mscoco-it #arxiv-2103.00020 #endpoints_compatible #region-us
|
# CLIP-Italian
CLIP Italian is a CLIP-like Model for Italian. The CLIP model (Contrastive Language–Image Pre-training) was developed by researchers at OpenAI and is able to efficiently learn visual concepts from natural language supervision.
We fine-tuned a competitive Italian CLIP model with only ~1.4 million Italian image-text pairs. This model is part of the Flax/Jax Community Week, organized by HuggingFace and TPU usage sponsored by Google.
## Training Data
We considered three main sources of data:
- WIT
- MSCOCO-IT
- Conceptual Captions
## Training Procedure
Preprocessing, hardware used, hyperparameters...
## Evaluation Performance
## Limitations
## Usage
## Team members
- Federico Bianchi (vinid)
- Raphael Pisoni (4rtemi5)
- Giuseppe Attanasio (g8a9)
- Silvia Terragni (silviatti)
- Dario Balestri (D3Reo)
- Gabriele Sarti (gsarti)
- Sri Lakshmi (srisweet)
## Useful links
- CLIP Blog post
- CLIP paper
- Community Week README
- Community Week channel
- Hybrid CLIP example scripts
- Model Repository
|
[
"# CLIP-Italian\nCLIP Italian is a CLIP-like Model for Italian. The CLIP model (Contrastive Language–Image Pre-training) was developed by researchers at OpenAI and is able to efficiently learn visual concepts from natural language supervision. \n\nWe fine-tuned a competitive Italian CLIP model with only ~1.4 million Italian image-text pairs. This model is part of the Flax/Jax Community Week, organized by HuggingFace and TPU usage sponsored by Google.",
"## Training Data\nWe considered three main sources of data: \n- WIT\n- MSCOCO-IT\n- Conceptual Captions",
"## Training Procedure\nPreprocessing, hardware used, hyperparameters...",
"## Evaluation Performance",
"## Limitations",
"## Usage",
"## Team members\n- Federico Bianchi (vinid)\n- Raphael Pisoni (4rtemi5)\n- Giuseppe Attanasio (g8a9)\n- Silvia Terragni (silviatti)\n- Dario Balestri (D3Reo)\n- Gabriele Sarti (gsarti)\n- Sri Lakshmi (srisweet)",
"## Useful links\n- CLIP Blog post\n- CLIP paper\n- Community Week README\n- Community Week channel\n- Hybrid CLIP example scripts\n- Model Repository"
] |
[
"TAGS\n#transformers #jax #hybrid-clip #italian #bert #vit #vision #it #dataset-wit #dataset-ctl/conceptualCaptions #dataset-mscoco-it #arxiv-2103.00020 #endpoints_compatible #region-us \n",
"# CLIP-Italian\nCLIP Italian is a CLIP-like Model for Italian. The CLIP model (Contrastive Language–Image Pre-training) was developed by researchers at OpenAI and is able to efficiently learn visual concepts from natural language supervision. \n\nWe fine-tuned a competitive Italian CLIP model with only ~1.4 million Italian image-text pairs. This model is part of the Flax/Jax Community Week, organized by HuggingFace and TPU usage sponsored by Google.",
"## Training Data\nWe considered three main sources of data: \n- WIT\n- MSCOCO-IT\n- Conceptual Captions",
"## Training Procedure\nPreprocessing, hardware used, hyperparameters...",
"## Evaluation Performance",
"## Limitations",
"## Usage",
"## Team members\n- Federico Bianchi (vinid)\n- Raphael Pisoni (4rtemi5)\n- Giuseppe Attanasio (g8a9)\n- Silvia Terragni (silviatti)\n- Dario Balestri (D3Reo)\n- Gabriele Sarti (gsarti)\n- Sri Lakshmi (srisweet)",
"## Useful links\n- CLIP Blog post\n- CLIP paper\n- Community Week README\n- Community Week channel\n- Hybrid CLIP example scripts\n- Model Repository"
] |
feature-extraction
|
transformers
|
# Italian CLIP
Paper: [Contrastive Language-Image Pre-training for the Italian Language](https://arxiv.org/abs/2108.08688)
With a few tricks, we have been able to fine-tune a competitive Italian CLIP model with **only 1.4 million** training samples. Our Italian CLIP model is built upon the [Italian BERT](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) model provided by [dbmdz](https://huggingface.co/dbmdz) and the OpenAI [vision transformer](https://huggingface.co/openai/clip-vit-base-patch32).
Do you want to test our model right away? We got you covered! You just need to head to our [demo application](https://huggingface.co/spaces/clip-italian/clip-italian-demo).
The demo also contains all the details of the project, from training tricks to our most impressive results, and much more!
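Before the details, here is a minimal usage sketch. It is not an official snippet from the authors: loading the checkpoint through `VisionTextDualEncoderModel` and reusing the OpenAI CLIP image preprocessing are our assumptions, based on the repository tags.
```python
import requests
import torch
from PIL import Image
from transformers import AutoTokenizer, CLIPImageProcessor, VisionTextDualEncoderModel

# Assumption: the repository loads as a vision-text dual encoder.
model = VisionTextDualEncoderModel.from_pretrained("clip-italian/clip-italian")
tokenizer = AutoTokenizer.from_pretrained("clip-italian/clip-italian")
# Assumption: the vision backbone uses the same preprocessing as OpenAI's ViT.
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["una foto di un gatto", "una foto di un cane"]  # "a photo of a cat/dog"

text_inputs = tokenizer(texts, padding=True, return_tensors="pt")
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    outputs = model(
        input_ids=text_inputs.input_ids,
        attention_mask=text_inputs.attention_mask,
        pixel_values=pixel_values,
    )

# logits_per_image holds the image-text similarity scores.
print(outputs.logits_per_image.softmax(dim=-1))
```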
# Training data
We considered four main sources of data:
+ [WIT](https://github.com/google-research-datasets/wit) is an image-caption dataset collected from Wikipedia (see,
[Srinivasan et al., 2021](https://arxiv.org/pdf/2103.01913.pdf)).
+ [MSCOCO-IT](https://github.com/crux82/mscoco-it). This image-caption dataset comes from the work by [Scaiella et al., 2019](http://www.ai-lc.it/IJCoL/v5n2/IJCOL_5_2_3___scaiella_et_al.pdf).
+ [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/). This image-caption dataset comes from
the work by [Sharma et al., 2018](https://aclanthology.org/P18-1238.pdf).
+ [La Foto del Giorno](https://www.ilpost.it/foto-del-giorno/). This image-caption dataset is collected from [Il Post](https://www.ilpost.it/), a prominent Italian online newspaper.
We used better data augmentation, strategic training choices (we have way less data than the original CLIP paper), and backbone-freezing pre-training. For all the details on that, please refer to our [demo](https://huggingface.co/spaces/clip-italian/clip-italian-demo).
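As an illustration of the backbone-freezing idea (a sketch only: the actual training used Flax/JAX and the exact schedule is described in the demo), in PyTorch it amounts to disabling gradients on both encoders so that only the projection heads receive updates during the warm-up phase:
```python
from transformers import VisionTextDualEncoderModel

# Assumption: the checkpoint loads as a vision-text dual encoder (see the
# usage sketch above); attribute names follow the transformers implementation.
model = VisionTextDualEncoderModel.from_pretrained("clip-italian/clip-italian")

for module in (model.vision_model, model.text_model):
    for param in module.parameters():
        param.requires_grad = False  # freeze the backbone

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # mostly the projection layers (and the logit scale) remain trainable
```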
# Experiments
## Quantitative Evaluation
To better understand how well our clip-italian model works, we ran an experimental evaluation. Since this is the first CLIP-based model for Italian, we used the multilingual CLIP model as a comparison baseline.
### mCLIP
The multilingual CLIP (henceforth, mCLIP) is a model introduced by [Nils Reimers](https://www.sbert.net/docs/pretrained_models.html) in his
[sentence-transformer](https://www.sbert.net/index.html) library. mCLIP is based on a multilingual encoder
that was created through multilingual knowledge distillation (see [Reimers et al., 2020](https://aclanthology.org/2020.emnlp-main.365/)).
### Tasks
We selected two different tasks:
+ image-retrieval
+ zero-shot classification
### Reproducibility
Both experiments should be very easy to replicate; we share the two Colab notebooks we used to compute the results:
+ [Image Retrieval](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing)
+ [ImageNet Zero Shot Evaluation](https://colab.research.google.com/drive/1zfWeVWY79XXH63Ci-pk8xxx3Vu_RRgW-?usp=sharing)
### Image Retrieval
This experiment is run against the MSCOCO-IT validation set (which we did not use in training). Given a caption as input,
we search for the most similar image in the MSCOCO-IT validation set. As evaluation metric
we use MRR@K.
| MRR | CLIP-Italian | mCLIP |
| --------------- | ------------ |-------|
| MRR@1 | **0.3797** | 0.2874|
| MRR@5 | **0.5039** | 0.3957|
| MRR@10 | **0.5204** | 0.4129|
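For reference, MRR@K can be computed from the rank of the correct image for each caption; a minimal sketch with made-up ranks (not our actual retrieval output):
```python
def mrr_at_k(ranks, k):
    """Mean Reciprocal Rank@K: `ranks` holds the 1-based rank of the correct
    image for each caption, or None if it was not retrieved at all."""
    total = sum(1.0 / r for r in ranks if r is not None and r <= k)
    return total / len(ranks)

# Hypothetical ranks for five captions.
ranks = [1, 3, None, 2, 8]
for k in (1, 5, 10):
    print(f"MRR@{k} = {mrr_at_k(ranks, k):.4f}")
```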
It is true that we used MSCOCO-IT in training, and this might give us an advantage. However, the original CLIP model was trained
on 400 million images (and some of them probably were from MSCOCO).
### Zero-shot image classification
This experiment replicates the original one run by OpenAI on zero-shot image classification on ImageNet.
To do this, we used DeepL to translate the ImageNet image labels into Italian. We evaluate the models by computing the accuracy at different top-k levels.
| Accuracy | CLIP-Italian | mCLIP |
| --------------- | ------------ |-------|
| Accuracy@1 | **22.11** | 20.15 |
| Accuracy@5 | **43.69** | 36.57 |
| Accuracy@10 | **52.55** | 42.91 |
| Accuracy@100 | **81.08** | 67.11 |
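For clarity, the accuracy@k reported above counts an image as correct when its true label appears among the k highest-scoring classes; a minimal sketch with made-up scores:
```python
import torch

def accuracy_at_k(logits, targets, k):
    """Fraction of images whose true label is among the top-k scored classes.
    `logits` is (num_images, num_classes); `targets` is (num_images,)."""
    topk = logits.topk(k, dim=-1).indices
    hits = (topk == targets.unsqueeze(-1)).any(dim=-1)
    return hits.float().mean().item()

# Hypothetical similarity scores for 4 images over 5 classes.
logits = torch.randn(4, 5)
targets = torch.tensor([0, 2, 1, 4])
for k in (1, 5):
    print(f"Accuracy@{k} = {accuracy_at_k(logits, targets, k):.2f}")
```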
Our results confirm that CLIP-Italian is very competitive and beats mCLIP on the two different tasks
we tested. Note, however, that our results are lower than those shown in the original OpenAI
paper (see [Radford et al., 2021](https://arxiv.org/abs/2103.00020)). Considering that our results are in line with those obtained by mCLIP, we think that
the translated image labels might have had an impact on the final scores.
# Team members
- Federico Bianchi ([vinid](https://huggingface.co/vinid))
- Raphael Pisoni ([4rtemi5](https://huggingface.co/4rtemi5))
- Giuseppe Attanasio ([g8a9](https://huggingface.co/g8a9))
- Silvia Terragni ([silviatti](https://huggingface.co/silviatti))
- Dario Balestri ([D3Reo](https://huggingface.co/D3Reo))
- Gabriele Sarti ([gsarti](https://huggingface.co/gsarti))
- Sri Lakshmi ([srisweet](https://huggingface.co/srisweet))
|
{"language": "it", "license": "gpl-3.0", "tags": ["italian", "bert", "vit", "vision"], "datasets": ["wit", "ctl/conceptualCaptions", "mscoco-it"]}
|
clip-italian/clip-italian
| null |
[
"transformers",
"pytorch",
"jax",
"vision-text-dual-encoder",
"feature-extraction",
"italian",
"bert",
"vit",
"vision",
"it",
"dataset:wit",
"dataset:ctl/conceptualCaptions",
"dataset:mscoco-it",
"arxiv:2108.08688",
"arxiv:2103.01913",
"arxiv:2103.00020",
"license:gpl-3.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2108.08688",
"2103.01913",
"2103.00020"
] |
[
"it"
] |
TAGS
#transformers #pytorch #jax #vision-text-dual-encoder #feature-extraction #italian #bert #vit #vision #it #dataset-wit #dataset-ctl/conceptualCaptions #dataset-mscoco-it #arxiv-2108.08688 #arxiv-2103.01913 #arxiv-2103.00020 #license-gpl-3.0 #endpoints_compatible #has_space #region-us
|
Italian CLIP
============
Paper: Contrastive Language-Image Pre-training for the Italian Language
With a few tricks, we have been able to fine-tune a competitive Italian CLIP model with only 1.4 million training samples. Our Italian CLIP model is built upon the Italian BERT model provided by dbmdz and the OpenAI vision transformer.
Do you want to test our model right away? We got you covered! You just need to head to our demo application.
The demo also contains all the details of the project, from training tricks to our most impressive results, and much more!
Training data
=============
We considered four main sources of data:
* WIT is an image-caption dataset collected from Wikipedia (see,
Srinivasan et al., 2021).
* MSCOCO-IT. This image-caption dataset comes from the work by Scaiella et al., 2019.
* Conceptual Captions. This image-caption dataset comes from
the work by Sharma et al., 2018.
* La Foto del Giorno. This image-caption dataset is collected from Il Post, a prominent Italian online newspaper.
We used better data augmentation, strategic training choices (we have way less data than the original CLIP paper), and backbone-freezing pre-training. For all the details on that, please refer to our demo.
Experiments
===========
Quantitative Evaluation
-----------------------
To better understand how well our clip-italian model works, we ran an experimental evaluation. Since this is the first CLIP-based model for Italian, we used the multilingual CLIP model as a comparison baseline.
### mCLIP
The multilingual CLIP (henceforth, mCLIP), is a model introduced by Nils Reimers in his
sentence-transformer library. mCLIP is based on a multilingual encoder
that was created through multilingual knowledge distillation (see Reimers et al., 2020).
### Tasks
We selected two different tasks:
* image-retrieval
* zero-shot classification
### Reproducibility
Both experiments should be very easy to replicate; we share the two Colab notebooks we used to compute the results:
* Image Retrieval
* ImageNet Zero Shot Evaluation
### Image Retrieval
This experiment is run against the MSCOCO-IT validation set (which we did not use in training). Given a caption as input,
we search for the most similar image in the MSCOCO-IT validation set. As evaluation metric
we use MRR@K.
MRR: MRR@1, CLIP-Italian: 0.3797, mCLIP: 0.2874
MRR: MRR@5, CLIP-Italian: 0.5039, mCLIP: 0.3957
MRR: MRR@10, CLIP-Italian: 0.5204, mCLIP: 0.4129
It is true that we used MSCOCO-IT in training, and this might give us an advantage. However, the original CLIP model was trained
on 400 million images (and some of them probably were from MSCOCO).
### Zero-shot image classification
This experiment replicates the original one run by OpenAI on zero-shot image classification on ImageNet.
To do this, we used DeepL to translate the ImageNet image labels into Italian. We evaluate the models by computing the accuracy at different top-k levels.
Accuracy: Accuracy@1, CLIP-Italian: 22.11, mCLIP: 20.15
Accuracy: Accuracy@5, CLIP-Italian: 43.69, mCLIP: 36.57
Accuracy: Accuracy@10, CLIP-Italian: 52.55, mCLIP: 42.91
Accuracy: Accuracy@100, CLIP-Italian: 81.08, mCLIP: 67.11
Our results confirm that CLIP-Italian is very competitive and beats mCLIP on the two different tasks
we tested. Note, however, that our results are lower than those shown in the original OpenAI
paper (see Radford et al., 2021). Considering that our results are in line with those obtained by mCLIP, we think that
the translated image labels might have had an impact on the final scores.
Team members
============
* Federico Bianchi (vinid)
* Raphael Pisoni (4rtemi5)
* Giuseppe Attanasio (g8a9)
* Silvia Terragni (silviatti)
* Dario Balestri (D3Reo)
* Gabriele Sarti (gsarti)
* Sri Lakshmi (srisweet)
|
[
"### mCLIP\n\n\nThe multilingual CLIP (henceforth, mCLIP), is a model introduced by Nils Reimers in his\nsentence-transformer library. mCLIP is based on a multilingual encoder\nthat was created through multilingual knowledge distillation (see Reimers et al., 2020).",
"### Tasks\n\n\nWe selected two different tasks:\n\n\n* image-retrieval\n* zero-shot classification",
"### Reproducibiliy\n\n\nBoth experiments should be very easy to replicate, we share the two colab notebook we used to compute the two results\n\n\n* Image Retrieval\n* ImageNet Zero Shot Evaluation",
"### Image Retrieval\n\n\nThis experiment is run against the MSCOCO-IT validation set (that we haven't used in training). Given in input\na caption, we search for the most similar image in the MSCOCO-IT validation set. As evaluation metrics\nwe use the MRR@K.\n\n\nMRR: MRR@1, CLIP-Italian: 0.3797, mCLIP: 0.2874\nMRR: MRR@5, CLIP-Italian: 0.5039, mCLIP: 0.3957\nMRR: MRR@10, CLIP-Italian: 0.5204, mCLIP: 0.4129\n\n\nIt is true that we used MSCOCO-IT in training, and this might give us an advantage. However the original CLIP model was trained\non 400million images (and some of them probably were from MSCOCO).",
"### Zero-shot image classification\n\n\nThis experiment replicates the original one run by OpenAI on zero-shot image classification on ImageNet.\nTo do this, we used DeepL to translate the image labels in ImageNet. We evaluate the models computing the accuracy at different levels.\n\n\nAccuracy: Accuracy@1, CLIP-Italian: 22.11, mCLIP: 20.15\nAccuracy: Accuracy@5, CLIP-Italian: 43.69, mCLIP: 36.57\nAccuracy: Accuracy@10, CLIP-Italian: 52.55, mCLIP: 42.91\nAccuracy: Accuracy@100, CLIP-Italian: 81.08, mCLIP: 67.11\n\n\nOur results confirm that CLIP-Italian is very competitive and beats mCLIP on the two different task\nwe have been testing. Note, however, that our results are lower than those shown in the original OpenAI\npaper (see, Radford et al., 2021). However, considering that our results are in line with those obtained by mCLIP we think that\nthe translated image labels might have had an impact on the final scores.\n\n\nTeam members\n============\n\n\n* Federico Bianchi (vinid)\n* Raphael Pisoni (4rtemi5)\n* Giuseppe Attanasio (g8a9)\n* Silvia Terragni (silviatti)\n* Dario Balestri (D3Reo)\n* Gabriele Sarti (gsarti)\n* Sri Lakshmi (srisweet)"
] |
[
"TAGS\n#transformers #pytorch #jax #vision-text-dual-encoder #feature-extraction #italian #bert #vit #vision #it #dataset-wit #dataset-ctl/conceptualCaptions #dataset-mscoco-it #arxiv-2108.08688 #arxiv-2103.01913 #arxiv-2103.00020 #license-gpl-3.0 #endpoints_compatible #has_space #region-us \n",
"### mCLIP\n\n\nThe multilingual CLIP (henceforth, mCLIP), is a model introduced by Nils Reimers in his\nsentence-transformer library. mCLIP is based on a multilingual encoder\nthat was created through multilingual knowledge distillation (see Reimers et al., 2020).",
"### Tasks\n\n\nWe selected two different tasks:\n\n\n* image-retrieval\n* zero-shot classification",
"### Reproducibiliy\n\n\nBoth experiments should be very easy to replicate, we share the two colab notebook we used to compute the two results\n\n\n* Image Retrieval\n* ImageNet Zero Shot Evaluation",
"### Image Retrieval\n\n\nThis experiment is run against the MSCOCO-IT validation set (that we haven't used in training). Given in input\na caption, we search for the most similar image in the MSCOCO-IT validation set. As evaluation metrics\nwe use the MRR@K.\n\n\nMRR: MRR@1, CLIP-Italian: 0.3797, mCLIP: 0.2874\nMRR: MRR@5, CLIP-Italian: 0.5039, mCLIP: 0.3957\nMRR: MRR@10, CLIP-Italian: 0.5204, mCLIP: 0.4129\n\n\nIt is true that we used MSCOCO-IT in training, and this might give us an advantage. However the original CLIP model was trained\non 400million images (and some of them probably were from MSCOCO).",
"### Zero-shot image classification\n\n\nThis experiment replicates the original one run by OpenAI on zero-shot image classification on ImageNet.\nTo do this, we used DeepL to translate the image labels in ImageNet. We evaluate the models computing the accuracy at different levels.\n\n\nAccuracy: Accuracy@1, CLIP-Italian: 22.11, mCLIP: 20.15\nAccuracy: Accuracy@5, CLIP-Italian: 43.69, mCLIP: 36.57\nAccuracy: Accuracy@10, CLIP-Italian: 52.55, mCLIP: 42.91\nAccuracy: Accuracy@100, CLIP-Italian: 81.08, mCLIP: 67.11\n\n\nOur results confirm that CLIP-Italian is very competitive and beats mCLIP on the two different task\nwe have been testing. Note, however, that our results are lower than those shown in the original OpenAI\npaper (see, Radford et al., 2021). However, considering that our results are in line with those obtained by mCLIP we think that\nthe translated image labels might have had an impact on the final scores.\n\n\nTeam members\n============\n\n\n* Federico Bianchi (vinid)\n* Raphael Pisoni (4rtemi5)\n* Giuseppe Attanasio (g8a9)\n* Silvia Terragni (silviatti)\n* Dario Balestri (D3Reo)\n* Gabriele Sarti (gsarti)\n* Sri Lakshmi (srisweet)"
] |
feature-extraction
|
transformers
|
# CoNTACT
### Model description
<u>Co</u>ntextual <u>N</u>eural <u>T</u>ransformer <u>A</u>dapted to <u>C</u>OVID-19 <u>T</u>weets or **CoNTACT** is a Dutch RobBERT model (```pdelobelle/robbert-v2-dutch-base```) adapted to the domain of COVID-19 tweets. The model was developed at [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/) by Jens Lemmens, Jens Van Nooten, Tim Kreutz and Walter Daelemans. A full description of the model, the data that was used and the experiments that were conducted can be found in this ArXiv preprint: https://arxiv.org/abs/2203.07362
### Intended use
The model was developed with the intention of achieving high results on NLP tasks involving Dutch social media messages related to COVID-19.
### How to use
CoNTACT should be fine-tuned on a downstream task. This can be achieved by referring to ```clips/contact``` in the ```--model_name_or_path``` argument in Huggingface/Transformers' example scripts, or by loading CoNTACT (as shown below) and fine-tuning it using your own code:
```
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('clips/contact')
tokenizer = AutoTokenizer.from_pretrained('clips/contact')
...
```
### Training data
CoNTACT was trained on 2.8M Dutch tweets related to COVID-19 that were posted in 2021.
### Training Procedure
The model's pre-training phase was extended by performing Masked Language Modeling (MLM) on the training data described above. This was done for 4 epochs, using the largest possible batch size that fit working memory (32).
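For illustration only (this is not the authors' training script), continued MLM pre-training of RobBERT on a tweet corpus could look roughly like the sketch below; the corpus file name, sequence length, and masking probability are assumptions, while the epoch count and batch size follow the description above.
```
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical local corpus: one COVID-19 tweet per line.
dataset = load_dataset("text", data_files={"train": "covid_tweets_2021.txt"})["train"]

tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = AutoModelForMaskedLM.from_pretrained("pdelobelle/robbert-v2-dutch-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="contact-mlm",
    num_train_epochs=4,              # as stated above
    per_device_train_batch_size=32,  # largest batch that fit memory, per the card
    save_strategy="epoch",
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```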
### Evaluation
The model was evaluated on two tasks using data from two social media platforms: Twitter and Facebook. Task 1 involved the binary classification of COVID-19 vaccine stance (hesitant vs. not hesitant), whereas task 2 consisted of the multilabel, multiclass classification of arguments for vaccine hesitancy. CoNTACT outperformed out-of-the-box RobBERT in virtually all our experiments, and with statistical significance in most cases.
### How to cite
```
@misc{lemmens2022contact,
title={CoNTACT: A Dutch COVID-19 Adapted BERT for Vaccine Hesitancy and Argumentation Detection},
author={Jens Lemmens and Jens Van Nooten and Tim Kreutz and Walter Daelemans},
year={2022},
eprint={2203.07362},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{}
|
clips/contact
| null |
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2203.07362",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2203.07362"
] |
[] |
TAGS
#transformers #pytorch #roberta #feature-extraction #arxiv-2203.07362 #endpoints_compatible #region-us
|
# CoNTACT
### Model description
<u>Co</u>ntextual <u>N</u>eural <u>T</u>ransformer <u>A</u>dapted to <u>C</u>OVID-19 <u>T</u>weets or CoNTACT is a Dutch RobBERT model () adapted to the domain of COVID-19 tweets. The model was developed at CLiPS by Jens Lemmens, Jens Van Nooten, Tim Kreutz and Walter Daelemans. A full description of the model, the data that was used and the experiments that were conducted can be found in this ArXiv preprint: URL
### Intended use
The model was developed with the intention of achieving high results on NLP tasks involving Dutch social media messages related to COVID-19.
### How to use
CoNTACT should be fine-tuned on a downstream task. This can be achieved by referring to in the argument in Huggingface/Transformers' example scripts, or by loading CoNTACT (as shown below) and fine-tuning it using your own code:
### Training data
CoNTACT was trained on 2.8M Dutch tweets related to COVID-19 that were posted in 2021.
### Training Procedure
The model's pre-training phase was extended by performing Masked Language Modeling (MLM) on the training data described above. This was done for 4 epochs, using the largest possible batch size that fit working memory (32).
### Evaluation
The model was evaluated on two tasks using data from two social media platforms: Twitter and Facebook. Task 1 involved the binary classification of COVID-19 vaccine stance (hesitant vs. not hesitant), whereas task 2 consisted of the multilabel, multiclass classification of arguments for vaccine hesitancy. CoNTACT outperformed out-of-the-box RobBERT in virtually all our experiments, and with statistical significance in most cases.
### How to cite
|
[
"# CoNTACT",
"### Model description\n\n<u>Co</u>ntextual <u>N</u>eural <u>T</u>ransformer <u>A</u>dapted to <u>C</u>OVID-19 <u>T</u>weets or CoNTACT is a Dutch RobBERT model () adapted to the domain of COVID-19 tweets. The model was developed at CLiPS by Jens Lemmens, Jens Van Nooten, Tim Kreutz and Walter Daelemans. A full description of the model, the data that was used and the experiments that were conducted can be found in this ArXiv preprint: URL",
"### Intended use\n\nThe model was developed with the intention of achieving high results on NLP tasks involving Dutch social media messages related to COVID-19.",
"### How to use\n\nCoNTACT should be fine-tuned on a downstream task. This can be achieved by referring to in the argument in Huggingface/Transformers' example scripts, or by loading CoNTACT (as shown below) and fine-tuning it using your own code:",
"### Training data\n\nCoNTACT was trained on 2.8M Dutch tweets related to COVID-19 that were posted in 2021.",
"### Training Procedure\n\nThe model's pre-training phase was extended by performing Masked Language Modeling (MLM) on the training data described above. This was done for 4 epochs, using the largest possible batch size that fit working memory (32).",
"### Evaluation\n\nThe model was evaluated on two tasks using data from two social media platforms: Twitter and Facebook. Task 1 involved the binary classification of COVID-19 vaccine stance (hesitant vs. not hesitant), whereas task 2 consisted of the mulilabel, multiclass classification of arguments for vaccine hesitancy. CoNTACT outperformed out-of-the-box RobBERT in virtually all our experiments, and with statistical significance in most cases.",
"### How to cite"
] |
[
"TAGS\n#transformers #pytorch #roberta #feature-extraction #arxiv-2203.07362 #endpoints_compatible #region-us \n",
"# CoNTACT",
"### Model description\n\n<u>Co</u>ntextual <u>N</u>eural <u>T</u>ransformer <u>A</u>dapted to <u>C</u>OVID-19 <u>T</u>weets or CoNTACT is a Dutch RobBERT model () adapted to the domain of COVID-19 tweets. The model was developed at CLiPS by Jens Lemmens, Jens Van Nooten, Tim Kreutz and Walter Daelemans. A full description of the model, the data that was used and the experiments that were conducted can be found in this ArXiv preprint: URL",
"### Intended use\n\nThe model was developed with the intention of achieving high results on NLP tasks involving Dutch social media messages related to COVID-19.",
"### How to use\n\nCoNTACT should be fine-tuned on a downstream task. This can be achieved by referring to in the argument in Huggingface/Transformers' example scripts, or by loading CoNTACT (as shown below) and fine-tuning it using your own code:",
"### Training data\n\nCoNTACT was trained on 2.8M Dutch tweets related to COVID-19 that were posted in 2021.",
"### Training Procedure\n\nThe model's pre-training phase was extended by performing Masked Language Modeling (MLM) on the training data described above. This was done for 4 epochs, using the largest possible batch size that fit working memory (32).",
"### Evaluation\n\nThe model was evaluated on two tasks using data from two social media platforms: Twitter and Facebook. Task 1 involved the binary classification of COVID-19 vaccine stance (hesitant vs. not hesitant), whereas task 2 consisted of the mulilabel, multiclass classification of arguments for vaccine hesitancy. CoNTACT outperformed out-of-the-box RobBERT in virtually all our experiments, and with statistical significance in most cases.",
"### How to cite"
] |
sentence-similarity
|
sentence-transformers
|
# MFAQ
We present a multilingual FAQ retrieval model trained on the [MFAQ dataset](https://huggingface.co/datasets/clips/mfaq), it ranks candidate answers according to a given question.
## Installation
```
pip install sentence-transformers transformers
```
## Usage
You can use MFAQ with sentence-transformers or directly with a HuggingFace model.
In both cases, questions need to be prepended with `<Q>`, and answers with `<A>`.
#### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
question = "<Q>How many models can I host on HuggingFace?"
answer_1 = "<A>All plans come with unlimited private models and datasets."
answer_2 = "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem."
answer_3 = "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job."
model = SentenceTransformer('clips/mfaq')
embeddings = model.encode([question, answer_1, answer_2, answer_3])
print(embeddings)
```
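To actually rank the candidate answers, you can score each answer embedding against the question embedding; the cosine-similarity criterion below is our choice for illustration, not an official recommendation.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('clips/mfaq')

question = "<Q>How many models can I host on HuggingFace?"
answers = [
    "<A>All plans come with unlimited private models and datasets.",
    "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem.",
    "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job.",
]

question_emb = model.encode(question, convert_to_tensor=True)
answer_embs = model.encode(answers, convert_to_tensor=True)

# Rank answers by cosine similarity to the question.
scores = util.pytorch_cos_sim(question_emb, answer_embs)[0]
best = int(scores.argmax())
print(f"Best answer (score={float(scores[best]):.3f}): {answers[best]}")
```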
#### HuggingFace Transformers
```python
from transformers import AutoTokenizer, AutoModel
import torch
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
question = "<Q>How many models can I host on HuggingFace?"
answer_1 = "<A>All plans come with unlimited private models and datasets."
answer_2 = "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem."
answer_3 = "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job."
tokenizer = AutoTokenizer.from_pretrained('clips/mfaq')
model = AutoModel.from_pretrained('clips/mfaq')
# Tokenize sentences
encoded_input = tokenizer([question, answer_1, answer_2, answer_3], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Training
You can find the training script for the model [here](https://github.com/clips/mfaq).
## People
This model was developed by [Maxime De Bruyn](https://www.linkedin.com/in/maximedebruyn/), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.
## Citation information
```
@misc{debruyn2021mfaq,
title={MFAQ: a Multilingual FAQ Dataset},
author={Maxime De Bruyn and Ehsan Lotfi and Jeska Buhmann and Walter Daelemans},
year={2021},
eprint={2109.12870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["cs", "da", "de", "en", "es", "fi", "fr", "he", "hr", "hu", "id", "it", "nl", "no", "pl", "pt", "ro", "ru", "sv", "tr", "vi"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["clips/mfaq"], "pipeline_tag": "sentence-similarity", "widget": {"source_sentence": "<Q>How many models can I host on HuggingFace?", "sentences": ["<A>All plans come with unlimited private models and datasets.", "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem.", "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job."]}}
|
clips/mfaq
| null |
[
"sentence-transformers",
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"cs",
"da",
"de",
"en",
"es",
"fi",
"fr",
"he",
"hr",
"hu",
"id",
"it",
"nl",
"no",
"pl",
"pt",
"ro",
"ru",
"sv",
"tr",
"vi",
"dataset:clips/mfaq",
"arxiv:2109.12870",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.12870"
] |
[
"cs",
"da",
"de",
"en",
"es",
"fi",
"fr",
"he",
"hr",
"hu",
"id",
"it",
"nl",
"no",
"pl",
"pt",
"ro",
"ru",
"sv",
"tr",
"vi"
] |
TAGS
#sentence-transformers #pytorch #tf #xlm-roberta #feature-extraction #sentence-similarity #transformers #cs #da #de #en #es #fi #fr #he #hr #hu #id #it #nl #no #pl #pt #ro #ru #sv #tr #vi #dataset-clips/mfaq #arxiv-2109.12870 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# MFAQ
We present a multilingual FAQ retrieval model trained on the MFAQ dataset, it ranks candidate answers according to a given question.
## Installation
## Usage
You can use MFAQ with sentence-transformers or directly with a HuggingFace model.
In both cases, questions need to be prepended with '<Q>', and answers with '<A>'.
#### Sentence Transformers
#### HuggingFace Transformers
## Training
You can find the training script for the model here.
## People
This model was developed by Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.
information
|
[
"# MFAQ\n\nWe present a multilingual FAQ retrieval model trained on the MFAQ dataset, it ranks candidate answers according to a given question.",
"## Installation",
"## Usage\nYou can use MFAQ with sentence-transformers or directly with a HuggingFace model. \nIn both cases, questions need to be prepended with '<Q>', and answers with '<A>'.",
"#### Sentence Transformers",
"#### HuggingFace Transformers",
"## Training\nYou can find the training script for the model here.",
"## People\nThis model was developed by Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.\n\ninformation"
] |
[
"TAGS\n#sentence-transformers #pytorch #tf #xlm-roberta #feature-extraction #sentence-similarity #transformers #cs #da #de #en #es #fi #fr #he #hr #hu #id #it #nl #no #pl #pt #ro #ru #sv #tr #vi #dataset-clips/mfaq #arxiv-2109.12870 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# MFAQ\n\nWe present a multilingual FAQ retrieval model trained on the MFAQ dataset, it ranks candidate answers according to a given question.",
"## Installation",
"## Usage\nYou can use MFAQ with sentence-transformers or directly with a HuggingFace model. \nIn both cases, questions need to be prepended with '<Q>', and answers with '<A>'.",
"#### Sentence Transformers",
"#### HuggingFace Transformers",
"## Training\nYou can find the training script for the model here.",
"## People\nThis model was developed by Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.\n\ninformation"
] |
null |
transformers
|
## albert_chinese_small
### Overview
**Language model:** albert-small
**Model size:** 18.5M
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
**NOTE:** Since sentencepiece is not used in the `albert_chinese_small` model, you have to call **BertTokenizer** instead of AlbertTokenizer!
```
import torch
from transformers import BertTokenizer, AlbertModel
tokenizer = BertTokenizer.from_pretrained("clue/albert_chinese_small")
albert = AlbertModel.from_pretrained("clue/albert_chinese_small")
```
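For example, sentence features can be extracted from the loaded model as follows (a minimal sketch; the mean pooling at the end is our choice, not an official recommendation):
```
import torch
from transformers import BertTokenizer, AlbertModel

tokenizer = BertTokenizer.from_pretrained("clue/albert_chinese_small")
albert = AlbertModel.from_pretrained("clue/albert_chinese_small")

# Example input: "The weather is nice today."
inputs = tokenizer("今天天气很好。", return_tensors="pt")
with torch.no_grad():
    outputs = albert(**inputs)

# Simple mean pooling over the token embeddings.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)
```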
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
{"language": "zh"}
|
clue/albert_chinese_small
| null |
[
"transformers",
"pytorch",
"albert",
"zh",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #zh #endpoints_compatible #region-us
|
## albert_chinese_small
### Overview
Language model: albert-small
Model size: 18.5M
Language: Chinese
Training data: CLUECorpusSmall
Eval data: CLUE dataset
### Results
For results on downstream tasks like text classification, please refer to this repository.
### Usage
NOTE: Since sentencepiece is not used in the 'albert_chinese_small' model, you have to call BertTokenizer instead of AlbertTokenizer!
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: URL
Website: URL
|
[
"## albert_chinese_small",
"### Overview\n\nLanguage model: albert-small\nModel size: 18.5M\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage\n\nNOTE:Since sentencepiece is not used in 'albert_chinese_small' model, you have to call BertTokenizer instead of AlbertTokenizer !!!",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
[
"TAGS\n#transformers #pytorch #albert #zh #endpoints_compatible #region-us \n",
"## albert_chinese_small",
"### Overview\n\nLanguage model: albert-small\nModel size: 18.5M\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage\n\nNOTE:Since sentencepiece is not used in 'albert_chinese_small' model, you have to call BertTokenizer instead of AlbertTokenizer !!!",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
null |
transformers
|
## albert_chinese_tiny
### Overview
**Language model:** albert-tiny
**Model size:** 16M
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
**NOTE:** Since sentencepiece is not used in the `albert_chinese_tiny` model, you have to call **BertTokenizer** instead of AlbertTokenizer!
```
import torch
from transformers import BertTokenizer, AlbertModel
tokenizer = BertTokenizer.from_pretrained("clue/albert_chinese_tiny")
albert = AlbertModel.from_pretrained("clue/albert_chinese_tiny")
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
{"language": "zh"}
|
clue/albert_chinese_tiny
| null |
[
"transformers",
"pytorch",
"albert",
"zh",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #albert #zh #endpoints_compatible #region-us
|
## albert_chinese_tiny
### Overview
Language model: albert-tiny
Model size: 16M
Language: Chinese
Training data: CLUECorpusSmall
Eval data: CLUE dataset
### Results
For results on downstream tasks like text classification, please refer to this repository.
### Usage
NOTE: Since sentencepiece is not used in the 'albert_chinese_tiny' model, you have to call BertTokenizer instead of AlbertTokenizer!
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: URL
Website: URL
|
[
"## albert_chinese_tiny",
"### Overview\n\nLanguage model: albert-tiny\nModel size: 16M\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage\n\nNOTE:Since sentencepiece is not used in 'albert_chinese_tiny' model, you have to call BertTokenizer instead of AlbertTokenizer !!!",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
[
"TAGS\n#transformers #pytorch #albert #zh #endpoints_compatible #region-us \n",
"## albert_chinese_tiny",
"### Overview\n\nLanguage model: albert-tiny\nModel size: 16M\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage\n\nNOTE:Since sentencepiece is not used in 'albert_chinese_tiny' model, you have to call BertTokenizer instead of AlbertTokenizer !!!",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
null |
transformers
|
# Introduction
This model was trained on TPU and the details are as follows:
## Model
| Model_name | params | size | Training_corpus | Vocab |
| :------------------------------------------ | :----- | :------- | :----------------- | :-----------: |
| **`RoBERTa-tiny-clue`** <br/>Super_small_model | 7.5M | 28.3M | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-tiny-pair`** <br/>Super_small_sentence_pair_model | 7.5M | 28.3M | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-tiny3L768-clue`** <br/>small_model | 38M | 110M | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-tiny3L312-clue`** <br/>small_model | <7.5M | 24M | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-large-clue`** <br/> Large_model | 290M | 1.20G | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-large-pair`** <br/>Large_sentence_pair_model | 290M | 1.20G | **CLUECorpus2020** | **CLUEVocab** |
### Usage
With the help of [Huggingface-Transformers 2.5.1](https://github.com/huggingface/transformers), you can use these models as follows
```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = BertModel.from_pretrained("MODEL_NAME")
```
`MODEL_NAME`:
| Model_NAME | MODEL_LINK |
| -------------------------- | ------------------------------------------------------------ |
| **RoBERTa-tiny-clue** | [`clue/roberta_chinese_clue_tiny`](https://huggingface.co/clue/roberta_chinese_clue_tiny) |
| **RoBERTa-tiny-pair** | [`clue/roberta_chinese_pair_tiny`](https://huggingface.co/clue/roberta_chinese_pair_tiny) |
| **RoBERTa-tiny3L768-clue** | [`clue/roberta_chinese_3L768_clue_tiny`](https://huggingface.co/clue/roberta_chinese_3L768_clue_tiny) |
| **RoBERTa-tiny3L312-clue** | [`clue/roberta_chinese_3L312_clue_tiny`](https://huggingface.co/clue/roberta_chinese_3L312_clue_tiny) |
| **RoBERTa-large-clue** | [`clue/roberta_chinese_clue_large`](https://huggingface.co/clue/roberta_chinese_clue_large) |
| **RoBERTa-large-pair** | [`clue/roberta_chinese_pair_large`](https://huggingface.co/clue/roberta_chinese_pair_large) |
## Details
Please read the paper: [https://arxiv.org/pdf/2003.01355](https://arxiv.org/pdf/2003.01355).
Please visit our repository: https://github.com/CLUEbenchmark/CLUEPretrainedModels.git
|
{"language": "zh"}
|
clue/roberta_chinese_3L312_clue_tiny
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"zh",
"arxiv:2003.01355",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2003.01355"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #roberta #zh #arxiv-2003.01355 #endpoints_compatible #region-us
|
Introduction
============
This model was trained on TPU and the details are as follows:
Model
-----
### Usage
With the help of Huggingface-Transformers 2.5.1, you can use these models as follows
'MODEL\_NAME':
Details
-------
Please read the paper: URL
Please visit our repository: URL
|
[
"### Usage\n\n\nWith the help ofHuggingface-Transformers 2.5.1, you could use these model as follows\n\n\n'MODEL\\_NAME':\n\n\n\nDetails\n-------\n\n\nPlease read <a href='URL/URL\n\n\nPlease visit our repository: URL"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #zh #arxiv-2003.01355 #endpoints_compatible #region-us \n",
"### Usage\n\n\nWith the help ofHuggingface-Transformers 2.5.1, you could use these model as follows\n\n\n'MODEL\\_NAME':\n\n\n\nDetails\n-------\n\n\nPlease read <a href='URL/URL\n\n\nPlease visit our repository: URL"
] |
null |
transformers
|
## roberta_chinese_base
### Overview
**Language model:** roberta-base
**Model size:** 392M
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
**NOTE:** You have to call **BertTokenizer** instead of RobertaTokenizer !!!
```
import torch
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_base")
roberta = BertModel.from_pretrained("clue/roberta_chinese_base")
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
{"language": "zh"}
|
clue/roberta_chinese_base
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"zh",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #roberta #zh #endpoints_compatible #region-us
|
## roberta_chinese_base
### Overview
Language model: roberta-base
Model size: 392M
Language: Chinese
Training data: CLUECorpusSmall
Eval data: CLUE dataset
### Results
For results on downstream tasks like text classification, please refer to this repository.
### Usage
NOTE: You have to call BertTokenizer instead of RobertaTokenizer !!!
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: URL
Website: URL
|
[
"## roberta_chinese_base",
"### Overview\n\nLanguage model: roberta-base\nModel size: 392M\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage\n\nNOTE: You have to call BertTokenizer instead of RobertaTokenizer !!!",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #zh #endpoints_compatible #region-us \n",
"## roberta_chinese_base",
"### Overview\n\nLanguage model: roberta-base\nModel size: 392M\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage\n\nNOTE: You have to call BertTokenizer instead of RobertaTokenizer !!!",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
null |
transformers
|
## roberta_chinese_large
### Overview
**Language model:** roberta-large
**Model size:** 1.2G
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
**NOTE:** You have to call **BertTokenizer** instead of RobertaTokenizer !!!
```
import torch
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_large")
roberta = BertModel.from_pretrained("clue/roberta_chinese_large")
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
{"language": "zh"}
|
clue/roberta_chinese_large
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"zh",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #roberta #zh #endpoints_compatible #region-us
|
## roberta_chinese_large
### Overview
Language model: roberta-large
Model size: 1.2G
Language: Chinese
Training data: CLUECorpusSmall
Eval data: CLUE dataset
### Results
For results on downstream tasks like text classification, please refer to this repository.
### Usage
NOTE: You have to call BertTokenizer instead of RobertaTokenizer !!!
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: URL
Website: URL
|
[
"## roberta_chinese_large",
"### Overview\n\nLanguage model: roberta-large\nModel size: 1.2G\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage\n\nNOTE: You have to call BertTokenizer instead of RobertaTokenizer !!!",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #zh #endpoints_compatible #region-us \n",
"## roberta_chinese_large",
"### Overview\n\nLanguage model: roberta-large\nModel size: 1.2G\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage\n\nNOTE: You have to call BertTokenizer instead of RobertaTokenizer !!!",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
null |
transformers
|
## xlnet_chinese_large
### Overview
**Language model:** xlnet-large
**Model size:** 1.3G
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
```
import torch
from transformers import XLNetTokenizer,XLNetModel
tokenizer = XLNetTokenizer.from_pretrained("clue/xlnet_chinese_large")
xlnet = XLNetModel.from_pretrained("clue/xlnet_chinese_large")
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
{"language": "zh"}
|
clue/xlnet_chinese_large
| null |
[
"transformers",
"pytorch",
"xlnet",
"zh",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #xlnet #zh #endpoints_compatible #region-us
|
## xlnet_chinese_large
### Overview
Language model: xlnet-large
Model size: 1.3G
Language: Chinese
Training data: CLUECorpusSmall
Eval data: CLUE dataset
### Results
For results on downstream tasks like text classification, please refer to this repository.
### Usage
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: URL
Website: URL
|
[
"## xlnet_chinese_large",
"### Overview\n\nLanguage model: xlnet-large\nModel size: 1.3G\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
[
"TAGS\n#transformers #pytorch #xlnet #zh #endpoints_compatible #region-us \n",
"## xlnet_chinese_large",
"### Overview\n\nLanguage model: xlnet-large\nModel size: 1.3G\nLanguage: Chinese\nTraining data: CLUECorpusSmall\nEval data: CLUE dataset",
"### Results\n\nFor results on downstream tasks like text classification, please refer to this repository.",
"### Usage",
"### About CLUE benchmark\n\nOrganization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.\n\nGithub: URL\nWebsite: URL"
] |
token-classification
|
transformers
|
DistilCamemBERT-NER
===================
We present DistilCamemBERT-NER, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the NER (Named Entity Recognition) task for the French language. The work is inspired by [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The problem with CamemBERT-based models appears at scale, for example in the production phase: inference cost can become a technological issue. To counteract this effect, we propose this model, which **divides the inference time by two** at the same power consumption, thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base).
Dataset
-------
The dataset used is [wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr), which represents ~170k sentences labeled with 5 categories:
* PER: person;
* LOC: location;
* ORG: organization;
* MISC: miscellaneous entities (movie titles, books, etc.);
* O: background (Outside entity).
Evaluation results
------------------
| **class** | **precision (%)** | **recall (%)** | **f1 (%)** | **support (#sub-word)** |
| :------------: | :---------------: | :------------: | :--------: | :---------------------: |
| **global** | 98.17 | 98.19 | 98.18 | 378,776 |
| **PER** | 96.78 | 96.87 | 96.82 | 23,754 |
| **LOC** | 94.05 | 93.59 | 93.82 | 27,196 |
| **ORG** | 86.05 | 85.92 | 85.98 | 6,526 |
| **MISC** | 88.78 | 84.69 | 86.69 | 11,891 |
| **O** | 99.26 | 99.47 | 99.37 | 309,409 |
Benchmark
---------
This model's performance is compared to 2 reference models (see below) using the F1 score. The mean inference time was measured on an AMD Ryzen 5 4500U @ 2.3GHz with 6 cores:
| **model** | **time (ms)** | **PER (%)** | **LOC (%)** | **ORG (%)** | **MISC (%)** | **O (%)** |
| :---------------------------------------------------------------------------------------------------------------: | :-----------: | :---------: | :---------: | :---------: | :-----------: | :-------: |
| [cmarkea/distilcamembert-base-ner](https://huggingface.co/cmarkea/distilcamembert-base-ner) | **43.44** | **96.82** | **93.82** | **85.98** | **86.69** | **99.37** |
| [Davlan/bert-base-multilingual-cased-ner-hrl](https://huggingface.co/Davlan/bert-base-multilingual-cased-ner-hrl) | 87.56 | 79.93 | 72.89 | 61.34 | n/a | 96.04 |
| [flair/ner-french](https://huggingface.co/flair/ner-french) | 314.96 | 82.91 | 76.17 | 70.96 | 76.29 | 97.65 |
How to use DistilCamemBERT-NER
------------------------------
```python
from transformers import pipeline
ner = pipeline(
task='ner',
model="cmarkea/distilcamembert-base-ner",
tokenizer="cmarkea/distilcamembert-base-ner",
aggregation_strategy="simple"
)
result = ner(
"Le Crédit Mutuel Arkéa est une banque Française, elle comprend le CMB "
"qui est une banque située en Bretagne et le CMSO qui est une banque "
"qui se situe principalement en Aquitaine. C'est sous la présidence de "
"Louis Lichou, dans les années 1980 que différentes filiales sont créées "
"au sein du CMB et forment les principales filiales du groupe qui "
"existent encore aujourd'hui (Federal Finance, Suravenir, Financo, etc.)."
)
result
[{'entity_group': 'ORG',
'score': 0.9974479,
'word': 'Crédit Mutuel Arkéa',
'start': 3,
'end': 22},
{'entity_group': 'LOC',
'score': 0.9000358,
'word': 'Française',
'start': 38,
'end': 47},
{'entity_group': 'ORG',
'score': 0.9788757,
'word': 'CMB',
'start': 66,
'end': 69},
{'entity_group': 'LOC',
'score': 0.99919766,
'word': 'Bretagne',
'start': 99,
'end': 107},
{'entity_group': 'ORG',
'score': 0.9594884,
'word': 'CMSO',
'start': 114,
'end': 118},
{'entity_group': 'LOC',
'score': 0.99935514,
'word': 'Aquitaine',
'start': 169,
'end': 178},
{'entity_group': 'PER',
'score': 0.99911094,
'word': 'Louis Lichou',
'start': 208,
'end': 220},
{'entity_group': 'ORG',
'score': 0.96226394,
'word': 'CMB',
'start': 291,
'end': 294},
{'entity_group': 'ORG',
'score': 0.9983959,
'word': 'Federal Finance',
'start': 374,
'end': 389},
{'entity_group': 'ORG',
'score': 0.9984454,
'word': 'Suravenir',
'start': 391,
'end': 400},
{'entity_group': 'ORG',
'score': 0.9985084,
'word': 'Financo',
'start': 402,
'end': 409}]
```
### Optimum + ONNX
```python
from optimum.onnxruntime import ORTModelForTokenClassification
from transformers import AutoTokenizer, pipeline
HUB_MODEL = "cmarkea/distilcamembert-base-ner"
tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
model = ORTModelForTokenClassification.from_pretrained(HUB_MODEL)
onnx_ner = pipeline("token-classification", model=model, tokenizer=tokenizer)
# Quantized onnx model
quantized_model = ORTModelForTokenClassification.from_pretrained(
HUB_MODEL, file_name="model_quantized.onnx"
)
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
```
|
{"language": "fr", "license": "mit", "datasets": ["Jean-Baptiste/wikiner_fr"], "widget": [{"text": "Boulanger, habitant \u00e0 Boulanger et travaillant dans le magasin Boulanger situ\u00e9 dans la ville de Boulanger. Boulanger a \u00e9crit le livre \u00e9ponyme Boulanger \u00e9dit\u00e9 par la maison d'\u00e9dition Boulanger."}, {"text": "Quentin Jerome Tarantino na\u00eet le 27 mars 1963 \u00e0 Knoxville, dans le Tennessee. Il est le fils de Connie McHugh, une infirmi\u00e8re, n\u00e9e le 3 septembre 1946, et de Tony Tarantino, acteur et musicien amateur n\u00e9 \u00e0 New York. Ce dernier est d'origine italienne par son p\u00e8re ; sa m\u00e8re a des ascendances irlandaises et cherokees. Il est pr\u00e9nomm\u00e9 d'apr\u00e8s Quint Asper, le personnage jou\u00e9 par Burt Reynolds dans la s\u00e9rie Gunsmoke et Quentin Compson, personnage du roman Le Bruit et la Fureur. Son p\u00e8re quitte le domicile familial avant m\u00eame sa naissance. En 1965, sa m\u00e8re d\u00e9m\u00e9nage \u00e0 Torrance, dans la banlieue sud de Los Angeles, et se remarie avec Curtis Zastoupil, un pianiste de bar, qui lui fait d\u00e9couvrir le cin\u00e9ma. Le couple divorce alors que le jeune Quentin a une dizaine d'ann\u00e9es."}]}
|
cmarkea/distilcamembert-base-ner
| null |
[
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:Jean-Baptiste/wikiner_fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #onnx #safetensors #camembert #token-classification #fr #dataset-Jean-Baptiste/wikiner_fr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
DistilCamemBERT-NER
===================
We present DistilCamemBERT-NER, which is DistilCamemBERT fine-tuned for the NER (Named Entity Recognition) task for the French language. The work is inspired by Jean-Baptiste/camembert-ner, which is based on the CamemBERT model. The problem with CamemBERT-based models appears at scale, for example in the production phase: inference cost can become a technological issue. To counteract this effect, we propose this model, which divides the inference time by two at the same power consumption, thanks to DistilCamemBERT.
Dataset
-------
The dataset used is wikiner\_fr, which represents ~170k sentences labeled in 5 categories:
* PER: personality ;
* LOC: location ;
* ORG: organization ;
* MISC: miscellaneous entities (movie titles, books, etc.) ;
* O: background (Outside entity).
Evaluation results
------------------
Benchmark
---------
This model performance is compared to 2 reference models (see below) with the metric f1 score. For the mean inference time measure, an AMD Ryzen 5 4500U @ 2.3GHz with 6 cores was used:
How to use DistilCamemBERT-NER
------------------------------
### Optimum + ONNX
Citation
--------
|
[
"### Optimum + ONNX\n\n\nCitation\n--------"
] |
[
"TAGS\n#transformers #pytorch #tf #onnx #safetensors #camembert #token-classification #fr #dataset-Jean-Baptiste/wikiner_fr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Optimum + ONNX\n\n\nCitation\n--------"
] |
zero-shot-classification
|
transformers
|
DistilCamemBERT-NLI
===================
We present DistilCamemBERT-NLI, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Natural Language Inference (NLI) task for the French language, also known as recognizing textual entailment (RTE). This model is constructed on the XNLI dataset, which determines whether a premise entails, contradicts, or neither entails nor contradicts a hypothesis.
This model is close to [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), based on the [CamemBERT](https://huggingface.co/camembert-base) model. A drawback of CamemBERT-based models is their inference cost when scaled up for production, especially in a cross-encoding setting such as this task. To counteract this effect, we propose this model, which divides the inference time by 2 with the same power consumption, thanks to DistilCamemBERT.
Dataset
-------
The dataset XNLI from [FLUE](https://huggingface.co/datasets/flue) comprises 392,702 premise-hypothesis pairs for training and 5,010 pairs for testing. The goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B?) and is a classification task (given two sentences, predict one of three labels). Sentence A is called the *premise* and sentence B the *hypothesis*; the goal of the model is then to estimate:
$$P(premise=c\in\{contradiction, entailment, neutral\}\vert hypothesis)$$
Evaluation results
------------------
| **class** | **precision (%)** | **f1-score (%)** | **support** |
| :----------------: | :---------------: | :--------------: | :---------: |
| **global** | 77.70 | 77.45 | 5,010 |
| **contradiction** | 78.00 | 79.54 | 1,670 |
| **entailment** | 82.90 | 78.87 | 1,670 |
| **neutral** | 72.18 | 74.04 | 1,670 |
Benchmark
---------
We compare the [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) model to 2 other models working on the French language. The first one, [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), is based on the aptly named [CamemBERT](https://huggingface.co/camembert-base), the French RoBERTa model, and the second one, [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli), is based on [mDeBERTaV3](https://huggingface.co/microsoft/mdeberta-v3-base), a multilingual model. To compare the performances, we use accuracy and the [MCC (Matthews Correlation Coefficient)](https://en.wikipedia.org/wiki/Phi_coefficient). An **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** was used for the mean inference time measurement.
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **51.35** | 77.45 | 66.24 |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 105.0 | 81.72 | 72.67 |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 299.18 | **83.43** | **75.15** |
Zero-shot classification
------------------------
The main advantage of such modelization is to create a zero-shot classifier allowing text classification without training. This task can be summarized by:
$$P(hypothesis=i\in\mathcal{C}|premise)=\frac{e^{P(premise=entailment\vert hypothesis=i)}}{\sum_{j\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis=j)}}$$
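As an illustration, this scoring can be reproduced by hand: each candidate label is turned into a hypothesis, scored against the premise, and the per-candidate entailment probabilities are normalized with a softmax. The sketch below is ours; the premise, template and labels are examples, and it assumes the model config exposes an entailment label (otherwise the index must be set manually).
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "cmarkea/distilcamembert-base-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

premise = "Le camembert est délicieux."          # example text (ours)
template = "Ce texte parle de {}."               # example hypothesis template
labels = ["cuisine", "politique", "sport"]       # example candidate labels

# Find which output index corresponds to the entailment class (assumption: named in the config).
entail_id = next(i for i, name in model.config.id2label.items()
                 if name.lower().startswith("entail"))

with torch.no_grad():
    enc = tokenizer([premise] * len(labels),
                    [template.format(lab) for lab in labels],
                    return_tensors="pt", padding=True)
    logits = model(**enc).logits                      # (n_labels, 3)
    p_entail = logits.softmax(dim=-1)[:, entail_id]   # P(entailment) per candidate

# Softmax over the per-candidate entailment probabilities, as in the formula above.
scores = torch.softmax(p_entail, dim=0)
print(dict(zip(labels, scores.tolist())))
```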
For this part, we use two datasets. The first one, [allocine](https://huggingface.co/datasets/allocine), is used to train sentiment analysis models and comprises two classes, "positif" and "négatif", for movie reviews. Here we use "Ce commentaire est {}." as the hypothesis template and "positif" and "négatif" as candidate labels.
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **195.54** | 80.59 | 63.71 |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 378.39 | **86.37** | **73.74** |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 520.58 | 84.97 | 70.05 |
The second one, [mlsum](https://huggingface.co/datasets/mlsum), is used to train summarization models. To this end, we aggregate sub-topics, select a few of them, and use the article summaries to predict their topics. In this case, the hypothesis template used is "C'est un article traitant de {}." and the candidate labels are: "économie", "politique", "sport" and "science".
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **217.77** | **79.30** | **70.55** |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 448.27 | 70.7 | 64.10 |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 591.34 | 64.45 | 58.67 |
How to use DistilCamemBERT-NLI
------------------------------
```python
from transformers import pipeline
classifier = pipeline(
task='zero-shot-classification',
model="cmarkea/distilcamembert-base-nli",
tokenizer="cmarkea/distilcamembert-base-nli"
)
result = classifier(
sequences="Le style très cinéphile de Quentin Tarantino "
"se reconnaît entre autres par sa narration postmoderne "
"et non linéaire, ses dialogues travaillés souvent "
"émaillés de références à la culture populaire, et ses "
"scènes hautement esthétiques mais d'une violence "
"extrême, inspirées de films d'exploitation, d'arts "
"martiaux ou de western spaghetti.",
candidate_labels="cinéma, technologie, littérature, politique",
hypothesis_template="Ce texte parle de {}."
)
result
{"labels": ["cinéma",
"littérature",
"technologie",
"politique"],
"scores": [0.7164115309715271,
0.12878799438476562,
0.1092301607131958,
0.0455702543258667]}
```
### Optimum + ONNX
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
HUB_MODEL = "cmarkea/distilcamembert-base-nli"
tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
model = ORTModelForSequenceClassification.from_pretrained(HUB_MODEL)
onnx_qa = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
# Quantized onnx model
quantized_model = ORTModelForSequenceClassification.from_pretrained(
HUB_MODEL, file_name="model_quantized.onnx"
)
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
```
|
{"language": "fr", "license": "mit", "tags": ["zero-shot-classification", "sentence-similarity", "nli"], "datasets": ["flue"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "Selon certains physiciens, un univers parall\u00e8le, miroir du n\u00f4tre ou relevant de ce que l'on appelle la th\u00e9orie des branes, autoriserait des neutrons \u00e0 sortir de notre Univers pour y entrer \u00e0 nouveau. L'id\u00e9e a \u00e9t\u00e9 test\u00e9e une nouvelle fois avec le r\u00e9acteur nucl\u00e9aire de l'Institut Laue-Langevin \u00e0 Grenoble, plus pr\u00e9cis\u00e9ment en utilisant le d\u00e9tecteur de l'exp\u00e9rience Stereo initialement con\u00e7u pour chasser des particules de mati\u00e8re noire potentielles, les neutrinos st\u00e9riles.", "candidate_labels": "politique, science, sport, sant\u00e9", "hypothesis_template": "Ce texte parle de {}."}]}
|
cmarkea/distilcamembert-base-nli
| null |
[
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"camembert",
"text-classification",
"zero-shot-classification",
"sentence-similarity",
"nli",
"fr",
"dataset:flue",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #onnx #safetensors #camembert #text-classification #zero-shot-classification #sentence-similarity #nli #fr #dataset-flue #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
DistilCamemBERT-NLI
===================
We present DistilCamemBERT-NLI, which is DistilCamemBERT fine-tuned for the Natural Language Inference (NLI) task for the French language, also known as recognizing textual entailment (RTE). This model is constructed on the XNLI dataset, which determines whether a premise entails, contradicts, or neither entails nor contradicts a hypothesis.
This model is close to BaptisteDoyen/camembert-base-xnli, based on the CamemBERT model. A drawback of CamemBERT-based models is their inference cost when scaled up for production, especially in a cross-encoding setting such as this task. To counteract this effect, we propose this model, which divides the inference time by 2 with the same power consumption, thanks to DistilCamemBERT.
Dataset
-------
The dataset XNLI from FLUE comprises 392,702 premises with their hypothesis for the train and 5,010 couples for the test. The goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B?) and is a classification task (given two sentences, predict one of three labels). Sentence A is called *premise*, and sentence B is called *hypothesis*, then the goal of modelization is determined as follows:
$$P(premise=c\in\{contradiction, entailment, neutral\}\vert hypothesis)$$
Evaluation results
------------------
Benchmark
---------
We compare the DistilCamemBERT model to 2 other modelizations working on the french language. The first one BaptisteDoyen/camembert-base-xnli is based on well named CamemBERT, the french RoBERTa model and the second one MoritzLaurer/mDeBERTa-v3-base-mnli-xnli based on mDeBERTav3 a multilingual model. To compare the performances, the metrics of accuracy and MCC (Matthews Correlation Coefficient) were used. We used an AMD Ryzen 5 4500U @ 2.3GHz with 6 cores for mean inference time measure.
Zero-shot classification
------------------------
The main advantage of such modelization is to create a zero-shot classifier allowing text classification without training. This task can be summarized by:
$$P(hypothesis=i\in\mathcal{C}|premise)=\frac{e^{P(premise=entailment\vert hypothesis=i)}}{\sum_{j\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis=j)}}$$
For this part, we use two datasets, the first one: allocine used to train the sentiment analysis models. The dataset comprises two classes: "positif" and "négatif" appreciation of movie reviews. Here we use "Ce commentaire est {}." as the hypothesis template and "positif" and "négatif" as candidate labels.
The second one: mlsum used to train the summarization models. In this aim, we aggregate sub-topics and select a few of them. We use the articles summary part to predict their topics. In this case, the hypothesis template used is "C'est un article traitant de {}." and the candidate labels are: "économie", "politique", "sport" and "science".
How to use DistilCamemBERT-NLI
------------------------------
### Optimum + ONNX
Citation
--------
|
[
"### Optimum + ONNX\n\n\nCitation\n--------"
] |
[
"TAGS\n#transformers #pytorch #tf #onnx #safetensors #camembert #text-classification #zero-shot-classification #sentence-similarity #nli #fr #dataset-flue #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Optimum + ONNX\n\n\nCitation\n--------"
] |
question-answering
|
transformers
|
DistilCamemBERT-QA
==================
We present DistilCamemBERT-QA, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Question-Answering task for the French language. This model is built using two datasets, FQuAD v1.0 and Piaf, composed of contexts and questions with their answers inside the context.
This model is close to [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf), based on the [CamemBERT](https://huggingface.co/camembert-base) model. A drawback of CamemBERT-based models is their inference cost when scaled up for production, especially in a cross-encoding setting such as this task. To counteract this effect, we propose this model, which divides the inference time by 2 with the same power consumption, thanks to DistilCamemBERT.
Dataset
-------
The dataset comprises FQuAD v1.0 and Piaf with 24'566 questions and answers for the training set and 3'188 for the evaluation set.
Evaluation results and benchmark
--------------------------------
We compare [DistilCamemBERT-QA](https://huggingface.co/cmarkea/distilcamembert-base-qa) to two other models working on the French language. The first one, [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf), is based on the aptly named [CamemBERT](https://huggingface.co/camembert-base), the French RoBERTa model, and the second one, [fmikaelian/flaubert-base-uncased-squad](https://huggingface.co/fmikaelian/flaubert-base-uncased-squad), is based on [FlauBERT](https://huggingface.co/flaubert/flaubert_base_uncased), another French model, this time built on the BERT architecture.
For our benchmarks, we use exact match, a word-for-word comparison between the predicted answer and the ground truth. We also use the f1-score, which measures the overlap quality between the predicted answer and the ground truth. Finally, we use the inclusion score, which measures whether the ground-truth answer is included in the predicted answer. An **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** was used for the mean inference time measurement.
| **model** | **time (ms)** | **exact match (%)** | **f1-score (%)** | **inclusion-score (%)** |
| :--------------: | :-----------: | :--------------: | :------------: | :------------: |
| [cmarkea/distilcamembert-base-qa](https://huggingface.co/cmarkea/distilcamembert-base-qa) | **216.96** | 25.66 | 62.65 | 59.82 |
| [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) | 432.17 | **59.76** | **79.57** | **69.23** |
| [fmikaelian/flaubert-base-uncased-squad](https://huggingface.co/fmikaelian/flaubert-base-uncased-squad) | 875.84 | 0.22 | 5.21 | 3.68 |
The results of the FlauBERT model should be disregarded: the very low scores suggest a problem with that modeling rather than a meaningful comparison.
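To make these three metrics concrete, here is a minimal sketch of how they could be computed for a single prediction; the whitespace tokenization and function names are our assumptions, not the released evaluation code.
```python
def _words(text: str) -> list:
    # Naive whitespace tokenization, sufficient for a sketch.
    return text.lower().split()

def exact_match(prediction: str, truth: str) -> bool:
    # Word-for-word comparison between prediction and ground truth.
    return _words(prediction) == _words(truth)

def f1_score(prediction: str, truth: str) -> float:
    # Overlap quality between predicted and ground-truth words.
    pred, gold = _words(prediction), _words(truth)
    common = len(set(pred) & set(gold))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(gold)
    return 2 * precision * recall / (precision + recall)

def inclusion_score(prediction: str, truth: str) -> bool:
    # Is the ground-truth answer contained in the predicted answer?
    return " ".join(_words(truth)) in " ".join(_words(prediction))

print(exact_match("réalisateur et producteur américain", "réalisateur"),
      round(f1_score("réalisateur et producteur américain", "réalisateur"), 2),
      inclusion_score("réalisateur et producteur américain", "réalisateur"))
```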
How to use DistilCamemBERT-QA
------------------------------
```python
from transformers import pipeline
qa_engine = pipeline(
"question-answering",
model="cmarkea/distilcamembert-base-qa",
tokenizer="cmarkea/distilcamembert-base-qa"
)
result = qa_engine(
context="David Fincher, né le 28 août 1962 à Denver (Colorado), "
"est un réalisateur et producteur américain. Il est principalement "
"connu pour avoir réalisé les films Seven, Fight Club, L'Étrange "
"Histoire de Benjamin Button, The Social Network et Gone Girl qui "
"lui ont valu diverses récompenses et nominations aux Oscars du "
"cinéma ou aux Golden Globes. Réputé pour son perfectionnisme, il "
"peut tourner un très grand nombre de prises de ses plans et "
"séquences afin d'obtenir le rendu visuel qu'il désire. Il a "
"également développé et produit les séries télévisées House of "
"Cards (pour laquelle il remporte l'Emmy Award de la meilleure "
"réalisation pour une série dramatique en 2013) et Mindhunter, "
"diffusées sur Netflix.",
question="Quel est le métier de David Fincher ?"
)
result
{'score': 0.7981914281845093,
'start': 61,
'end': 98,
'answer': ' réalisateur et producteur américain.'}
```
### Optimum + ONNX
```python
from optimum.onnxruntime import ORTModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline
HUB_MODEL = "cmarkea/distilcamembert-base-qa"
tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
model = ORTModelForQuestionAnswering.from_pretrained(HUB_MODEL)
onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
# Quantized onnx model
quantized_model = ORTModelForQuestionAnswering.from_pretrained(
HUB_MODEL, file_name="model_quantized.onnx"
)
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
```
|
{"language": "fr", "license": "cc-by-nc-sa-3.0", "datasets": ["fquad", "piaf"], "widget": [{"text": "Quand et o\u00f9 est sorti Toy Story ?", "context": "Pixar Animation Studios, ou simplement Pixar dans le langage courant, est une soci\u00e9t\u00e9 am\u00e9ricaine de production de films en images tridimensionnelles de synth\u00e8se. Elle a acquis sa notori\u00e9t\u00e9 gr\u00e2ce \u00e0 Toy Story, premier long m\u00e9trage de ce type, sorti aux \u00c9tats-Unis en 1995. \u00c0 ce jour, le studio d'animation a remport\u00e9 dix-neuf Oscars, quatre Golden Globes et trois Grammy Awards ainsi que de nombreuses autres r\u00e9compenses. Le studio travaille avec PhotoRealistic RenderMan, sa propre version de l'interface de programmation de rendu RenderMan utilis\u00e9e pour cr\u00e9er des images de haute qualit\u00e9. Ses studios de production et son si\u00e8ge social se trouvent au Pixar Campus situ\u00e9 \u00e0 Emeryville pr\u00e8s de San Francisco en Californie."}, {"text": "Quel est le premier long m\u00e9trage du studio ?", "context": "Pixar Animation Studios, ou simplement Pixar dans le langage courant, est une soci\u00e9t\u00e9 am\u00e9ricaine de production de films en images tridimensionnelles de synth\u00e8se. Elle a acquis sa notori\u00e9t\u00e9 gr\u00e2ce \u00e0 Toy Story, premier long m\u00e9trage de ce type, sorti aux \u00c9tats-Unis en 1995. \u00c0 ce jour, le studio d'animation a remport\u00e9 dix-neuf Oscars, quatre Golden Globes et trois Grammy Awards ainsi que de nombreuses autres r\u00e9compenses. Le studio travaille avec PhotoRealistic RenderMan, sa propre version de l'interface de programmation de rendu RenderMan utilis\u00e9e pour cr\u00e9er des images de haute qualit\u00e9. Ses studios de production et son si\u00e8ge social se trouvent au Pixar Campus situ\u00e9 \u00e0 Emeryville pr\u00e8s de San Francisco en Californie."}]}
|
cmarkea/distilcamembert-base-qa
| null |
[
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"camembert",
"question-answering",
"fr",
"dataset:fquad",
"dataset:piaf",
"license:cc-by-nc-sa-3.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #onnx #safetensors #camembert #question-answering #fr #dataset-fquad #dataset-piaf #license-cc-by-nc-sa-3.0 #endpoints_compatible #region-us
|
DistilCamemBERT-QA
==================
We present DistilCamemBERT-QA, which is DistilCamemBERT fine-tuned for the Question-Answering task for the French language. This model is built using two datasets, FQuAD v1.0 and Piaf, composed of contexts and questions with their answers inside the context.
This model is close to etalab-ia/camembert-base-squadFR-fquad-piaf, based on the CamemBERT model. A drawback of CamemBERT-based models is their inference cost when scaled up for production, especially in a cross-encoding setting such as this task. To counteract this effect, we propose this model, which divides the inference time by 2 with the same power consumption, thanks to DistilCamemBERT.
Dataset
-------
The dataset comprises FQuAD v1.0 and Piaf with 24'566 questions and answers for the training set and 3'188 for the evaluation set.
Evaluation results and benchmark
--------------------------------
We compare DistilCamemBERT-QA to two other modelizations working on the french language. The first one etalab-ia/camembert-base-squadFR-fquad-piaf is based on well named CamemBERT, the french RoBERTa model and the second one fmikaelian/flaubert-base-uncased-squad is based on FlauBERT another french model based on BERT architecture this time.
For our benchmarks, we do a word-to-word comparison between words that are matching between the predicted answer and the ground truth. We also use f1-score, which measures the intersection quality between predicted responses and ground truth. Finally, we use inclusion score, which measures if the ground truth answer is included in the predicted answer. An AMD Ryzen 5 4500U @ 2.3GHz with 6 cores was used for the mean inference time measure.
The results of the FlauBERT model should be disregarded: the very low scores suggest a problem with that modeling rather than a meaningful comparison.
How to use DistilCamemBERT-QA
-----------------------------
### Optimum + ONNX
Citation
--------
|
[
"### Optimum + ONNX\n\n\nCitation\n--------"
] |
[
"TAGS\n#transformers #pytorch #tf #onnx #safetensors #camembert #question-answering #fr #dataset-fquad #dataset-piaf #license-cc-by-nc-sa-3.0 #endpoints_compatible #region-us \n",
"### Optimum + ONNX\n\n\nCitation\n--------"
] |
text-classification
|
transformers
|
DistilCamemBERT-Sentiment
=========================
We present DistilCamemBERT-Sentiment, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the sentiment analysis task for the French language. This model is built using two datasets, [Amazon Reviews](https://huggingface.co/datasets/amazon_reviews_multi) and [Allociné.fr](https://huggingface.co/datasets/allocine), to minimize bias. Indeed, Amazon reviews are similar in content and relatively short, contrary to Allociné reviews, which are long and rich texts.
This model is close to [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), based on the [CamemBERT](https://huggingface.co/camembert-base) model. A drawback of CamemBERT-based models is their inference cost when scaled up for production. To counteract this effect, we propose this model, which **divides the inference time by two** with the same power consumption, thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base).
Dataset
-------
The dataset comprises 204,993 reviews for training and 4,999 reviews for the test from Amazon, and 235,516 reviews for training and 4,729 for the test from the [Allociné website](https://www.allocine.fr/). The dataset is labeled into five categories:
* 1 star: represents a terrible appreciation,
* 2 stars: bad appreciation,
* 3 stars: neutral appreciation,
* 4 stars: good appreciation,
* 5 stars: excellent appreciation.
Evaluation results
------------------
In addition to accuracy (called here *exact accuracy*), and in order to be robust to +/-1 star estimation errors, we take the following definition as a performance measure:
$$\mathrm{top\!-\!2\; acc}=\frac{1}{|\mathcal{O}|}\sum_{i\in\mathcal{O}}\sum_{0\leq l < 2}\mathbb{1}(\hat{f}_{i,l}=y_i)$$
where \\(\hat{f}_l\\) is the l-th largest predicted label, \\(y\\) the true label, \\(\mathcal{O}\\) is the test set of the observations and \\(\mathbb{1}\\) is the indicator function.
| **class** | **exact accuracy (%)** | **top-2 acc (%)** | **support** |
| :---------: | :--------------------: | :---------------: | :---------: |
| **global** | 61.01 | 88.80 | 9,698 |
| **1 star** | 87.21 | 77.17 | 1,905 |
| **2 stars** | 79.19 | 84.75 | 1,935 |
| **3 stars** | 77.85 | 78.98 | 1,974 |
| **4 stars** | 78.61 | 90.22 | 1,952 |
| **5 stars** | 85.96 | 82.92 | 1,932 |
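For illustration, the top-2 accuracy defined above could be computed from the classifier's score lists as in the sketch below; the helper and the example values are ours, assuming the five-label output format shown in the usage section further down.
```python
def top2_accuracy(predictions, true_labels):
    """predictions: one list of {'label', 'score'} dicts per review,
    true_labels: gold labels such as '3 stars'."""
    hits = 0
    for scores, gold in zip(predictions, true_labels):
        # Keep the two labels with the highest predicted scores.
        top2 = sorted(scores, key=lambda d: d["score"], reverse=True)[:2]
        hits += any(d["label"] == gold for d in top2)
    return hits / len(true_labels)

preds = [[{"label": "3 stars", "score": 0.36}, {"label": "4 stars", "score": 0.32},
          {"label": "2 stars", "score": 0.14}, {"label": "5 stars", "score": 0.13},
          {"label": "1 star", "score": 0.05}]]
print(top2_accuracy(preds, ["4 stars"]))  # 1.0: the true label is among the two best scores
```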
Benchmark
---------
This model is compared to 3 reference models (see below). As the models do not share the exact same definition of targets, we detail the performance measure used for each. An **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** was used for the mean inference time measurement.
#### bert-base-multilingual-uncased-sentiment
[nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) is based on the multilingual, uncased version of BERT. This sentiment analyzer is trained on Amazon reviews, similar to our model. Hence the targets and their definitions are the same.
| **model** | **time (ms)** | **exact accuracy (%)** | **top-2 acc (%)** |
| :-------: | :-----------: | :--------------------: | :---------------: |
| [cmarkea/distilcamembert-base-sentiment](https://huggingface.co/cmarkea/distilcamembert-base-sentiment) | **95.56** | **61.01** | **88.80** |
| [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) | 187.70 | 54.41 | 82.82 |
#### tf-allociné and barthez-sentiment-classification
[tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), based on the [CamemBERT](https://huggingface.co/camembert-base) model, and [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification), based on [BARThez](https://huggingface.co/moussaKam/barthez), share the same binary class definition. To bring this back to a two-class problem, we only consider the *"1 star"* and *"2 stars"* labels for the *negative* sentiment and *"4 stars"* and *"5 stars"* for the *positive* sentiment. We exclude *"3 stars"*, which can be interpreted as a *neutral* class. In this context, the problem of +/-1 star estimation errors disappears, and we use the classical accuracy definition.
| **model** | **time (ms)** | **exact accuracy (%)** |
| :-------: | :-----------: | :--------------------: |
| [cmarkea/distilcamembert-base-sentiment](https://huggingface.co/cmarkea/distilcamembert-base-sentiment) | **95.56** | **97.52** |
| [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine) | 329.74 | 95.69 |
| [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification) | 197.95 | 94.29 |
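One possible implementation of this mapping is sketched below; it is our own reading of the rule described above (top predicted label collapsed to a binary class, neutral reviews excluded), not code from the benchmark.
```python
def to_binary_sentiment(scores):
    """scores: list of {'label', 'score'} dicts from the 5-star classifier.
    Returns 'negative', 'positive', or None when the top label is the neutral '3 stars'."""
    best = max(scores, key=lambda d: d["score"])["label"]
    if best in ("1 star", "2 stars"):
        return "negative"
    if best in ("4 stars", "5 stars"):
        return "positive"
    return None  # '3 stars' reviews are excluded from the binary evaluation
```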
How to use DistilCamemBERT-Sentiment
------------------------------------
```python
from transformers import pipeline
analyzer = pipeline(
task='text-classification',
model="cmarkea/distilcamembert-base-sentiment",
tokenizer="cmarkea/distilcamembert-base-sentiment"
)
result = analyzer(
"J'aime me promener en forêt même si ça me donne mal aux pieds.",
return_all_scores=True
)
result
[{'label': '1 star',
'score': 0.047529436647892},
{'label': '2 stars',
'score': 0.14150355756282806},
{'label': '3 stars',
'score': 0.3586442470550537},
{'label': '4 stars',
'score': 0.3181498646736145},
{'label': '5 stars',
'score': 0.13417290151119232}]
```
### Optimum + ONNX
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
HUB_MODEL = "cmarkea/distilcamembert-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
model = ORTModelForSequenceClassification.from_pretrained(HUB_MODEL)
onnx_qa = pipeline("text-classification", model=model, tokenizer=tokenizer)
# Quantized onnx model
quantized_model = ORTModelForSequenceClassification.from_pretrained(
HUB_MODEL, file_name="model_quantized.onnx"
)
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
```
|
{"language": "fr", "license": "mit", "datasets": ["amazon_reviews_multi", "allocine"], "widget": [{"text": "Je pensais lire un livre nul, mais finalement je l'ai trouv\u00e9 super !"}, {"text": "Cette banque est tr\u00e8s bien, mais elle n'offre pas les services de paiements sans contact."}, {"text": "Cette banque est tr\u00e8s bien et elle offre en plus les services de paiements sans contact."}]}
|
cmarkea/distilcamembert-base-sentiment
| null |
[
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"camembert",
"text-classification",
"fr",
"dataset:amazon_reviews_multi",
"dataset:allocine",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #onnx #safetensors #camembert #text-classification #fr #dataset-amazon_reviews_multi #dataset-allocine #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
DistilCamemBERT-Sentiment
=========================
We present DistilCamemBERT-Sentiment, which is DistilCamemBERT fine-tuned for the sentiment analysis task for the French language. This model is built using two datasets, Amazon Reviews and Allociné.fr, to minimize bias. Indeed, Amazon reviews are similar in content and relatively short, contrary to Allociné reviews, which are long and rich texts.
This model is close to tblard/tf-allocine, based on the CamemBERT model. A drawback of CamemBERT-based models is their inference cost when scaled up for production. To counteract this effect, we propose this model, which divides the inference time by two with the same power consumption, thanks to DistilCamemBERT.
Dataset
-------
The dataset comprises 204,993 reviews for training and 4,999 reviews for the test from Amazon, and 235,516 and 4,729 critics from Allocine website. The dataset is labeled into five categories:
* 1 star: represents a terrible appreciation,
* 2 stars: bad appreciation,
* 3 stars: neutral appreciation,
* 4 stars: good appreciation,
* 5 stars: excellent appreciation.
Evaluation results
------------------
In addition to accuracy (called here *exact accuracy*), and in order to be robust to +/-1 star estimation errors, we take the following definition as a performance measure:
$$\mathrm{top\!-\!2\; acc}=\frac{1}{|\mathcal{O}|}\sum_{i\in\mathcal{O}}\sum_{0\leq l < 2}\mathbb{1}(\hat{f}_{i,l}=y_i)$$
where \(\hat{f}_l\) is the l-th largest predicted label, \(y\) the true label, \(\mathcal{O}\) is the test set of the observations and \(\mathbb{1}\) is the indicator function.
Benchmark
---------
This model is compared to 3 reference models (see below). As each model doesn't have the exact definition of targets, we detail the performance measure used for each. An AMD Ryzen 5 4500U @ 2.3GHz with 6 cores was used for the mean inference time measure.
#### bert-base-multilingual-uncased-sentiment
nlptown/bert-base-multilingual-uncased-sentiment is based on BERT model in the multilingual and uncased version. This sentiment analyzer is trained on Amazon reviews, similar to our model. Hence the targets and their definitions are the same.
#### tf-allociné and barthez-sentiment-classification
tblard/tf-allocine based on CamemBERT model and moussaKam/barthez-sentiment-classification based on BARThez use the same bi-class definition between them. To bring this back to a two-class problem, we will only consider the *"1 star"* and *"2 stars"* labels for the *negative* sentiments and *"4 stars"* and *"5 stars"* for *positive* sentiments. We exclude the *"3 stars"* which can be interpreted as a *neutral* class. In this context, the problem of +/-1 star estimation errors disappears. Then we use only the classical accuracy definition.
How to use DistilCamemBERT-Sentiment
------------------------------------
### Optimum + ONNX
Citation
--------
|
[
"#### bert-base-multilingual-uncased-sentiment\n\n\nnlptown/bert-base-multilingual-uncased-sentiment is based on BERT model in the multilingual and uncased version. This sentiment analyzer is trained on Amazon reviews, similar to our model. Hence the targets and their definitions are the same.",
"#### tf-allociné and barthez-sentiment-classification\n\n\ntblard/tf-allocine based on CamemBERT model and moussaKam/barthez-sentiment-classification based on BARThez use the same bi-class definition between them. To bring this back to a two-class problem, we will only consider the *\"1 star\"* and *\"2 stars\"* labels for the *negative* sentiments and *\"4 stars\"* and *\"5 stars\"* for *positive* sentiments. We exclude the *\"3 stars\"* which can be interpreted as a *neutral* class. In this context, the problem of +/-1 star estimation errors disappears. Then we use only the classical accuracy definition.\n\n\n\nHow to use DistilCamemBERT-Sentiment\n------------------------------------",
"### Optimum + ONNX\n\n\nCitation\n--------"
] |
[
"TAGS\n#transformers #pytorch #tf #onnx #safetensors #camembert #text-classification #fr #dataset-amazon_reviews_multi #dataset-allocine #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### bert-base-multilingual-uncased-sentiment\n\n\nnlptown/bert-base-multilingual-uncased-sentiment is based on BERT model in the multilingual and uncased version. This sentiment analyzer is trained on Amazon reviews, similar to our model. Hence the targets and their definitions are the same.",
"#### tf-allociné and barthez-sentiment-classification\n\n\ntblard/tf-allocine based on CamemBERT model and moussaKam/barthez-sentiment-classification based on BARThez use the same bi-class definition between them. To bring this back to a two-class problem, we will only consider the *\"1 star\"* and *\"2 stars\"* labels for the *negative* sentiments and *\"4 stars\"* and *\"5 stars\"* for *positive* sentiments. We exclude the *\"3 stars\"* which can be interpreted as a *neutral* class. In this context, the problem of +/-1 star estimation errors disappears. Then we use only the classical accuracy definition.\n\n\n\nHow to use DistilCamemBERT-Sentiment\n------------------------------------",
"### Optimum + ONNX\n\n\nCitation\n--------"
] |
fill-mask
|
transformers
|
DistilCamemBERT
===============
We present a distilled version of the aptly named [CamemBERT](https://huggingface.co/camembert-base), a French RoBERTa model, which we call DistilCamemBERT. The aim of distillation is to drastically reduce the complexity of the model while preserving its performance. The proof of concept is shown in the [DistilBERT paper](https://arxiv.org/abs/1910.01108) and the code used for the training is inspired by the code of [DistilBERT](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation).
Loss function
-------------
The training of the distilled model (student model) is designed to be as close as possible to that of the original model (teacher model). To achieve this, the loss function is composed of 3 parts:
* DistilLoss: a distillation loss which measures the similarity between the output probabilities of the student and teacher models, with a cross-entropy loss on the MLM task ;
* CosineLoss: a cosine embedding loss. This loss function is applied on the last hidden layers of the student and teacher models to guarantee collinearity between them ;
* MLMLoss: and finally a Masked Language Modeling (MLM) task loss to train the student model on the original task of the teacher model.
The final loss function is a combination of these three loss functions. We use the following weighting:
$$Loss = 0.5 \times DistilLoss + 0.3 \times CosineLoss + 0.2 \times MLMLoss$$
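Below is a minimal PyTorch sketch of how such a combination can be written. It only illustrates the weighting of the three terms; the temperature, reductions and variable shapes are our assumptions, not the released training code.
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, student_hidden, teacher_hidden,
                      labels, temperature=2.0):
    """student_logits/teacher_logits: (N, vocab_size) MLM logits,
    student_hidden/teacher_hidden: (N, hidden_size) last hidden states,
    labels: (N,) MLM targets with -100 on non-masked positions."""
    # DistilLoss: match the teacher's soft MLM distribution (temperature is an assumption).
    distil = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # CosineLoss: align the last hidden states of student and teacher.
    target = torch.ones(student_hidden.size(0), device=student_hidden.device)
    cosine = F.cosine_embedding_loss(student_hidden, teacher_hidden, target)
    # MLMLoss: standard masked language modeling loss on the student.
    mlm = F.cross_entropy(student_logits, labels, ignore_index=-100)
    # Weighting stated above for DistilCamemBERT.
    return 0.5 * distil + 0.3 * cosine + 0.2 * mlm
```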
Dataset
-------
To limit the bias between the student and teacher models, the dataset used for the DistilCamemBERT training is the same as the camembert-base training one: OSCAR. The French part of this dataset represents approximately 140 GB on disk.
Training
--------
We pre-trained the model on an NVIDIA Titan RTX for 18 days.
Evaluation results
------------------
| Dataset name | f1-score |
| :----------: | :------: |
| [FLUE](https://huggingface.co/datasets/flue) CLS | 83% |
| [FLUE](https://huggingface.co/datasets/flue) PAWS-X | 77% |
| [FLUE](https://huggingface.co/datasets/flue) XNLI | 77% |
| [wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr) NER | 98% |
How to use DistilCamemBERT
--------------------------
Load DistilCamemBERT and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cmarkea/distilcamembert-base")
model = AutoModel.from_pretrained("cmarkea/distilcamembert-base")
model.eval()
...
```
Filling masks using pipeline:
```python
from transformers import pipeline
model_fill_mask = pipeline("fill-mask", model="cmarkea/distilcamembert-base", tokenizer="cmarkea/distilcamembert-base")
results = model_fill_mask("Le camembert est <mask> :)")
results
[{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.3878222405910492, 'token': 7200},
{'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.06469205021858215, 'token': 2183},
{'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.04534877464175224, 'token': 1654},
{'sequence': '<s> Le camembert est succulent :)</s>', 'score': 0.04128391295671463, 'token': 26202},
{'sequence': '<s> Le camembert est magnifique :)</s>', 'score': 0.02425697259604931, 'token': 1509}]
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
```
|
{"language": "fr", "license": "mit", "datasets": ["oscar"], "widget": [{"text": "J'aime lire les <mask> de SF."}]}
|
cmarkea/distilcamembert-base
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1910.01108",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.01108"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #safetensors #camembert #fill-mask #fr #dataset-oscar #arxiv-1910.01108 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
DistilCamemBERT
===============
We present a distilled version of the aptly named CamemBERT, a French RoBERTa model, which we call DistilCamemBERT. The aim of distillation is to drastically reduce the complexity of the model while preserving its performance. The proof of concept is shown in the DistilBERT paper and the code used for the training is inspired by the code of DistilBERT.
Loss function
-------------
The training of the distilled model (student model) is designed to be as close as possible to that of the original model (teacher model). To achieve this, the loss function is composed of 3 parts:
* DistilLoss: a distillation loss which measures the similarity between the output probabilities of the student and teacher models, with a cross-entropy loss on the MLM task ;
* CosineLoss: a cosine embedding loss. This loss function is applied on the last hidden layers of the student and teacher models to guarantee collinearity between them ;
* MLMLoss: and finally a Masked Language Modeling (MLM) task loss to train the student model on the original task of the teacher model.
The final loss function is a combination of these three loss functions. We use the following weighting:
$$Loss = 0.5 \times DistilLoss + 0.3 \times CosineLoss + 0.2 \times MLMLoss$$
Dataset
-------
To limit the bias between the student and teacher models, the dataset used for the DistilCamemBERT training is the same as the camembert-base training one: OSCAR. The French part of this dataset represents approximately 140 GB on disk.
Training
--------
We pre-trained the model on an NVIDIA Titan RTX for 18 days.
Evaluation results
------------------
How to use DistilCamemBERT
--------------------------
Load DistilCamemBERT and its sub-word tokenizer :
Filling masks using pipeline :
Citation
--------
|
[] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #camembert #fill-mask #fr #dataset-oscar #arxiv-1910.01108 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8651
- Matthews Correlation: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
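These settings can be reproduced with the `Trainer` API as in the sketch below; this is a reconstruction under the listed hyperparameters, not the exact script that produced this card (the output directory and metric helper are assumptions).
```python
import numpy as np
from datasets import load_dataset, load_metric
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

raw = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoded = raw.map(lambda x: tokenizer(x["sentence"], truncation=True), batched=True)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
metric = load_metric("matthews_correlation")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return metric.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    evaluation_strategy="epoch",  # Adam betas/epsilon and the linear scheduler are the defaults
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```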
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5233 | 1.0 | 535 | 0.5353 | 0.4004 |
| 0.3497 | 2.0 | 1070 | 0.5165 | 0.5076 |
| 0.2386 | 3.0 | 1605 | 0.6661 | 0.5161 |
| 0.1745 | 4.0 | 2140 | 0.7730 | 0.5406 |
| 0.1268 | 5.0 | 2675 | 0.8651 | 0.5475 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5474713423103301, "name": "Matthews Correlation"}]}]}]}
|
cnu/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8651
* Matthews Correlation: 0.5475
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2
* Datasets 1.18.3
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
fill-mask
|
transformers
|
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021), using the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm`  | ECtHR            | `en`               |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS           | `en`               |
| `coastalcph/fairlex-fscs-minilm`   | FSCS             | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm`   | CAIL             | `zh`               |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-cail-minlm")
model = AutoModel.from_pretrained("coastalcph/fairlex-cail-minlm")
```
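As a quick check of the checkpoint, masked-token prediction can be run as follows; this is a sketch of ours, and the example sentence is an illustration rather than text from the training corpus.
```python
from transformers import pipeline

# Sketch: masked-token prediction with the released checkpoint.
fill = pipeline("fill-mask", model="coastalcph/fairlex-cail-minilm",
                tokenizer="coastalcph/fairlex-cail-minilm")
mask = fill.tokenizer.mask_token
# Example sentence (ours), roughly: "The defendant <mask> the facts of the crime."
print(fill(f"被告人对犯罪事实{mask}。", top_k=5))
```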
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
{"language": "zh", "license": "cc-by-nc-sa-4.0", "tags": ["legal", "fairlex"], "pipeline_tag": "fill-mask", "widget": [{"text": "\u4e0a\u8ff0\u4e8b\u5b9e\uff0c\u88ab\u544a\u4eba\u5728\u5ead\u5ba1\u8fc7\u7a0b\u4e2d\u4ea6\u65e0\u5f02\u8bae\uff0c\u4e14\u6709<mask>\u7684\u9648\u8ff0\uff0c\u73b0\u573a\u8fa8\u8ba4\u7b14\u5f55\u53ca\u7167\u7247\uff0c\u88ab\u544a\u4eba\u7684\u524d\u79d1\u5211\u4e8b\u5224\u51b3\u4e66\uff0c\u91ca\u653e\u8bc1\u660e\u6750\u6599\uff0c\u6293\u83b7\u7ecf\u8fc7\uff0c\u88ab\u544a\u4eba\u7684\u4f9b\u8ff0\u53ca\u8eab\u4efd\u8bc1\u660e\u7b49\u8bc1\u636e\u8bc1\u5b9e\uff0c\u8db3\u4ee5\u8ba4\u5b9a\u3002"}]}
|
coastalcph/fairlex-cail-minilm
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"legal",
"fairlex",
"zh",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #xlm-roberta #fill-mask #legal #fairlex #zh #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
FairLex: A multilingual benchmark for evaluating fairness in legal text processing
==================================================================================
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
Pre-training details
--------------------
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021), using the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
Models list
-----------
Model name: 'coastalcph/fairlex-ecthr-minilm', Training corpora: ECtHR, Language: 'en'
Model name: 'coastalcph/fairlex-scotus-minilm', Training corpora: SCOTUS, Language: 'en'
Model name: 'coastalcph/fairlex-fscs-minilm', Training corpora: FSCS, Language: ['de', 'fr', 'it']
Model name: 'coastalcph/fairlex-cail-minilm', Training corpora: CAIL, Language: 'zh'
Load Pretrained Model
---------------------
Evaluation on downstream tasks
------------------------------
Consider the experiments in the article:
*Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.*
Author - Publication
--------------------
Ilias Chalkidis on behalf of CoAStaL NLP Group
| Github: @ilias.chalkidis | Twitter: @KiddoThe2B |
|
[] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #legal #fairlex #zh #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021), using the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm`  | ECtHR            | `en`               |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS           | `en`               |
| `coastalcph/fairlex-fscs-minilm`   | FSCS             | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm`   | CAIL             | `zh`               |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-ecthr-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-ecthr-minilm")
```
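For downstream use, document embeddings can be extracted as in the sketch below; mean pooling and the example sentence are our choices, not prescribed by the paper.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-ecthr-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-ecthr-minilm")
model.eval()

texts = ["The applicant complained under Article 3 of the Convention."]  # example text (ours)
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state        # (batch, seq_len, 384)

# Mean-pool over non-padding tokens to get one 384-d vector per document.
mask = enc["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(embeddings.shape)
```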
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
{"language": "en", "license": "cc-by-nc-sa-4.0", "tags": ["legal", "fairlex"], "pipeline_tag": "fill-mask", "widget": [{"text": "The applicant submitted that her husband was subjected to treatment amounting to <mask> whilst in the custody of Adana Security Directorate"}]}
|
coastalcph/fairlex-ecthr-minilm
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"legal",
"fairlex",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #legal #fairlex #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
FairLex: A multilingual benchmark for evaluating fairness in legal text processing
==================================================================================
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
Pre-training details
--------------------
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
Models list
-----------
Model name: 'coastalcph/fairlex-ecthr-minilm', Training corpora: ECtHR, Language: 'en'
Model name: 'coastalcph/fairlex-scotus-minilm', Training corpora: SCOTUS, Language: 'en'
Model name: 'coastalcph/fairlex-fscs-minilm', Training corpora: FSCS, Language: ['de', 'fr', 'it']
Model name: 'coastalcph/fairlex-cail-minilm', Training corpora: CAIL, Language: 'zh'
Load Pretrained Model
---------------------
Evaluation on downstream tasks
------------------------------
Consider the experiments in the article:
*Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.*
Author - Publication
--------------------
Ilias Chalkidis on behalf of CoAStaL NLP Group
| Github: @ilias.chalkidis | Twitter: @KiddoThe2B |
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #legal #fairlex #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` |
| `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm` | CAIL | `zh` |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-fscs-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-fscs-minilm")
```
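As a quick sanity check of the trilingual checkpoint, you can also query it through the fill-mask pipeline; the German and French sentences below are made-up examples, not drawn from FSCS.
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="coastalcph/fairlex-fscs-minilm",
    tokenizer="coastalcph/fairlex-fscs-minilm",
)

# The tokenizer is XLM-R based, so the mask token is "<mask>".
print(fill_mask("Das Gericht weist die <mask> ab."))
print(fill_mask("Le tribunal rejette le <mask>."))
```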
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
{"language": ["de", "fr", "it"], "license": "cc-by-nc-sa-4.0", "tags": ["legal", "fairlex"], "pipeline_tag": "fill-mask", "widget": [{"text": "Aus seinem damaligen strafbaren Verhalten resultierte eine Forderung der Nachlassverwaltung eines <mask>, wor\u00fcber eine aussergerichtliche Vereinbarung \u00fcber Fr. 500'000."}, {"text": " Elle avait pour but social les <mask> dans le domaine des changes, en particulier l'exploitation d'une plateforme internet."}, {"text": "Il Pretore ha accolto la petizione con sentenza 16 luglio 2015, accordando all'attore l'importo <mask>, con interessi di mora a partire dalla notifica del precetto esecutivo, e ha rigettato in tale misura l'opposizione interposta a quest'ultimo."}]}
|
coastalcph/fairlex-fscs-minilm
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"legal",
"fairlex",
"de",
"fr",
"it",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de",
"fr",
"it"
] |
TAGS
#transformers #pytorch #xlm-roberta #fill-mask #legal #fairlex #de #fr #it #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
FairLex: A multilingual benchmark for evaluating fairness in legal text processing
==================================================================================
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
Pre-training details
--------------------
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
Models list
-----------
Model name: 'coastalcph/fairlex-ecthr-minilm', Training corpora: ECtHR, Language: 'en'
Model name: 'coastalcph/fairlex-scotus-minilm', Training corpora: SCOTUS, Language: 'en'
Model name: 'coastalcph/fairlex-fscs-minilm', Training corpora: FSCS, Language: ['de', 'fr', 'it']
Model name: 'coastalcph/fairlex-cail-minilm', Training corpora: CAIL, Language: 'zh'
Load Pretrained Model
---------------------
Evaluation on downstream tasks
------------------------------
Consider the experiments in the article:
*Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.*
Author - Publication
--------------------
Ilias Chalkidis on behalf of CoAStaL NLP Group
| Github: @ilias.chalkidis | Twitter: @KiddoThe2B |
|
[] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #legal #fairlex #de #fr #it #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` |
| `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm` | CAIL | `zh` |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-scotus-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-scotus-minilm")
```
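If you only need sentence representations (for example, as features for a separate classifier), a simple mean-pooling sketch is shown below; the example sentence and the pooling choice are illustrative rather than prescribed by the paper.
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "coastalcph/fairlex-scotus-minilm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "Because the Court granted certiorari before judgment, it reviews the appeals directly."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single sentence vector.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # (1, 384) for this mini-sized model
```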
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
{"language": "en", "license": "cc-by-nc-sa-4.0", "tags": ["legal", "fairlex"], "pipeline_tag": "fill-mask", "widget": [{"text": "Because the Court granted <mask> before judgment, the Court effectively stands in the shoes of the Court of Appeals and reviews the defendants\u2019 appeals."}]}
|
coastalcph/fairlex-scotus-minilm
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"legal",
"fairlex",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #legal #fairlex #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
FairLex: A multilingual benchmark for evaluating fairness in legal text processing
==================================================================================
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
Pre-training details
--------------------
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
Models list
-----------
Model name: 'coastalcph/fairlex-ecthr-minilm', Training corpora: ECtHR, Language: 'en'
Model name: 'coastalcph/fairlex-scotus-minilm', Training corpora: SCOTUS, Language: 'en'
Model name: 'coastalcph/fairlex-fscs-minilm', Training corpora: FSCS, Language: ['de', 'fr', 'it']
Model name: 'coastalcph/fairlex-cail-minilm', Training corpora: CAIL, Language: 'zh'
Load Pretrained Model
---------------------
Evaluation on downstream tasks
------------------------------
Consider the experiments in the article:
*Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.*
Author - Publication
--------------------
Ilias Chalkidis on behalf of CoAStaL NLP Group
| Github: @ilias.chalkidis | Twitter: @KiddoThe2B |
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #legal #fairlex #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Kohaku DialoGPT Model
|
{"tags": ["conversational"]}
|
cocoaclef/DialoGPT-small-kohaku
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Kohaku DialoGPT Model
|
[
"# Kohaku DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Kohaku DialoGPT Model"
] |
text-generation
|
transformers
|
# Rick Morty DialoGPT Model
|
{"tags": ["conversational"]}
|
codealtgeek/DiabloGPT-medium-rickmorty
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Morty DialoGPT Model
|
[
"# Rick Morty DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Morty DialoGPT Model"
] |
automatic-speech-recognition
|
transformers
|
# HIYACCENT: An Improved Nigerian-Accented Speech Recognition System Based on Contrastive Learning
The global objective of this research was to develop a more robust model for Nigerian English speakers, whose English pronunciation is heavily affected by their mother tongue. For this, the Wav2Vec-HIYACCENT model was proposed, which introduces a new layer on top of Facebook's Wav2vec to capture the disparity between the baseline model and Nigerian English speech. A CTC loss was also inserted on top of the model, adding flexibility to the speech-text alignment. This resulted in over 20% improvement in performance for NAE.
Fine-tuned from facebook/wav2vec2-large on English using the UISpeech Corpus. When using this model, make sure that your speech input is sampled at 16 kHz.
The script used for training can be found here: https://github.com/amceejay/HIYACCENT-NE-Speech-Recognition-System
## Usage
The model can be used directly (without a language model) as follows.
### Using the ASRecognition library:
```python
from asrecognition import ASREngine

# "en" selects English; the model expects 16 kHz audio.
asr = ASREngine("en", model_path="codeceejay/HIYACCENT_Wav2Vec2")

audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```
### Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "en"
MODEL_ID = "codeceejay/HIYACCENT_Wav2Vec2"
SAMPLES = 10

# You can use common_voice/timit; Nigerian-accented speech is also available here: https://openslr.org/70/
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the dataset: read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
|
{}
|
codeceejay/HIYACCENT_Wav2Vec2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
|
HIYACCENT: An Improved Nigerian-Accented Speech Recognition System Based on Contrastive Learning
The global objective of this research was to develop a more robust model for the Nigerian English Speakers whose English pronunciations are heavily affected by their mother tongue. For this, the Wav2Vec-HIYACCENT model was proposed which introduced a new layer to the Novel Facebook Wav2vec to capture the disparity between the baseline model and Nigerian English Speeches. A CTC loss was also inserted on top of the model which adds flexibility to the speech-text alignment. This resulted in over 20% improvement in the performance for NAE.T
Fine-tuned facebook/wav2vec2-large on English using the UISpeech Corpus. When using this model, make sure that your speech input is sampled at 16kHz.
The script used for training can be found here: URL
##Usage: The model can be used directly (without a language model) as follows...
#Using the ASRecognition library:
from asrecognition import ASREngine
asr = ASREngine("fr", model_path="codeceejay/HIYACCENT_Wav2Vec2")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
##Writing your own inference speech:
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "codeceejay/HIYACCENT_Wav2Vec2"
SAMPLES = 10
#You can use common_voice/timit or Nigerian Accented Speeches can also be found here: URL
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = URL(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = URL(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
|
[
"# Preprocessing the datasets.",
"# We need to read the audio files as arrays\ndef speech_file_to_array_fn(batch):\n speech_array, sampling_rate = URL(batch[\"path\"], sr=16_000)\n batch[\"speech\"] = speech_array\n batch[\"sentence\"] = batch[\"sentence\"].upper()\n return batch\n\ntest_dataset = test_dataset.map(speech_file_to_array_fn)\ninputs = processor(test_dataset[\"speech\"], sampling_rate=16_000, return_tensors=\"pt\", padding=True)\n\nwith torch.no_grad():\n logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits\n\npredicted_ids = URL(logits, dim=-1)\npredicted_sentences = processor.batch_decode(predicted_ids)\n\nfor i, predicted_sentence in enumerate(predicted_sentences):\n print(\"-\" * 100)\n print(\"Reference:\", test_dataset[i][\"sentence\"])\n print(\"Prediction:\", predicted_sentence)"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n",
"# Preprocessing the datasets.",
"# We need to read the audio files as arrays\ndef speech_file_to_array_fn(batch):\n speech_array, sampling_rate = URL(batch[\"path\"], sr=16_000)\n batch[\"speech\"] = speech_array\n batch[\"sentence\"] = batch[\"sentence\"].upper()\n return batch\n\ntest_dataset = test_dataset.map(speech_file_to_array_fn)\ninputs = processor(test_dataset[\"speech\"], sampling_rate=16_000, return_tensors=\"pt\", padding=True)\n\nwith torch.no_grad():\n logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits\n\npredicted_ids = URL(logits, dim=-1)\npredicted_sentences = processor.batch_decode(predicted_ids)\n\nfor i, predicted_sentence in enumerate(predicted_sentences):\n print(\"-\" * 100)\n print(\"Reference:\", test_dataset[i][\"sentence\"])\n print(\"Prediction:\", predicted_sentence)"
] |
null |
transformers
|
# Calbert: a Catalan Language Model
## Introduction
CALBERT is an open-source language model for Catalan pretrained on the ALBERT architecture.
It is now available on Hugging Face in both its `tiny-uncased` and `base-uncased` (the one you're looking at) versions, and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).
For further information or requests, please go to the [GitHub repository](https://github.com/codegram/calbert).
## Pre-trained models
| Model | Arch. | Training data |
| ----------------------------------- | -------------- | ---------------------- |
| `codegram` / `calbert-tiny-uncased` | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram` / `calbert-base-uncased` | Base (uncased) | OSCAR (4.3 GB of text) |
## How to use Calbert with HuggingFace
#### Load Calbert and its tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-base-uncased")
model = AutoModel.from_pretrained("codegram/calbert-base-uncased")
model.eval()  # disable dropout (or leave in train mode to finetune)
```
#### Filling masks using pipeline
```python
from transformers import pipeline
calbert_fill_mask = pipeline("fill-mask", model="codegram/calbert-base-uncased", tokenizer="codegram/calbert-base-uncased")
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.614592969417572, 'token': 61},
# {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.06058056280016899, 'token': 4867},
# {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.017195818945765495, 'token': 43},
# {'sequence': "[CLS] m'agrada llegir aixo[SEP]", 'score': 0.016321714967489243, 'token': 684},
# {'sequence': "[CLS] m'agrada escriure aixo[SEP]", 'score': 0.012185849249362946, 'token': 1306}]
```
#### Extract contextual embedding features from Calbert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: Can be done in one step: tokenizer.encode("M'és una mica igual")
# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
embeddings.size()
# torch.Size([1, 8, 768])
embeddings.detach()
# tensor([[[-0.0261, 0.1166, -0.1075, ..., -0.0368, 0.0193, 0.0017],
# [ 0.1289, -0.2252, 0.9881, ..., -0.1353, 0.3534, 0.0734],
# [-0.0328, -1.2364, 0.9466, ..., 0.3455, 0.7010, -0.2085],
# ...,
# [ 0.0397, -1.0228, -0.2239, ..., 0.2932, 0.1248, 0.0813],
# [-0.0261, 0.1165, -0.1074, ..., -0.0368, 0.0193, 0.0017],
# [-0.1934, -0.2357, -0.2554, ..., 0.1831, 0.6085, 0.1421]]])
```
## Authors
CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.
<a href="https://huggingface.co/exbert/?model=codegram/calbert-base-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"language": "ca", "license": "mit", "tags": ["masked-lm", "catalan", "exbert"]}
|
codegram/calbert-base-uncased
| null |
[
"transformers",
"pytorch",
"albert",
"masked-lm",
"catalan",
"exbert",
"ca",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ca"
] |
TAGS
#transformers #pytorch #albert #masked-lm #catalan #exbert #ca #license-mit #endpoints_compatible #region-us
|
Calbert: a Catalan Language Model
=================================
Introduction
------------
CALBERT is an open-source language model for Catalan pretrained on the ALBERT architecture.
It is now available on Hugging Face in its 'tiny-uncased' version and 'base-uncased' (the one you're looking at) as well, and was pretrained on the OSCAR dataset.
For further information or requests, please go to the GitHub repository
Pre-trained models
------------------
Model: 'codegram' / 'calbert-tiny-uncased', Arch.: Tiny (uncased), Training data: OSCAR (4.3 GB of text)
Model: 'codegram' / 'calbert-base-uncased', Arch.: Base (uncased), Training data: OSCAR (4.3 GB of text)
How to use Calbert with HuggingFace
-----------------------------------
#### Load Calbert and its tokenizer:
#### Filling masks using pipeline
#### Extract contextual embedding features from Calbert output
Authors
-------
CALBERT was trained and evaluated by Txus Bach, as part of Codegram's applied research.
[<img width="300px" src="URL
</a>](URL%20força%20saber-ne%20més)
|
[
"#### Load Calbert and its tokenizer:",
"#### Filling masks using pipeline",
"#### Extract contextual embedding features from Calbert output\n\n\nAuthors\n-------\n\n\nCALBERT was trained and evaluated by Txus Bach, as part of Codegram's applied research.\n\n\n[<img width=\"300px\" src=\"URL\n</a>](URL%20força%20saber-ne%20més)"
] |
[
"TAGS\n#transformers #pytorch #albert #masked-lm #catalan #exbert #ca #license-mit #endpoints_compatible #region-us \n",
"#### Load Calbert and its tokenizer:",
"#### Filling masks using pipeline",
"#### Extract contextual embedding features from Calbert output\n\n\nAuthors\n-------\n\n\nCALBERT was trained and evaluated by Txus Bach, as part of Codegram's applied research.\n\n\n[<img width=\"300px\" src=\"URL\n</a>](URL%20força%20saber-ne%20més)"
] |
null |
transformers
|
# Calbert: a Catalan Language Model
## Introduction
CALBERT is an open-source language model for Catalan pretrained on the ALBERT architecture.
It is now available on Hugging Face in both its `tiny-uncased` (the one you're looking at) and `base-uncased` versions, and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).
For further information or requests, please go to the [GitHub repository](https://github.com/codegram/calbert).
## Pre-trained models
| Model | Arch. | Training data |
| ----------------------------------- | -------------- | ---------------------- |
| `codegram` / `calbert-tiny-uncased` | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram` / `calbert-base-uncased` | Base (uncased) | OSCAR (4.3 GB of text) |
## How to use Calbert with HuggingFace
#### Load Calbert and its tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-tiny-uncased")
model = AutoModel.from_pretrained("codegram/calbert-tiny-uncased")
model.eval()  # disable dropout (or leave in train mode to finetune)
```
#### Filling masks using pipeline
```python
from transformers import pipeline
calbert_fill_mask = pipeline("fill-mask", model="codegram/calbert-tiny-uncased", tokenizer="codegram/calbert-tiny-uncased")
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.4403671622276306, 'token': 61},
# {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.050061386078596115, 'token': 43},
# {'sequence': "[CLS] m'agrada veure aixo[SEP]", 'score': 0.026286985725164413, 'token': 157},
# {'sequence': "[CLS] m'agrada bastant aixo[SEP]", 'score': 0.022483550012111664, 'token': 2143},
# {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.014491282403469086, 'token': 4867}]
```
#### Extract contextual embedding features from Calbert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: Can be done in one step: tokenizer.encode("M'és una mica igual")
# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
embeddings.size()
# torch.Size([1, 8, 312])
embeddings.detach()
# tensor([[[-0.2726, -0.9855, 0.9643, ..., 0.3511, 0.3499, -0.1984],
# [-0.2824, -1.1693, -0.2365, ..., -3.1866, -0.9386, -1.3718],
# [-2.3645, -2.2477, -1.6985, ..., -1.4606, -2.7294, 0.2495],
# ...,
# [ 0.8800, -0.0244, -3.0446, ..., 0.5148, -3.0903, 1.1879],
# [ 1.1300, 0.2425, 0.2162, ..., -0.5722, -2.2004, 0.4045],
# [ 0.4549, -0.2378, -0.2290, ..., -2.1247, -2.2769, -0.0820]]])
```
## Authors
CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.
<a href="https://huggingface.co/exbert/?model=codegram/calbert-tiny-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"language": "ca", "license": "mit", "tags": ["masked-lm", "catalan", "exbert"]}
|
codegram/calbert-tiny-uncased
| null |
[
"transformers",
"pytorch",
"albert",
"masked-lm",
"catalan",
"exbert",
"ca",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ca"
] |
TAGS
#transformers #pytorch #albert #masked-lm #catalan #exbert #ca #license-mit #endpoints_compatible #region-us
|
Calbert: a Catalan Language Model
=================================
Introduction
------------
CALBERT is an open-source language model for Catalan pretrained on the ALBERT architecture.
It is now available on Hugging Face in its 'tiny-uncased' version (the one you're looking at) and 'base-uncased' as well, and was pretrained on the OSCAR dataset.
For further information or requests, please go to the GitHub repository
Pre-trained models
------------------
Model: 'codegram' / 'calbert-tiny-uncased', Arch.: Tiny (uncased), Training data: OSCAR (4.3 GB of text)
Model: 'codegram' / 'calbert-base-uncased', Arch.: Base (uncased), Training data: OSCAR (4.3 GB of text)
How to use Calbert with HuggingFace
-----------------------------------
#### Load Calbert and its tokenizer:
#### Filling masks using pipeline
#### Extract contextual embedding features from Calbert output
Authors
-------
CALBERT was trained and evaluated by Txus Bach, as part of Codegram's applied research.
[<img width="300px" src="URL
</a>](URL%20força%20saber-ne%20més)
|
[
"#### Load Calbert and its tokenizer:",
"#### Filling masks using pipeline",
"#### Extract contextual embedding features from Calbert output\n\n\nAuthors\n-------\n\n\nCALBERT was trained and evaluated by Txus Bach, as part of Codegram's applied research.\n\n\n[<img width=\"300px\" src=\"URL\n</a>](URL%20força%20saber-ne%20més)"
] |
[
"TAGS\n#transformers #pytorch #albert #masked-lm #catalan #exbert #ca #license-mit #endpoints_compatible #region-us \n",
"#### Load Calbert and its tokenizer:",
"#### Filling masks using pipeline",
"#### Extract contextual embedding features from Calbert output\n\n\nAuthors\n-------\n\n\nCALBERT was trained and evaluated by Txus Bach, as part of Codegram's applied research.\n\n\n[<img width=\"300px\" src=\"URL\n</a>](URL%20força%20saber-ne%20més)"
] |
text2text-generation
|
transformers
|
This model is a paraphraser designed for the Adversarial Paraphrasing Task described and used in this paper: https://aclanthology.org/2021.acl-long.552/.
Please refer to `nap_generation.py` on the GitHub repository for ways to better utilize this model with top-k and top-p sampling. The demo on Hugging Face will output only one sentence, which will most likely be the same as the input sentence, since the model is meant to generate with beam search and sampling.
Github repository: https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt.git
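For illustration only, a sampling-based generation loop might look like the sketch below; the `paraphrase:` prefix and the decoding settings are assumptions, so consult `nap_generation.py` for the authors' exact prompts and parameters.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "AMHR/T5-for-Adversarial-Paraphrasing"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

sentence = "The weather was terrible, so we stayed indoors all day."
inputs = tokenizer("paraphrase: " + sentence, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,        # sample instead of plain greedy/beam decoding
    top_k=50,              # top-k sampling
    top_p=0.95,            # nucleus (top-p) sampling
    num_return_sequences=5,
    max_length=64,
)
for candidate in outputs:
    print(tokenizer.decode(candidate, skip_special_tokens=True))
```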
Please cite the following if you use this model:
```bib
@inproceedings{nighojkar-licato-2021-improving,
title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task",
author = "Nighojkar, Animesh and
Licato, John",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.552",
pages = "7106--7116",
abstract = "If two sentences have the same meaning, it should follow that they are equivalent in their inferential properties, i.e., each sentence should textually entail the other. However, many paraphrase datasets currently in widespread use rely on a sense of paraphrase based on word overlap and syntax. Can we teach them instead to identify paraphrases in a way that draws on the inferential properties of the sentences, and is not over-reliant on lexical and syntactic similarities of a sentence pair? We apply the adversarial paradigm to this question, and introduce a new adversarial method of dataset creation for paraphrase identification: the Adversarial Paraphrasing Task (APT), which asks participants to generate semantically equivalent (in the sense of mutually implicative) but lexically and syntactically disparate paraphrases. These sentence pairs can then be used both to test paraphrase identification models (which get barely random accuracy) and then improve their performance. To accelerate dataset generation, we explore automation of APT using T5, and show that the resulting dataset also improves accuracy. We discuss implications for paraphrase detection and release our dataset in the hope of making paraphrase detection models better able to detect sentence-level meaning equivalence.",
}
```
|
{}
|
AMHR/T5-for-Adversarial-Paraphrasing
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This model is a paraphraser designed for the Adversarial Paraphrasing Task described and used in this paper: URL
Please refer to 'nap_generation.py' on the github repository for ways to better utilize this model using concepts of top-k sampling and top-p sampling. The demo on huggingface will output only one sentence which will most likely be the same as the input sentence since the model is supposed to output using beam search and sampling.
Github repository: URL
Please cite the following if you use this model:
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification
|
transformers
|
This model is a paraphrase detector trained on the Adversarial Paraphrasing datasets described and used in this paper: https://aclanthology.org/2021.acl-long.552/.
Github repository: https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt.git
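A minimal scoring sketch is given below; it assumes the detector takes a jointly encoded sentence pair, and which logit index corresponds to the "paraphrase" label should be checked against the repository.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "AMHR/adversarial-paraphrasing-detector"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

s1 = "The quick brown fox jumps over the lazy dog."
s2 = "A fast, dark-colored fox leaps over a sleepy dog."
inputs = tokenizer(s1, s2, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the two classes (label order is an assumption).
print(torch.softmax(logits, dim=-1))
```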
Please cite the following if you use this model:
```bib
@inproceedings{nighojkar-licato-2021-improving,
title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task",
author = "Nighojkar, Animesh and
Licato, John",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.552",
pages = "7106--7116",
abstract = "If two sentences have the same meaning, it should follow that they are equivalent in their inferential properties, i.e., each sentence should textually entail the other. However, many paraphrase datasets currently in widespread use rely on a sense of paraphrase based on word overlap and syntax. Can we teach them instead to identify paraphrases in a way that draws on the inferential properties of the sentences, and is not over-reliant on lexical and syntactic similarities of a sentence pair? We apply the adversarial paradigm to this question, and introduce a new adversarial method of dataset creation for paraphrase identification: the Adversarial Paraphrasing Task (APT), which asks participants to generate semantically equivalent (in the sense of mutually implicative) but lexically and syntactically disparate paraphrases. These sentence pairs can then be used both to test paraphrase identification models (which get barely random accuracy) and then improve their performance. To accelerate dataset generation, we explore automation of APT using T5, and show that the resulting dataset also improves accuracy. We discuss implications for paraphrase detection and release our dataset in the hope of making paraphrase detection models better able to detect sentence-level meaning equivalence.",
}
```
|
{}
|
AMHR/adversarial-paraphrasing-detector
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
This model is a paraphrase detector trained on the Adversarial Paraphrasing datasets described and used in this paper: URL
Github repository: URL
Please cite the following if you use this model:
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |