modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
alireza7/ARMAN-SH-persian-base-parsinlu-sentiment-movie | alireza7 | 2021-09-29T19:18:54Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-parsinlu-qqp | alireza7 | 2021-09-29T19:18:12Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-parsinlu-multiple-choice | alireza7 | 2021-09-29T19:18:05Z | 9 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-PN-summary | alireza7 | 2021-09-29T19:17:58Z | 4 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base | alireza7 | 2021-09-29T19:17:50Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-voa-title | alireza7 | 2021-09-29T19:17:05Z | 4 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-tebyan | alireza7 | 2021-09-29T19:16:58Z | 10 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-perkey-summary | alireza7 | 2021-09-29T19:16:27Z | 6 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-parsinlu-textual-entailment | alireza7 | 2021-09-29T19:16:04Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-parsinlu-sentiment-movie | alireza7 | 2021-09-29T19:15:47Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-parsinlu-qqp | alireza7 | 2021-09-29T19:15:19Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-parsinlu-multiple-choice | alireza7 | 2021-09-29T19:15:05Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-PN-summary | alireza7 | 2021-09-29T19:14:47Z | 61 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
dweb/deberta-base-CoLA | dweb | 2021-09-29T17:37:10Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "deberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-base-CoLA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-CoLA
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1655
- Accuracy: 0.8482
- F1: 0.8961
- Roc Auc: 0.8987
- Mcc: 0.6288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
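As a rough illustration, these settings would correspond to a `TrainingArguments` configuration along the following lines (a minimal sketch assuming the standard 🤗 Transformers `Trainer` API; the original training script is not included in this card):
```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters to TrainingArguments;
# the Adam betas/epsilon above are the library defaults.
training_args = TrainingArguments(
    output_dir="deberta-base-CoLA",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=10,
)
```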
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Roc Auc | Mcc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-------:|:------:|
| 0.5266 | 1.0 | 535 | 0.4138 | 0.8159 | 0.8698 | 0.8627 | 0.5576 |
| 0.3523 | 2.0 | 1070 | 0.3852 | 0.8387 | 0.8880 | 0.9041 | 0.6070 |
| 0.2479 | 3.0 | 1605 | 0.3981 | 0.8482 | 0.8901 | 0.9120 | 0.6447 |
| 0.1712 | 4.0 | 2140 | 0.4732 | 0.8558 | 0.9008 | 0.9160 | 0.6486 |
| 0.1354 | 5.0 | 2675 | 0.7181 | 0.8463 | 0.8938 | 0.9024 | 0.6250 |
| 0.0876 | 6.0 | 3210 | 0.8453 | 0.8520 | 0.8992 | 0.9123 | 0.6385 |
| 0.0682 | 7.0 | 3745 | 1.0282 | 0.8444 | 0.8938 | 0.9061 | 0.6189 |
| 0.0431 | 8.0 | 4280 | 1.1114 | 0.8463 | 0.8960 | 0.9010 | 0.6239 |
| 0.0323 | 9.0 | 4815 | 1.1663 | 0.8501 | 0.8970 | 0.8967 | 0.6340 |
| 0.0163 | 10.0 | 5350 | 1.1655 | 0.8482 | 0.8961 | 0.8987 | 0.6288 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
WurmWillem/DialoGPT-medium-RickandMorty3 | WurmWillem | 2021-09-29T17:15:34Z | 0 | 0 | null | ["conversational", "region:us"] | null | 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
|
huggingartists/platina | huggingartists | 2021-09-29T17:06:31Z | 4 | 0 | transformers | ["transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/platina", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/platina
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b12dc90e6f405684ef6b74c9de92fdcd.853x853x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Платина (Platina)</div>
<a href="https://genius.com/artists/platina">
<div style="text-align: center; font-size: 14px;">@platina</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Платина (Platina).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/platina).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/platina")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2ih365j7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Платина (Platina)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1quasiz0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1quasiz0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/platina')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/platina")
model = AutoModelWithLMHead.from_pretrained("huggingartists/platina")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/cyrusshepard-fastfwdco-lilyraynyc | huggingtweets | 2021-09-29T08:19:04Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/cyrusshepard-fastfwdco-lilyraynyc/1632903540115/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/713653445262237696/mdyVSGoj_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1241620963768201216/sG68m_iE_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308419103510626304/gUgr1gMo_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">fastfwd & Cyrus & Lily Ray 😏</div>
<div style="text-align: center; font-size: 14px;">@cyrusshepard-fastfwdco-lilyraynyc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from fastfwd & Cyrus & Lily Ray 😏.
| Data | fastfwd | Cyrus | Lily Ray 😏 |
| --- | --- | --- | --- |
| Tweets downloaded | 945 | 3248 | 3250 |
| Retweets | 60 | 343 | 89 |
| Short tweets | 5 | 729 | 310 |
| Tweets kept | 880 | 2176 | 2851 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k89f9gx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cyrusshepard-fastfwdco-lilyraynyc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3eq4v17k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3eq4v17k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cyrusshepard-fastfwdco-lilyraynyc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TalTechNLP/espnet2_estonian | TalTechNLP | 2021-09-29T07:36:28Z | 7 | 1 | espnet | ["espnet", "audio", "automatic-speech-recognition", "et", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: et
license: cc-by-4.0
---
# Estonian Espnet2 ASR model
## Model description
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech.
## Intended uses & limitations
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
## How to use
```python
from espnet2.bin.asr_inference import Speech2Text
model = Speech2Text.from_pretrained(
"TalTechNLP/espnet2_estonian",
lm_weight=0.6, ctc_weight=0.4, beam_size=60
)
# read a sound file with 16k sample rate
import soundfile
speech, rate = soundfile.read("speech.wav")
assert rate == 16000
text, *_ = model(speech)
print(text[0])
```
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
## Training data
Acoustic training data:
| Type | Amount (h) |
|-----------------------|:------:|
| Broadcast speech | 591 |
| Spontaneous speech | 53 |
| Elderly speech corpus | 53 |
| Talks, lectures | 49 |
| Parliament speeches | 31 |
| *Total* | *761* |
Language model training data:
* Estonian National Corpus 2019
* OpenSubtitles
* Speech transcripts
## Training procedure
Standard EspNet2 Conformer recipe.
## Evaluation results
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/aktuaalne2021.testset|2864|56575|93.1|4.5|2.4|2.0|8.9|63.4|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/jutusaated.devset|273|4677|93.9|3.6|2.4|1.2|7.3|46.5|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/jutusaated.testset|818|11093|94.7|2.7|2.5|0.9|6.2|45.0|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/www-trans.devset|1207|13865|82.3|8.5|9.3|3.4|21.2|74.1|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/www-trans.testset|1648|22707|86.4|7.6|6.0|2.5|16.1|75.7|
### BibTeX entry and citation info
#### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
|
huggingtweets/balcobops-liyrex_irl-waitforgot | huggingtweets | 2021-09-29T04:04:44Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/balcobops-liyrex_irl-waitforgot/1632888280266/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1379189650556911619/EiZklugS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1441278016164818955/T-PDXXvg_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438447321604313089/5_lZmeyb_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Wait Forgot & Balco - Special Boperative & Liyrex</div>
<div style="text-align: center; font-size: 14px;">@balcobops-liyrex_irl-waitforgot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Wait Forgot & Balco - Special Boperative & Liyrex.
| Data | Wait Forgot | Balco - Special Boperative | Liyrex |
| --- | --- | --- | --- |
| Tweets downloaded | 3194 | 1171 | 3189 |
| Retweets | 1294 | 129 | 1587 |
| Short tweets | 285 | 122 | 279 |
| Tweets kept | 1615 | 920 | 1323 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/371suxoa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @balcobops-liyrex_irl-waitforgot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bj54dpp8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bj54dpp8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/balcobops-liyrex_irl-waitforgot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lewtun/bert-base-uncased-finetuned-imdb | lewtun | 2021-09-28T20:45:38Z | 7 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: bert-base-uncased-finetuned-imdb
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: imdb
type: imdb
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0284
## Model description
More information needed
## Intended uses & limitations
More information needed
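A minimal usage sketch with the `fill-mask` pipeline (the example sentence is illustrative and assumes the standard BERT `[MASK]` token):
```python
from transformers import pipeline

# Minimal sketch: query the fine-tuned masked language model.
fill_mask = pipeline("fill-mask", model="lewtun/bert-base-uncased-finetuned-imdb")
print(fill_mask("This movie was absolutely [MASK]."))
```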
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2244 | 1.0 | 958 | 2.0726 |
| 2.1537 | 2.0 | 1916 | 2.0381 |
| 2.1183 | 3.0 | 2874 | 2.0284 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
lewtun/MiniLM-L12-H384-uncased-finetuned-imdb | lewtun | 2021-09-28T18:59:38Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: MiniLM-L12-H384-uncased-finetuned-imdb
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: imdb
type: imdb
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased-finetuned-imdb
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2464 | 1.0 | 391 | 4.2951 |
| 4.2302 | 2.0 | 782 | 4.0023 |
| 4.0726 | 3.0 | 1173 | 3.9328 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
sgugger/marian-finetuned-kde4-en-to-fr | sgugger | 2021-09-28T13:47:35Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 53.2503
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8666
- Bleu: 53.2503
- Gen Len: 14.7005
## Model description
More information needed
## Intended uses & limitations
More information needed
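A minimal usage sketch with the `translation` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# Minimal sketch: English -> French translation with the fine-tuned Marian model.
translator = pipeline("translation", model="sgugger/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```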
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
huggingtweets/plinz | huggingtweets | 2021-09-28T12:42:39Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/plinz/1632832956311/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/936396593762357248/f66CtXot_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joscha Bach</div>
<div style="text-align: center; font-size: 14px;">@plinz</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joscha Bach.
| Data | Joscha Bach |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 298 |
| Short tweets | 131 |
| Tweets kept | 2819 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zr1xovwx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @plinz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bpt8w0c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bpt8w0c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/plinz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Kceilord/autonlp-tc-13522454 | Kceilord | 2021-09-28T10:46:23Z | 3 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:Kceilord/autonlp-data-tc", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Kceilord/autonlp-data-tc
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 13522454
## Validation Metrics
- Loss: 0.31450966000556946
- Accuracy: 0.8461538461538461
- Precision: 0.8181818181818182
- Recall: 0.782608695652174
- AUC: 0.9369259032455604
- F1: 0.8
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Kceilord/autonlp-tc-13522454
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
pere/nb-nn-dev | pere | 2021-09-28T07:34:18Z | 5 | 0 | transformers | ["transformers", "pytorch", "jax", "tensorboard", "translation", "no", "dataset:oscar", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---
# Norwegian mT5 - Translation Bokmål Nynorsk - Development
## Description
This is the development version of the Bokmål-Nynorsk translator. If you need a stable version, please use [this version](https://huggingface.co/pere/nb-nn-translation/) instead.
Here is an example of how to use the model from Python:
```python
# Import libraries
from transformers import T5ForConditionalGeneration, AutoTokenizer
model = T5ForConditionalGeneration.from_pretrained('pere/nb-nn-dev',from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('pere/nb-nn-dev')
#Encode the text
text = "Hun vil ikke gi bort sine personlige data."
inputs = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True)
#Decode and print the result
print(tokenizer.decode(outputs[0]))
```
Or, if you prefer to use the pipeline instead:
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-dev')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
|
vesteinn/fasttext_is_rmh | vesteinn | 2021-09-27T22:09:07Z | 0 | 0 | null | ["is", "license:agpl-3.0", "region:us"] | null | 2022-03-02T23:29:05Z |
---
license: agpl-3.0
language:
- is
---
# FastText model trained on Icelandic
This model is trained on the lemmas of the Icelandic Gigaword Corpus, version 20.05. It was trained with the gensim package, version 4.1.0, using default parameters (100 dimensions, window size 5).
This model cannot be loaded directly since it uses gensim; clone the repository and run the following to use it.
```python
import gensim
model = gensim.models.FastText.load("./rmh.w2v.model")
```
## Example output
```bash
In [1]: model.wv.most_similar("england")
Out[1]:
[('englands', 0.8778558969497681),
('southland', 0.8573296070098877),
('skotland', 0.846065878868103),
('englaland', 0.8320872187614441),
('hoogland', 0.8299505114555359),
('hoagland', 0.8277317881584167),
('totland', 0.8265103697776794),
('lackland', 0.8234561681747437),
('skarpengland', 0.8227219581604004),
('langland', 0.8222305774688721)]
In [2]: model.wv.most_similar("kanína")
Out[2]:
[('loðkanína', 0.9271067976951599),
('dvergkanína', 0.9106121063232422),
('angórakanína', 0.895512044429779),
('angórukanína', 0.8741581439971924),
('feldkanína', 0.8696010708808899),
('kanínubangsi', 0.8562541604042053),
('holdakanína', 0.8543838858604431),
('villikanína', 0.8525990843772888),
('silkikanína', 0.8515204191207886),
('kaníni', 0.8445548415184021)]
```
|
huggingtweets/lemonjellyhats | huggingtweets | 2021-09-27T20:44:01Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/lemonjellyhats/1632775437870/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1388380421780611074/njQLSVzW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tanisha</div>
<div style="text-align: center; font-size: 14px;">@lemonjellyhats</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tanisha.
| Data | tanisha |
| --- | --- |
| Tweets downloaded | 791 |
| Retweets | 241 |
| Short tweets | 19 |
| Tweets kept | 531 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xyrf4cur/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lemonjellyhats's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yfq9m4c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yfq9m4c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lemonjellyhats')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents | chrommium | 2021-09-27T19:10:48Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-base-cased-sentence-finetuned-sent_in_news_sents
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7224199288256228
- name: F1
type: f1
value: 0.5137303178348194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_news_sents
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9506
- Accuracy: 0.7224
- F1: 0.5137
## Model description
More information needed
## Intended uses & limitations
More information needed
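A minimal usage sketch with the `text-classification` pipeline (the example sentence is illustrative, and the label-to-sentiment mapping is not documented here, so verify it before relying on the output):
```python
from transformers import pipeline

# Minimal sketch: sentence-level sentiment classification for Russian news text.
classifier = pipeline(
    "text-classification",
    model="chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents",
)
print(classifier("Компания сообщила о росте прибыли в третьем квартале."))
```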
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 1.0045 | 0.6690 | 0.1388 |
| No log | 2.0 | 162 | 0.9574 | 0.6228 | 0.2980 |
| No log | 3.0 | 243 | 1.0259 | 0.6477 | 0.3208 |
| No log | 4.0 | 324 | 1.1262 | 0.6619 | 0.4033 |
| No log | 5.0 | 405 | 1.3377 | 0.6299 | 0.3909 |
| No log | 6.0 | 486 | 1.5716 | 0.6868 | 0.3624 |
| 0.6085 | 7.0 | 567 | 1.6286 | 0.6762 | 0.4130 |
| 0.6085 | 8.0 | 648 | 1.6450 | 0.6940 | 0.4775 |
| 0.6085 | 9.0 | 729 | 1.7108 | 0.7224 | 0.4920 |
| 0.6085 | 10.0 | 810 | 1.8792 | 0.7046 | 0.5028 |
| 0.6085 | 11.0 | 891 | 1.8670 | 0.7153 | 0.4992 |
| 0.6085 | 12.0 | 972 | 1.8856 | 0.7153 | 0.4934 |
| 0.0922 | 13.0 | 1053 | 1.9506 | 0.7224 | 0.5137 |
| 0.0922 | 14.0 | 1134 | 2.0363 | 0.7189 | 0.4761 |
| 0.0922 | 15.0 | 1215 | 2.0601 | 0.7224 | 0.5053 |
| 0.0922 | 16.0 | 1296 | 2.0813 | 0.7153 | 0.5038 |
| 0.0922 | 17.0 | 1377 | 2.0960 | 0.7189 | 0.5065 |
| 0.0922 | 18.0 | 1458 | 2.1060 | 0.7224 | 0.5098 |
| 0.0101 | 19.0 | 1539 | 2.1153 | 0.7260 | 0.5086 |
| 0.0101 | 20.0 | 1620 | 2.1187 | 0.7260 | 0.5086 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Unbabel/gec-t5_small | Unbabel | 2021-09-27T11:27:48Z | 287 | 23 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "grammatical error correction", "text2text", "en", "dataset:clang-8", "dataset:conll-14", "dataset:conll-13", "arxiv:2106.03830", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- grammatical error correction
- text2text
- t5
license: apache-2.0
datasets:
- clang-8
- conll-14
- conll-13
metrics:
- f0.5
---
This model is an implementation of the paper [A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/pdf/2106.03830.pdf) from Google, which reports state-of-the-art results on the task of Grammatical Error Correction (GEC).
We implement the T5-small version, which achieves the F_0.5 score reported in the paper (60.70).
To effectively use the "Hosted inference API", write "gec: [YOUR SENTENCE HERE]".
In order to use the model, look at the following snippet:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("Unbabel/gec-t5_small")
tokenizer = T5Tokenizer.from_pretrained('t5-small')
sentence = "I like to swimming"
tokenized_sentence = tokenizer('gec: ' + sentence, max_length=128, truncation=True, padding='max_length', return_tensors='pt')
corrected_sentence = tokenizer.decode(
model.generate(
input_ids = tokenized_sentence.input_ids,
attention_mask = tokenized_sentence.attention_mask,
max_length=128,
num_beams=5,
early_stopping=True,
)[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True
)
print(corrected_sentence) # -> I like swimming.
```
|
vppvgit/BiblItBERT-1 | vppvgit | 2021-09-27T09:40:47Z | 3 | 0 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: BiblItBERT-1
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiblItBERT-1
This model is a fine-tuned version of [vppvgit/BiblItBERT](https://huggingface.co/vppvgit/BiblItBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7775
## Model description
More information needed
## Intended uses & limitations
More information needed
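A minimal usage sketch with the `fill-mask` pipeline (the Italian example sentence is illustrative and assumes the standard BERT `[MASK]` token):
```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the fine-tuned Italian BERT.
fill_mask = pipeline("fill-mask", model="vppvgit/BiblItBERT-1")
print(fill_mask("La biblioteca conserva molti [MASK] antichi."))
```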
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.5764 | 1.0 | 16528 | 1.5214 |
| 1.4572 | 2.0 | 33056 | 1.4201 |
| 1.3787 | 3.0 | 49584 | 1.3728 |
| 1.3451 | 4.0 | 66112 | 1.3245 |
| 1.3066 | 5.0 | 82640 | 1.2614 |
| 1.2447 | 6.0 | 99168 | 1.2333 |
| 1.2172 | 7.0 | 115696 | 1.2149 |
| 1.2079 | 8.0 | 132224 | 1.1853 |
| 1.2167 | 9.0 | 148752 | 1.1586 |
| 1.2056 | 10.0 | 165280 | 1.1503 |
| 1.1307 | 11.0 | 181808 | 1.1224 |
| 1.1689 | 12.0 | 198336 | 1.1074 |
| 1.1007 | 13.0 | 214864 | 1.0924 |
| 1.0901 | 14.0 | 231392 | 1.0659 |
| 1.0667 | 15.0 | 247920 | 1.0650 |
| 1.0434 | 16.0 | 264448 | 1.0362 |
| 1.0333 | 17.0 | 280976 | 1.0250 |
| 1.0342 | 18.0 | 297504 | 1.0198 |
| 1.0059 | 19.0 | 314032 | 0.9950 |
| 0.9719 | 20.0 | 330560 | 0.9836 |
| 0.9863 | 21.0 | 347088 | 0.9873 |
| 0.9781 | 22.0 | 363616 | 0.9724 |
| 0.9369 | 23.0 | 380144 | 0.9599 |
| 0.9578 | 24.0 | 396672 | 0.9557 |
| 0.9253 | 25.0 | 413200 | 0.9400 |
| 0.9441 | 26.0 | 429728 | 0.9222 |
| 0.9138 | 27.0 | 446256 | 0.9140 |
| 0.882 | 28.0 | 462784 | 0.9045 |
| 0.864 | 29.0 | 479312 | 0.8880 |
| 0.8632 | 30.0 | 495840 | 0.9023 |
| 0.8342 | 32.0 | 528896 | 0.8740 |
| 0.8037 | 34.0 | 561952 | 0.8647 |
| 0.8119 | 37.0 | 611536 | 0.8358 |
| 0.8011 | 38.0 | 628064 | 0.8252 |
| 0.786 | 39.0 | 644592 | 0.8228 |
| 0.7697 | 41.0 | 677648 | 0.8138 |
| 0.7485 | 42.0 | 694176 | 0.8104 |
| 0.7689 | 43.0 | 710704 | 0.8018 |
| 0.7401 | 45.0 | 743760 | 0.7957 |
| 0.7031 | 47.0 | 776816 | 0.7726 |
| 0.7578 | 48.0 | 793344 | 0.7864 |
| 0.7298 | 49.0 | 809872 | 0.7775 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
microsoft/layoutlm-base-cased | microsoft | 2021-09-27T05:55:31Z | 53,140 | 17 | transformers | ["transformers", "pytorch", "layoutlm", "arxiv:1912.13318", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
# LayoutLM
**Multimodal (text + layout/format + image) pre-training for document AI**
[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlm)
## Model description
LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318)
Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers)
## Different Tokenizer
Note that LayoutLM-Cased requires a different tokenizer, based on RobertaTokenizer. You can
initialize it as follows:
~~~
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('microsoft/layoutlm-base-cased')
~~~
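Loading the model itself works analogously (a minimal sketch; for downstream tasks you would typically use one of the task-specific LayoutLM heads, and the model additionally expects bounding-box inputs at inference time):
~~~
from transformers import AutoTokenizer, LayoutLMModel

tokenizer = AutoTokenizer.from_pretrained('microsoft/layoutlm-base-cased')
model = LayoutLMModel.from_pretrained('microsoft/layoutlm-base-cased')
~~~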
## Citation
If you find LayoutLM useful in your research, please cite the following paper:
``` latex
@misc{xu2019layoutlm,
title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding},
author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou},
year={2019},
eprint={1912.13318},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huggingtweets/aly__dixon-haleyosomething-svpino | huggingtweets | 2021-09-26T12:49:07Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/aly__dixon-haleyosomething-svpino/1632660543535/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1416541994952937474/yi5cJxnq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1368667185879584770/pKNxJut-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1393327649318076417/cQWDVv-q_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">haley o'shaughnessy & Santiago & Aly Dixon</div>
<div style="text-align: center; font-size: 14px;">@aly__dixon-haleyosomething-svpino</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from haley o'shaughnessy & Santiago & Aly Dixon.
| Data | haley o'shaughnessy | Santiago | Aly Dixon |
| --- | --- | --- | --- |
| Tweets downloaded | 3241 | 3250 | 3003 |
| Retweets | 430 | 7 | 426 |
| Short tweets | 460 | 316 | 195 |
| Tweets kept | 2351 | 2927 | 2382 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mt8xsda/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aly__dixon-haleyosomething-svpino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31g4nsgq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31g4nsgq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aly__dixon-haleyosomething-svpino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Davlan/mbart50-large-yor-eng-mt | Davlan | 2021-09-26T12:40:29Z | 4 | 0 | transformers | ["transformers", "pytorch", "mbart", "text2text-generation", "arxiv:2103.08647", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-yor-eng-mt
## Model description
**mbart50-large-yor-eng-mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is an *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). The model was trained with Swahili (sw_KE) as the language code, since the pre-trained model does not support Yorùbá; you therefore need to use sw_KE as the language code when evaluating the model.
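A minimal translation sketch following this convention (illustrative only, assuming the standard mBART-50 generation API; the Yorùbá example sentence is not from the training data):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("Davlan/mbart50-large-yor-eng-mt")
tokenizer = MBart50TokenizerFast.from_pretrained("Davlan/mbart50-large-yor-eng-mt")

# Yorùbá has no mBART-50 language code, so sw_KE is used as a stand-in (see above).
tokenizer.src_lang = "sw_KE"
inputs = tokenizer("Kí ni orúkọ rẹ?", return_tensors="pt")  # illustrative Yorùbá input
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```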
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **15.88 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 15.57.
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/mbart50-large-eng-yor-mt | Davlan | 2021-09-26T11:57:50Z | 7 | 0 | transformers | ["transformers", "pytorch", "mbart", "text2text-generation", "arxiv:2103.08647", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-eng-yor-mt
## Model description
**mbart50-large-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is an *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). The model was trained with Swahili (sw_KE) as the language code, since the pre-trained model does not support Yorùbá; you therefore need to use sw_KE as the language code when evaluating the model.
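The same convention applies in this direction; a minimal sketch (illustrative only, assuming the standard mBART-50 generation API):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("Davlan/mbart50-large-eng-yor-mt")
tokenizer = MBart50TokenizerFast.from_pretrained("Davlan/mbart50-large-eng-yor-mt")

tokenizer.src_lang = "en_XX"
inputs = tokenizer("Where is the market?", return_tensors="pt")  # illustrative English input
# sw_KE stands in for Yorùbá as the target language code (see above).
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```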
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **13.39 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 9.82.
### BibTeX entry and citation info
By David Adelani
```
```
|
KhoiNXM/KhoiNXM_Vietnamese_QA | KhoiNXM | 2021-09-26T03:46:26Z | 0 | 0 | null | ["region:us"] | null | 2022-03-02T23:29:04Z | Vietnamese QA model based on a custom dataset. |
napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100 | napoler | 2021-09-26T03:21:48Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
- Changed to relative position embeddings (relative_key_query)
- Modified the token type handling
```python
from transformers import BertTokenizer, BertModel,BertConfig
Config = BertConfig.from_pretrained("napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100")
tokenizer = BertTokenizer.from_pretrained('napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100')
model = BertModel.from_pretrained("napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100",config=Config)
```
Modification details:
https://www.kaggle.com/terrychanorg/bert-notebook9525623d9e
|
huggingtweets/caucasianjames-haleyosomething-officialkat | huggingtweets | 2021-09-26T02:14:24Z | 5 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/caucasianjames-haleyosomething-officialkat/1632622460306/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1416541994952937474/yi5cJxnq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/933947605104685056/mumGVsyS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1420078509230223363/u7XR7esE_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">haley o'shaughnessy & James & Kat Dennings</div>
<div style="text-align: center; font-size: 14px;">@caucasianjames-haleyosomething-officialkat</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from haley o'shaughnessy & James & Kat Dennings.
| Data | haley o'shaughnessy | James | Kat Dennings |
| --- | --- | --- | --- |
| Tweets downloaded | 3242 | 3242 | 3228 |
| Retweets | 431 | 89 | 689 |
| Short tweets | 460 | 602 | 424 |
| Tweets kept | 2351 | 2551 | 2115 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ctao3i2l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @caucasianjames-haleyosomething-officialkat's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vge9p265) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vge9p265/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/caucasianjames-haleyosomething-officialkat')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Hate-speech-CNERG/deoffxlmr-mono-kannada
|
Hate-speech-CNERG
| 2021-09-25T14:01:14Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"kn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: kn
license: apache-2.0
---
This model is used to detect **Offensive Content** in **Kannada Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Kannada (pure and code-mixed) data. The weights are initialized from pretrained XLM-RoBERTa-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. A genetic-algorithm-based ensemble of the test predictions obtained the second-highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model - 0.73, ensemble - 0.74).
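A minimal classification sketch (the Kannada example comment is illustrative; label names should be read from `model.config.id2label`):
```python
from transformers import pipeline

# minimal sketch; the example comment is illustrative
classifier = pipeline("text-classification", model="Hate-speech-CNERG/deoffxlmr-mono-kannada")
print(classifier("ಈ ವೀಡಿಯೊ ತುಂಬಾ ಚೆನ್ನಾಗಿದೆ"))  # label names follow model.config.id2label
```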
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-spanish
|
Hate-speech-CNERG
| 2021-09-25T14:00:12Z | 136 | 8 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"es",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: es
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Spanish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Spanish-language data. It is fine-tuned from a multilingual BERT model.
The model was trained with several learning rates; the best validation score, 0.740287, was achieved with a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-polish
|
Hate-speech-CNERG
| 2021-09-25T13:58:40Z | 110 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"pl",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: pl
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Polish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Polish-language data. It is fine-tuned from a multilingual BERT model.
The model was trained with several learning rates; the best validation score, 0.723254, was achieved with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-english
|
Hate-speech-CNERG
| 2021-09-25T13:55:16Z | 90,803 | 10 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: en
license: apache-2.0
---
This model is used for detecting **hate speech** in the **English language**. The mono in the name refers to the monolingual setting, where the model is trained using only English-language data. It is fine-tuned from a multilingual BERT model.
The model was trained with several learning rates; the best validation score, 0.726030, was achieved with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
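A minimal classification sketch (the example sentence is illustrative; label names should be read from `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# minimal sketch; the input sentence is illustrative
model_name = "Hate-speech-CNERG/dehatebert-mono-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("I really do not like those people.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(model.config.id2label, probs)  # per-class probabilities
```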
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-french
|
Hate-speech-CNERG
| 2021-09-25T13:51:14Z | 158 | 4 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"fr",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: fr
license: apache-2.0
---
This model is used for detecting **hate speech** in the **French language**. The mono in the name refers to the monolingual setting, where the model is trained using only French-language data. It is fine-tuned from a multilingual BERT model.
The model was trained with several learning rates; the best validation score, 0.692094, was achieved with a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
huggingtweets/sixjay__
|
huggingtweets
| 2021-09-25T11:43:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/sixjay__/1632570148333/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1434204311505055754/Ozub-Lmd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">joj</div>
<div style="text-align: center; font-size: 14px;">@sixjay__</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from joj.
| Data | joj |
| --- | --- |
| Tweets downloaded | 2494 |
| Retweets | 508 |
| Short tweets | 429 |
| Tweets kept | 1557 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wcyvex9s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sixjay__'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6yf1o7q5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6yf1o7q5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sixjay__')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
superb/superb-test-org__test-submission-with-example-expert__d609b3c32044e50e3d5e9067bd97af1b42f04b0e
|
superb
| 2021-09-24T19:49:31Z | 0 | 0 | null |
[
"tensorboard",
"library:s3prl",
"benchmark:superb",
"type:model",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
datasets:
- superb
tags:
- library:s3prl
- benchmark:superb
- type:model
---
# Fine-tuned s3prl model
Upstream Model: superb-test-org/test-submission-with-example-expert
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
SaulLu/cotet5_small_fix
|
SaulLu
| 2021-09-24T17:56:36Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"codet5",
"dataset:code_search_net",
"arxiv:2109.00859",
"arxiv:1909.09436",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- codet5
datasets:
- code_search_net
inference: false
---
# CodeT5 (small-sized model)
Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models
for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5).
Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)).
## Model description
From the abstract:
"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code."
## Intended uses & limitations
This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:
* code summarization
* code generation
* code translation
* code refinement
* code defect detection
* code clone detection.
See the [model hub](https://huggingface.co/models?search=salesforce/codet) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration
tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-small')
model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-small')
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids
# simply generate a single sequence
generated_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
# this prints "user: {user.name}"
```
## Training data
The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.
## Training procedure
### Preprocessing
This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.
## Evaluation results
For evaluation results on several downstream benchmarks, we refer to the paper.
### BibTeX entry and citation info
```bibtex
@misc{wang2021codet5,
title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation},
author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi},
year={2021},
eprint={2109.00859},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Adi2K/Priv-Consent
|
Adi2K
| 2021-09-24T12:53:04Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"eng",
"dataset:Adi2K/autonlp-data-Priv-Consent",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: eng
widget:
- text: "You can control cookies and tracking tools. To learn how to manage how we - and our vendors - use cookies and other tracking tools, please click here."
datasets:
- Adi2K/autonlp-data-Priv-Consent
---
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Adi2K/autonlp-Priv-Consent-12592372
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
gchhablani/fnet-large-finetuned-rte
|
gchhablani
| 2021-09-24T11:27:19Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6425992779783394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-rte
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.6426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7105 | 1.0 | 623 | 0.6887 | 0.5740 |
| 0.6714 | 2.0 | 1246 | 0.6742 | 0.6209 |
| 0.509 | 3.0 | 1869 | 0.7528 | 0.6426 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16
|
bionlp
| 2021-09-24T07:46:55Z | 46 | 5 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"bluebert",
"en",
"dataset:PubMed",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- bert
- bluebert
license: cc0-1.0
datasets:
- PubMed
---
# BlueBert-Large, Uncased, PubMed
## Model description
A BERT model pre-trained on PubMed abstracts.
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
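Since this is a plain BERT encoder, a minimal way to extract contextual embeddings with the Transformers library is sketched below (the input sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

# minimal embedding-extraction sketch; the sentence is illustrative
model_name = "bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("The patient was administered 5 mg of warfarin.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024)
```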
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing non-ASCII characters (anything outside `\x00`-`\x7F`)
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

# `value` holds one raw PubMed sentence
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making the data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16
|
bionlp
| 2021-09-24T07:46:34Z | 57 | 4 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"bluebert",
"en",
"dataset:PubMed",
"dataset:MIMIC-III",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- bert
- bluebert
license: cc0-1.0
datasets:
- PubMed
- MIMIC-III
---
# BlueBert-Large, Uncased, PubMed and MIMIC-III
## Model description
A BERT model pre-trained on PubMed abstracts and clinical notes ([MIMIC-III](https://mimic.physionet.org/)).
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing non-ASCII characters (anything outside `\x00`-`\x7F`)
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

# `value` holds one raw PubMed or MIMIC-III sentence
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making the data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12
|
bionlp
| 2021-09-24T07:46:11Z | 11,721 | 20 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"bluebert",
"en",
"dataset:PubMed",
"dataset:MIMIC-III",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- bert
- bluebert
license: cc0-1.0
datasets:
- PubMed
- MIMIC-III
---
# BlueBert-Base, Uncased, PubMed and MIMIC-III
## Model description
A BERT model pre-trained on PubMed abstracts and clinical notes ([MIMIC-III](https://mimic.physionet.org/)).
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-base-uncased
## Training procedure
* lowercasing the text
* removing non-ASCII characters (anything outside `\x00`-`\x7F`)
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

# `value` holds one raw PubMed or MIMIC-III sentence
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making the data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
zgotter/bert-base-finetuned-ynat
|
zgotter
| 2021-09-24T02:00:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: bert-base-finetuned-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metrics:
- name: F1
type: f1
value: 0.8669116640755216
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3710
- F1: 0.8669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4223 | 0.8549 |
| No log | 2.0 | 358 | 0.3710 | 0.8669 |
| 0.2576 | 3.0 | 537 | 0.3891 | 0.8631 |
| 0.2576 | 4.0 | 716 | 0.3968 | 0.8612 |
| 0.2576 | 5.0 | 895 | 0.4044 | 0.8617 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/jschlatt
|
huggingtweets
| 2021-09-23T19:13:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/jschlatt/1632424426297/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1104281298967904257/KuDWZQfF_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Schlatt</div>
<div style="text-align: center; font-size: 14px;">@jschlatt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Schlatt.
| Data | Schlatt |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 3 |
| Short tweets | 1207 |
| Tweets kept | 2040 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ad6fl7e4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jschlatt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24kxtuwd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24kxtuwd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jschlatt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
piotr-rybak/poleval2021-task4-herbert-large-encoder
|
piotr-rybak
| 2021-09-23T17:34:47Z | 103 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# poleval2021-task4-herbert-large-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('piotr-rybak/poleval2021-task4-herbert-large-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=piotr-rybak/poleval2021-task4-herbert-large-encoder)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6098 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3049,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 1024, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
valhalla/distilt5-qg-hl-12-6
|
valhalla
| 2021-09-23T16:42:49Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-generation",
"distilt5",
"distilt5-qg",
"dataset:squad",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: <hl> 42 <hl> is the answer to life, the universe and everything. </s>
- text: Python is a programming language. It is developed by <hl> Guido Van Rossum
<hl>. </s>
- text: Although <hl> practicality <hl> beats purity </s>
license: mit
---
## DistilT5 for question-generation
This is a distilled version of the [t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
The model is distilled using the **No Teacher Distillation** method proposed by Huggingface, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-base-qg-hl` and fine-tune further on the same data. The following table lists other distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
You can play with the model using the inference API, just highlight the answer spans with `<hl>` tokens. For example
`<hl> 42 <hl> is the answer to life, the universe and everything.`
For more details see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/distilt5-qg-hl-12-6")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
```
|
toloka/t5-large-for-text-aggregation
|
toloka
| 2021-09-23T16:40:58Z | 16 | 7 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text aggregation",
"summarization",
"en",
"dataset:toloka/CrowdSpeech",
"arxiv:1910.10683",
"arxiv:2107.01091",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text aggregation
- summarization
license: apache-2.0
datasets:
- toloka/CrowdSpeech
metrics:
- wer
---
# T5 Large for Text Aggregation
## Model description
This is a T5 Large fine-tuned for crowdsourced text aggregation tasks. The model takes multiple performers' responses and yields a single aggregated response. This approach was introduced for the first time during [VLDB 2021 Crowd Science Challenge](https://crowdscience.ai/challenges/vldb21) and originally implemented at the second-place competitor's [GitHub](https://github.com/A1exRey/VLDB2021_workshop_t5). The [paper](http://ceur-ws.org/Vol-2932/short2.pdf) describing this model was presented at the [2nd Crowd Science Workshop](https://crowdscience.ai/conference_events/vldb21).
## How to use
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoConfig
mname = "toloka/t5-large-for-text-aggregation"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelForSeq2SeqLM.from_pretrained(mname)
input = "samplee text | sampl text | sample textt"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # sample text
```
## Training data
Pretrained weights were taken from the [original](https://huggingface.co/t5-large) T5 Large model by Google. For more details on the T5 architecture and training procedure see https://arxiv.org/abs/1910.10683
Model was fine-tuned on `train-clean`, `dev-clean` and `dev-other` parts of the [CrowdSpeech](https://huggingface.co/datasets/toloka/CrowdSpeech) dataset that was introduced in [our paper](https://openreview.net/forum?id=3_hgF1NAXU7&referrer=%5BAuthor%20Console%5D(%2Fgroup%3Fid%3DNeurIPS.cc%2F2021%2FTrack%2FDatasets_and_Benchmarks%2FRound1%2FAuthors%23your-submissions).
## Training procedure
The model was fine-tuned for eight epochs directly following the HuggingFace summarization training [example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization).
## Eval results
Dataset | Split | WER
-----------|------------|----------
CrowdSpeech| test-clean | 4.99
CrowdSpeech| test-other | 10.61
### BibTeX entry and citation info
```bibtex
@inproceedings{Pletenev:21,
author = {Pletenev, Sergey},
title = {{Noisy Text Sequences Aggregation as a Summarization Subtask}},
year = {2021},
booktitle = {Proceedings of the 2nd Crowd Science Workshop: Trust, Ethics, and Excellence in Crowdsourced Data Management at Scale},
pages = {15--20},
address = {Copenhagen, Denmark},
issn = {1613-0073},
url = {http://ceur-ws.org/Vol-2932/short2.pdf},
language = {english},
}
```
```bibtex
@misc{pavlichenko2021vox,
title={Vox Populi, Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription},
author={Nikita Pavlichenko and Ivan Stelmakh and Dmitry Ustalov},
year={2021},
eprint={2107.01091},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
|
tanmoyio/wav2vec2-large-xlsr-bengali
|
tanmoyio
| 2021-09-23T16:39:27Z | 1,078 | 3 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: Bengali
datasets:
- OpenSLR
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: cc-by-sa-4.0
model-index:
- name: XLSR Wav2Vec2 Bengali by Tanmoy Sarkar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: ben
metrics:
- name: Test WER
type: wer
value: 88.58
---
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The dataset must be downloaded from [this website](https://www.openslr.org/53/) and preprocessed accordingly. For example, 1,250 test samples have been chosen here.
```python
import pandas as pd
test_dataset = pd.read_csv('utt_spk_text.tsv', sep='\t', header=None)[60000:61250]
test_dataset.columns = ["audio_path", "__", "label"]
test_dataset = test_dataset.drop("__", axis=1)
def add_file_path(text):
path = "data/" + text[:2] + "/" + text + '.flac'
return path
test_dataset['audio_path'] = test_dataset['audio_path'].map(lambda x: add_file_path(x))
```
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["label"][:2])
```
## Evaluation
The model can be evaluated as follows on the Bengali test data of OpenSLR.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from datasets import load_metric
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'  # assumed punctuation set; not defined in the original card
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["label"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 88.58 %
## Training
The script used for training can be found at [Bengali ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1Bkc5C_cJV9BeS0FD0MuHyayl8hqcbdRZ?usp=sharing)
|
skt/kogpt2-base-v2
|
skt
| 2021-09-23T16:29:28Z | 23,482 | 45 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ko
tags:
- gpt2
license: cc-by-nc-sa-4.0
---
For more details: https://github.com/SKT-AI/KoGPT2
|
skt/ko-gpt-trinity-1.2B-v0.5
|
skt
| 2021-09-23T16:29:25Z | 628 | 43 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt3",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ko
tags:
- gpt3
license: cc-by-nc-sa-4.0
---
# Ko-GPT-Trinity 1.2B (v0.5)
## Model Description
Ko-GPT-Trinity 1.2B is a transformer model designed using SK telecom's replication of the GPT-3 architecture. Ko-GPT-Trinity refers to the class of models, while 1.2B represents the number of parameters of this particular pre-trained model.
### Model date
May 2021
### Model type
Language model
### Model version
1.2 billion parameter model
## Training data
Ko-GPT-Trinity 1.2B was trained on Ko-DAT, a large scale curated dataset created by SK telecom for the purpose of training this model.
## Training procedure
This model was trained on ko-DAT for 35 billion tokens over 72,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
The model learns an inner representation of the Korean language that can then be used to extract features useful for downstream tasks. The model excels at generating texts from a prompt, which was the pre-training objective.
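A minimal prompt-based generation sketch (the prompt and decoding settings below are illustrative, not from the original release):
```python
from transformers import pipeline

# minimal generation sketch; prompt and decoding settings are illustrative
generator = pipeline("text-generation", model="skt/ko-gpt-trinity-1.2B-v0.5")
print(generator("인공지능은", max_length=64, do_sample=True, top_p=0.95))
```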
### Limitations and Biases
Ko-GPT-Trinity was trained on Ko-DAT, a dataset known to contain profanity, lewd, politically charged, and otherwise abrasive language. As such, Ko-GPT-Trinity may produce socially unacceptable text. As with all language models, it is hard to predict in advance how Ko-GPT-Trinity will respond to particular prompts and offensive content may occur without warning.
Ko-GPT-Trinity was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, this is an active area of ongoing research. Known limitations include the following:
Predominantly Korean: Ko-GPT-Trinity was trained largely on text in the Korean language, and is best suited for classifying, searching, summarizing, or generating such text. Ko-GPT-Trinity will by default perform worse on inputs that are different from the data distribution it is trained on, including non-Korean languages as well as specific dialects of Korean that are not as well-represented in training data.
Interpretability & predictability: the capacity to interpret or predict how Ko-GPT-Trinity will behave is very limited, a limitation common to most deep learning systems, especially in models of this scale.
High variance on novel inputs: Ko-GPT-Trinity is not necessarily well-calibrated in its predictions on novel inputs. This can be observed in the much higher variance in its performance as compared to that of humans on standard benchmarks.
## Eval results
### Reasoning
| Model and Size | BoolQ | CoPA | WiC |
| ----------------------- | --------- | ---------- | --------- |
| **Ko-GPT-Trinity 1.2B** | **71.77** | **68.66** | **78.73** |
| KoElectra-base | 65.17 | 67.56 | 77.27 |
| KoBERT-base | 55.97 | 62.24 | 77.60 |
## Where to send questions or comments about the model
Please contact [Eric](mailto:eric.davis@sktair.com)
|
persiannlp/wikibert-base-parsinlu-multiple-choice
|
persiannlp
| 2021-09-23T16:20:58Z | 67 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"multiple-choice",
"wikibert",
"persian",
"farsi",
"text-classification",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- wikibert
- persian
- farsi
pipeline_tag: text-classification
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a wikibert-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer
model_name = "persiannlp/wikibert-base-parsinlu-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config)
def run_model(question: str, candicates: List[str]):
assert len(candicates) == 4, "you need four candidates"
choices_inputs = []
for c in candicates:
text_a = "" # empty context
text_b = question + " " + c
inputs = tokenizer(
text_a,
text_b,
add_special_tokens=True,
max_length=128,
padding="max_length",
truncation=True,
return_overflowing_tokens=True,
)
choices_inputs.append(inputs)
input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs])
output = model(input_ids=input_ids)
print(output)
return output
run_model(question="وسیع ترین کشور جهان کدام است؟", candicates=["آمریکا", "کانادا", "روسیه", "چین"])
run_model(question="طامع یعنی ؟", candicates=["آزمند", "خوش شانس", "محتاج", "مطمئن"])
run_model(
question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ",
candicates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"])
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-translation_en_fa
|
persiannlp
| 2021-09-23T16:20:48Z | 705 | 3 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['برای الله، یعنی چرنده و سوزان دنیا، تحسین کنید']
['خودش را در سفید پوسته می کند و به صورت عشق برادرانه']
['او از تمام بلاگرها و سازمان هایی که حمایتشان را نشان می داد']
['در طول ماه آوریل و دسامبر در والی فیودورونا نزدیک بیکر']
['من می خواهم در مورد شبکه اجتماعی تحقیقات علوم کامپیوتری را دن']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-snli-entailment
|
persiannlp
| 2021-09-23T16:20:43Z | 52 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"entailment",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:snli",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- snli
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size="small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-snli-entailment"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(premise, hypothesis, **generator_args):
input_ids = tokenizer.encode(f"{premise}<sep>{hypothesis}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
run_model(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
run_model(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-qqp-query-paraphrasing
|
persiannlp
| 2021-09-23T16:20:38Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"query-paraphrasing",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:qqp",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخیص سوالات هممعنی)
This is a model for detection of paraphrased queries.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-small-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-opus-translation_fa_en
|
persiannlp
| 2021-09-23T16:20:36Z | 66,934 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-arc-comqa-obqa-multiple-choice
|
persiannlp
| 2021-09-23T16:20:31Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-translation_en_fa
|
persiannlp
| 2021-09-23T16:20:29Z | 580 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['خدا را شکر که آفریننده و نگهدار جهان است.']
['خود را با کفن سفید می پوشد و به شکل برادرانه ای در']
['او از همه ی وبلاگ نویسان و سازمان هایی که از او حمایت کردند']
['مسابقات بین آوریل و دسامبر در فرودگاه والی عبدین نزدیک بی']
['من می خواهم پایان نامه دکتری را در رشته علوم کامپیوتر در']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-sentiment-analysis
|
persiannlp
| 2021-09-23T16:20:21Z | 25 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"sentiment",
"sentiment-analysis",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5-based model for sentiment analysis.
Here is an example of how you can run this model:
```python
import torch
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
import numpy as np
model_name = "persiannlp/mt5-large-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)


def model_predict(text_a, text_b):
    # Note: this helper assumes a classification-style head and a predefined `labels` list;
    # it is not used by the generation-based examples below.
    features = tokenizer([(text_a, text_b)], padding="max_length", truncation=True, return_tensors="pt")
    output = model(**features)
    logits = output[0]
    probs = torch.nn.functional.softmax(logits, dim=1).tolist()
    idx = np.argmax(np.array(probs))
    print(labels[idx], probs)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-opus-translation_fa_en
|
persiannlp
| 2021-09-23T16:20:17Z | 184 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-multiple-choice
|
persiannlp
| 2021-09-23T16:20:14Z | 63 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-squad-reading-comprehension
|
persiannlp
| 2021-09-23T16:20:07Z | 201 | 4 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"reading-comprehension",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:squad",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- reading-comprehension
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- squad
metrics:
- f1
---
# Reading Comprehension (مدل برای پاسخ به درک مطلب)
This is an mT5-based model for reading comprehension.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-squad-reading-comprehension"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(paragraph, question, **generator_args):
input_ids = tokenizer.encode(question + "\n" + paragraph, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک شی را دارای تقارن مینامیم زمانی که ان شی را بتوان به دو یا چند قسمت تقسیم کرد که آنها قسمتی از یک طرح سازمان یافته باشند یعنی بر روی شکل تنها جابجایی و چرخش و بازتاب و تجانس انجام شود و در اصل شکل تغییری به وجود نیایید آنگاه ان را تقارن مینامیم مرکز تقارن:اگر در یک شکل نقطهای مانندA وجود داشته باشد که هر نقطهٔ روی شکل (محیط) نسبت به نقطه یAمتقارن یک نقطهٔ دیگر شکل (محیط) باشد، نقطهٔ Aمرکز تقارن است. یعنی هر نقطه روی شکل باید متقارنی داشته باشد شکلهای که منتظم هستند و زوج ضلع دارند دارای مرکز تقارند ولی شکلهای فرد ضلعی منتظم مرکز تقارن ندارند. متوازیالأضلاع و دایره یک مرکز تقارن دارند ممکن است یک شکل خط تقارن نداشته باشد ولی مرکز تقارن داشته باشد. (منبع:س. گ)",
"اشکالی که یک مرکز تقارن دارند"
)
run_model(
"شُتُر یا اُشتر را که در زبان پهلوی (ushtar)[نیازمند منبع] میگفتند حیوانی است نیرومند و تنومند با توش و توان بالا از خانواده شتران؛ شبه نشخوارکننده و با دست و گردنی دراز. بر پشت خود یک یا دو کوهان دارد که ساختارش از پیه و چربی است. در دین اسلام گوشت او حلال است. اما ذبح آن با دیگر جانوران حلال گوشت متفاوت است و آن را نحر (بریدن گلو) میکنند و اگر سر آن را مانند گوسفند پیش از نحر ببرند گوشت آن حلال نیست. شیرش نیز نوشیده میشود ولی بیشتر کاربرد بارکشی دارد. پشم و پوستش نیز برای ریسندگی و پارچهبافی و کفشدوزی کاربرد دارد. گونههای دیگری از شتران نیز در آمریکای جنوبی زندگی میکنند، به نامهای لاما، آلپاکا، گواناکو که دارای کوهان نیستند. شتر ویژگیهای خاصّی دارد که مهمترین آنها تحمّل شرایط سخت صحرا و دماهای گوناگون و بهویژه گرمای شدید تابستان و کمبود آب و علوفه است. ترکیب جسمانی شتر با دیگر جانوران اختلاف زیادی دارد، و این اختلاف انگیزه شده که شتر در درازا روزهای سال در بیابان زندگی کند و از بوتهها و درختچههای گوناگون صحرایی و کویری و حتی از بوتههای شور و خاردار تغذیه کند. عربها از زمانهای بسیار دور از شتر استفاده کرده و میکنند. آنها به این حیوان اهلی لقب کشتی صحرا (به عربی: سفینةالصحراء) دادهاند.",
"غذای شترچیست؟"
)
run_model(
"""حسین میرزایی میگوید مرحله اول پرداخت وام حمایتی کرونا به همگی خانوارهای یارانهبگیر متقاضی تکمیل شده است و حال چهار میلیون خانوار که به عنوان "اقشار خاص" و "آسیبپذیر" شناسایی شدند، میتوانند برای یک میلیون تومان وام دیگر درخواست بدهند. آقای میرزایی گفته خانوارهای "آسیبپذیر" که شرایط گرفتن وام یک میلیونی اضافی را دارند با پیامک از این امکان مطلع شدهاند. بنا به گزارشهای رسمی با شیوع کرونا در ایران یک میلیون نفر بیکار شدهاند و درآمد کارکنان مشاغل غیررسمی نیز ضربه قابل توجهی خورده است. ارزش ریال هم در هفتههای اخیر در برابر ارزهای خارجی سقوط کرده است. اقتصاد ایران پیش از شیوع کرونا نیز با مشکلات مزمن رکود، تورم، تحریم و فساد روبرو بود.""",
"وام یارانه به چه کسانی میدهند؟"
)
run_model(
"در ۲۲ ژوئن ۱۹۴۱ نیروهای محور در عملیات بارباروسا حمله سنگینی به اتحاد شوروی کرده و یکی از بزرگترین نبردهای زمینی تاریخ بشر را رقم زدند. همچنین جبهه شرقی باعث به دام افتادن نیروهای محور شد و بیش از همه ارتش آلمان نازی را درگیر جنگ فرسایشی کرد. در دسامبر ۱۹۴۱ ژاپن یک در عملیاتی ناگهانی با نام نبرد پرل هاربر به پایگاه دریایی ایالات متحده آمریکا حمله کرد. به دنبال این اتفاق آمریکا نیز بلافاصله علیه ژاپن اعلان جنگ کرد که با حمایت بریتانیا همراه شد. پس از آن متحدین (نیروهای محور در اروپا) نیز با اتحاد ژاپن علیه آمریکا اعلام جنگ کردند. دستآوردهای ژاپن در یورش به آمریکا باعث ایجاد این احساس در آسیا شد که آسیا از تسلط غرب خارج شدهاست از این رو بسیاری از ارتشهای شکست خورده با آنها همراهی کردند.",
"چرا امریکا وارد جنگ جهانی دوم شد؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-opus-translation_fa_en
|
persiannlp
| 2021-09-23T16:19:57Z | 6,058 | 7 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
which should give outputs along the following lines (sample generations; exact outputs may vary):
```
['the admiration of God, which is the Lord of the world.']
['At the Ford Park, the Crawford Park stands on a vase;']
['He thanked all the bloggers, the organizations, and the people who supported him']
['similar to the year 2001, the economy of ammonia in the United States in the']
['I want to follow the computer experts on social networks, what is the unsolved problem in']
```
Another set of sample outputs:
```
['Adoration of God, the Lord of the world.']
['At the High End of the Park, Conrad stands on a vase preaching;']
['She thanked all the bloggers, organizations, and men who had supported her.']
['In 2000, the lack of water ammonia in the United States was almost']
['I want to follow the computer science doctorate on social networks. What is the unsolved challenge']
```
And another:
```
['the praise of God, the Lord of the world.']
['At the Hyde Park Corner, Carpenter is preaching on a vase;']
['He thanked all the bloggers, organizations, and people who had supported him.']
['Similarly in 2001, the production of waterless ammonia in the United States was']
['I want to pursue my degree in Computer Science on social networks, what is the']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-multiple-choice
|
persiannlp
| 2021-09-23T16:19:55Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-arc-comqa-obqa-multiple-choice
|
persiannlp
| 2021-09-23T16:19:52Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mbert-base-parsinlu-multiple-choice
|
persiannlp
| 2021-09-23T16:19:49Z | 70 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"multiple-choice",
"mbert",
"persian",
"farsi",
"text-classification",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mbert
- persian
- farsi
pipeline_tag: text-classification
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mBERT-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer
model_name = "persiannlp/mbert-base-parsinlu-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config)
def run_model(question: str, candidates: List[str]):
    assert len(candidates) == 4, "you need four candidates"
    choices_inputs = []
    for c in candidates:
        text_a = ""  # empty context
        text_b = question + " " + c
        inputs = tokenizer(
            text_a,
            text_b,
            add_special_tokens=True,
            max_length=128,
            padding="max_length",
            truncation=True,
            return_overflowing_tokens=True,
        )
        choices_inputs.append(inputs)
    # Multiple-choice models expect input of shape (batch, num_choices, seq_len).
    input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs]).unsqueeze(0)
    output = model(input_ids=input_ids)
    print(output)
    return output


run_model(question="وسیع ترین کشور جهان کدام است؟", candidates=["آمریکا", "کانادا", "روسیه", "چین"])
run_model(question="طامع یعنی ؟", candidates=["آزمند", "خوش شانس", "محتاج", "مطمئن"])
run_model(
    question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ",
    candidates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"])
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pere/norwegian-t5-base-NCC-nb-nn
|
pere
| 2021-09-23T16:19:35Z | 60 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"seq2seq",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base Model Trained on the NCC 🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be fine-tuned on a specific task before it can be used for anything.
The following settings were used in training:
```bash
./run_t5_mlm_flax_streaming.py \
--output_dir="./" \
--model_type="t5" \
--config_name="./" \
--tokenizer_name="./" \
--dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--learning_rate="8e-3" \
--warmup_steps="0" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_epochs="5" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="500" \
--num_train_steps="1000000" \
--num_eval_samples="5000" \
--save_steps="5000" \
--eval_steps="5000" \
--preprocessing_num_workers 96 \
--adafactor \
--push_to_hub
```
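To continue from this checkpoint in PyTorch, the weights can be loaded roughly as sketched below. This is a hedged sketch: it assumes the repository ships Flax weights only (hence `from_flax=True`, which requires `flax` to be installed) and that tokenizer files are present in the repository.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "pere/norwegian-t5-base-NCC-nb-nn"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# from_flax=True converts the Flax checkpoint to PyTorch on the fly (assumption: no PyTorch weights in the repo).
model = T5ForConditionalGeneration.from_pretrained(model_name, from_flax=True)
# This is a raw pretrained checkpoint; fine-tune it on a downstream task before use.
```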
|
pere/norwegian-gpt2
|
pere
| 2021-09-23T16:19:24Z | 196 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"norwegian",
"GPT2",
"casual language modeling",
"no",
"dataset:oscar",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- norwegian
- GPT2
- casual language modeling
datasets:
- oscar
---
# Norwegian GPT-2 - Oscar
## Description
This is a sample reference model trained for one day on a TPU v3-8, using only the OSCAR corpus. It is a model pretrained on Norwegian with a causal language modeling (CLM) objective.
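A minimal generation sketch with the `transformers` pipeline (the prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pere/norwegian-gpt2")
# "Det var en gang" = "Once upon a time"; sampling settings are illustrative.
print(generator("Det var en gang", max_length=30, do_sample=True, num_return_sequences=1))
```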
|
pere/nb-nn-dev2
|
pere
| 2021-09-23T16:19:18Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"translation",
"no",
"dataset:oscar",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---
# Norwegian T5 - Translation Bokmål Nynorsk - Development
## Description
This is the development version of the Bokmål-Nynorsk translator. If you want something stable, please use [this version](https://huggingface.co/pere/nb-nn-translation/) instead.
Here is an example of how to use the model from Python
```python
# Import libraries
from transformers import T5ForConditionalGeneration, AutoTokenizer
model = T5ForConditionalGeneration.from_pretrained('pere/nb-nn-dev',from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('pere/nb-nn-dev')
#Encode the text
text = "Hun vil ikke gi bort sine personlige data."
inputs = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True)
#Decode and print the result
print(tokenizer.decode(outputs[0]))
```
Or if you like to use the pipeline instead
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-dev')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
|
osanseviero/corenlp_english-extra
|
osanseviero
| 2021-09-23T16:16:44Z | 0 | 0 | null |
[
"corenlp",
"en",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- en
license: gpl
---
# Core NLP model for en
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
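From Python, a local CoreNLP server can be driven through the `stanza` wrapper. This is a hedged sketch: it assumes CoreNLP has been downloaded separately and that the `CORENLP_HOME` environment variable points at the installation.
```python
from stanza.server import CoreNLPClient

# Starts (and later shuts down) a local CoreNLP server; requires CORENLP_HOME to be set.
with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "ner"], memory="4G", be_quiet=True) as client:
    ann = client.annotate("Stanford University is located in California.")
    for sentence in ann.sentence:
        for token in sentence.token:
            print(token.word, token.pos, token.ner)
```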
|
osanseviero/corenlp_english-default
|
osanseviero
| 2021-09-23T16:16:41Z | 0 | 0 | null |
[
"corenlp",
"en",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- en
license: gpl
---
# Core NLP model for en
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/corenlp_chinese
|
osanseviero
| 2021-09-23T16:16:39Z | 0 | 1 | null |
[
"corenlp",
"ch",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- ch
license: gpl
---
# Core NLP model for ch
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
mpariente/DPRNNTasNet-ks2_WHAM_sepclean
|
mpariente
| 2021-09-23T16:12:22Z | 252 | 9 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"DPRNNTasNet",
"audio-to-audio",
"dataset:wham",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- DPRNNTasNet
- audio-to-audio
datasets:
- wham
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `mpariente/DPRNNTasNet-ks2_WHAM_sepclean`
Imported from [Zenodo](https://zenodo.org/record/3862942)
### Description:
This model was trained by Manuel Pariente
using the wham/DPRNN recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the WHAM! dataset.
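A minimal separation sketch with the Asteroid API (assuming a recent `asteroid` release that can load models from the Hugging Face Hub; the file name is a placeholder):
```python
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("mpariente/DPRNNTasNet-ks2_WHAM_sepclean")
# Separates a two-speaker 8 kHz mixture; estimate files are written next to the input wav.
model.separate("mixture.wav")
```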
### Training config:
```yaml
data:
mode: min
nondefault_nsrc: None
sample_rate: 8000
segment: 2.0
task: sep_clean
train_dir: data/wav8k/min/tr
valid_dir: data/wav8k/min/cv
filterbank:
kernel_size: 2
n_filters: 64
stride: 1
main_args:
exp_dir: exp/train_dprnn_new/
gpus: -1
help: None
masknet:
bidirectional: True
bn_chan: 128
chunk_size: 250
dropout: 0
hid_size: 128
hop_size: 125
in_chan: 64
mask_act: sigmoid
n_repeats: 6
n_src: 2
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1e-05
positional arguments:
training:
batch_size: 3
early_stop: True
epochs: 200
gradient_clipping: 5
half_lr: True
num_workers: 8
```
### Results:
```yaml
si_sdr: 19.316743490695334
si_sdr_imp: 19.317895273889842
sdr: 19.68085347190952
sdr_imp: 19.5298092932871
sir: 30.362213998701232
sir_imp: 30.21116982007881
sar: 20.15553251343315
sar_imp: -129.02091762351188
stoi: 0.97772664309074
stoi_imp: 0.23968091518217424
```
### License notice:
This work "DPRNNTasNet-ks2_WHAM_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"DPRNNTasNet-ks2_WHAM_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente.
|
mpariente/ConvTasNet_Libri3Mix_sepnoisy
|
mpariente
| 2021-09-23T16:12:18Z | 17 | 0 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:LibriMix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- LibriMix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model
Imported from this Zenodo [model page](https://zenodo.org/record/4020529).
## Description:
This model was trained by Takhir Mirzaev using the Librimix/ConvTasNet recipe in Asteroid.
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
## Training config:
```yaml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 4
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
## Results:
```yaml
si_sdr: 6.824750632456865
si_sdr_imp: 11.234803761803752
sdr: 7.715799858488098
sdr_imp: 11.778681386239114
sir: 16.442141130818637
sir_imp: 19.527535070051055
sar: 8.757864265661263
sar_imp: -0.15657258049670303
stoi: 0.7854554136619554
stoi_imp: 0.22267957718163015
```
## License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by
[Vassil Panayotov](https://github.com/vdp),
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente.
|
mpariente/ConvTasNet_Libri1Mix_enhsingle_8k
|
mpariente
| 2021-09-23T16:12:15Z | 19 | 1 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"dataset:LibriMix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- ConvTasNet
datasets:
- LibriMix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model
Imported from this Zenodo [model page](https://zenodo.org/record/3970768).
## Description:
This model was trained by Brij Mohan using the Librimix/ConvTasNet recipe in Asteroid.
It was trained on the `enh_single` task of the Libri1Mix dataset.
## Training config:
```yaml
data:
n_src: 1
sample_rate: 8000
segment: 3
task: enh_single
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
```
## Results:
```yaml
si_sdr: 14.783675142685572
si_sdr_imp: 11.464625198953202
sdr: 15.497505907983102
sdr_imp: 12.07230150154914
sar: 15.497505907983102
sar_imp: 12.07230150154914
stoi: 0.9270030254700518
stoi_imp: 0.1320547197597893
```
## License notice:
This work "ConvTasNet_Libri1Mix_enhsingle_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by
[Vassil Panayotov](https://github.com/vdp),
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
"ConvTasNet_Libri1Mix_enhsingle_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente.
|
JorisCos/DCUNet_Libri1Mix_enhsingle_16k
|
JorisCos
| 2021-09-23T15:49:15Z | 631 | 5 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"DCUNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- DCUNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DCUNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
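A minimal enhancement sketch with the Asteroid API (a hedged example: file names are placeholders and the model expects 16 kHz mono audio):
```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/DCUNet_Libri1Mix_enhsingle_16k")
noisy, rate = sf.read("noisy_speech.wav", dtype="float32")  # placeholder 16 kHz mono file
with torch.no_grad():
    enhanced = model(torch.from_numpy(noisy).unsqueeze(0))  # (batch, time) -> (batch, n_src, time)
sf.write("enhanced_speech.wav", enhanced.squeeze().cpu().numpy(), rate)
```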
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_n_filters: 1024
stft_kernel_size: 1024
stft_stride: 256
masknet:
architecture: Large-DCUNet-20
fix_length_mode: pad
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set:
```yml
si_sdr: 13.154035391645971
si_sdr_imp: 9.704254085786271
sdr: 13.568058873121435
sdr_imp: 10.065396073908367
sar: 13.568058873121435
sar_imp: 10.065396073908367
stoi: 0.9199373340235417
stoi_imp: 0.12401751048300132
```
License notice:
This work "DCUNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCUNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/ConvTasNet_Libri3Mix_sepclean_8k
|
JorisCos
| 2021-09-23T15:49:06Z | 27 | 0 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set:
```yaml
si_sdr: 8.581797049575108
si_sdr_imp: 11.977037288467368
sdr: 9.305885208641385
sdr_imp: 12.3943409734845
sir: 16.42030534048559
sir_imp: 19.508759460400984
sar: 10.641943911079238
sar_imp: -56.4345187842095
stoi: 0.8365148408724333
stoi_imp: 0.24401766199806396
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris.
|
JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k
|
JorisCos
| 2021-09-23T15:49:01Z | 10 | 1 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k`
Imported from [Zenodo](https://zenodo.org/record/3874420#.X9I6NcLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
Results:
On Libri2Mix min test set:
```yml
si_sdr: 9.944424856077259
si_sdr_imp: 11.939395359731192
sdr: 10.701526190782072
sdr_imp: 12.481757547845662
sir: 22.633644975545575
sir_imp: 22.45666740833025
sar: 11.131644100944868
sar_imp: 4.248489589311784
stoi: 0.852048619949357
stoi_imp: 0.2071994899565506
```
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under A[Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/ConvTasNet_Libri2Mix_sepclean_16k
|
JorisCos
| 2021-09-23T15:48:54Z | 2,594 | 2 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
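A minimal usage sketch with the Asteroid API (file name and output directory are placeholders):
```python
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri2Mix_sepclean_16k")
# Writes one estimate wav per speaker (e.g. mixture_16k_est1.wav, mixture_16k_est2.wav) into `separated/`.
model.separate("mixture_16k.wav", output_dir="separated", force_overwrite=True)
```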
Training config:
```yaml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri2Mix min test set:
```yaml
si_sdr: 15.243671356901526
si_sdr_imp: 15.243034178473609
sdr: 15.668108919568112
sdr_imp: 15.578229918028036
sir: 25.295100756629957
sir_imp: 25.205219921301754
sar: 16.307682590197313
sar_imp: -51.64989963759405
stoi: 0.9394951175291422
stoi_imp: 0.22640192740016568
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris.
|
JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k
|
JorisCos
| 2021-09-23T15:48:51Z | 89 | 3 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set:
```yml
si_sdr: 14.743051006476085
si_sdr_imp: 11.293269700616385
sdr: 15.300522933671061
sdr_imp: 11.797860134458015
sir: Infinity
sir_imp: NaN
sar: 15.300522933671061
sar_imp: 11.797860134458015
stoi: 0.9310514162434267
stoi_imp: 0.13513159270288563
```
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
groadabike/ConvTasNet_DAMP-VSEP_enhboth
|
groadabike
| 2021-09-23T13:57:35Z | 4 | 0 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:DAMP-VSEP",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- DAMP-VSEP
license: cc-by-sa-4.0
---
## Asteroid model `groadabike/ConvTasNet_DAMP-VSEP_enhboth`
Imported from [Zenodo](https://zenodo.org/record/3994193)
### Description:
This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset.
### Training config:
```yaml
data:
channels: 1
n_src: 2
root_path: data
sample_rate: 16000
samples_per_track: 10
segment: 3.0
task: enh_both
filterbank:
kernel_size: 20
n_filters: 256
stride: 10
main_args:
exp_dir: exp/train_convtasnet
help: None
masknet:
bn_chan: 256
conv_kernel_size: 3
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 4
n_src: 2
norm_type: gLN
skip_chan: 256
optim:
lr: 0.0003
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 12
early_stop: True
epochs: 50
half_lr: True
num_workers: 12
```
### Results:
```yaml
si_sdr: 14.018196157142519
si_sdr_imp: 14.017103133809577
sdr: 14.498517291333885
sdr_imp: 14.463389151567865
sir: 24.149634529133372
sir_imp: 24.11450638936735
sar: 15.338597389045935
sar_imp: -137.30634122401517
stoi: 0.7639416744417206
stoi_imp: 0.1843383526963759
```
### License notice:
This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
|
superb/hubert__508944ac
|
superb
| 2021-09-23T13:56:17Z | 0 | 0 | null |
[
"tensorboard",
"library:s3prl",
"benchmark:superb",
"type:model",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
datasets:
- superb
tags:
- library:s3prl
- benchmark:superb
- type:model
---
# Fine-tuned s3prl model
Upstream Model: hubert
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
DDSC/roberta-base-scandinavian
|
DDSC
| 2021-09-23T13:54:15Z | 71 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"scandinavian",
"da",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: da
license: cc-by-4.0
tags:
- scandinavian
- roberta
pipeline_tag: fill-mask
widget:
- text: På biblioteket kan du låne en <mask>.
---
# Scandinavian Roberta Base - MC4
## Description
This is a sample reference model for Flax/JAX training, trained only on the mC4 dataset for roughly three days on a TPU v3-8. Training procedure...
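A minimal fill-mask sketch, reusing the widget sentence from this card:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="DDSC/roberta-base-scandinavian")
# Danish: "At the library you can borrow a <mask>."
print(unmasker("På biblioteket kan du låne en <mask>."))
```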
|
flax-community/nordic-roberta-wiki
|
flax-community
| 2021-09-23T13:53:50Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"feature-extraction",
"swedish",
"fill-mask",
"sv",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: sv
license: cc-by-4.0
tags:
- swedish
- roberta
pipeline_tag: fill-mask
widget:
- text: Meningen med livet är <mask>.
---
# Nordic Roberta Wikipedia
## Description
Nordic RoBERTa model trained on the Swedish, Danish, and Norwegian Wikipedia.
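A minimal masked-language-modelling sketch (assumes `transformers` and `torch`; the sentence mirrors the widget example):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "flax-community/nordic-roberta-wiki"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Swedish example mirroring the widget text above.
inputs = tokenizer("Meningen med livet är <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the top-5 candidate tokens for the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```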
## Evaluation
### Named entity recognition (Danish)
I fine-tuned each model for 3 epochs on DaNE, repeated this 5 times per model, and calculated 95% confidence intervals for the mean scores. Here are the results:
- xlm-roberta-base: 88.01 ± 0.43
- flax-community/nordic-roberta-wiki: 85.75 ± 0.69 (this model)
- Maltehb/danish-bert-botxo: 85.38 ± 0.55
- flax-community/roberta-base-danish: 80.14 ± 1.47
- flax-community/roberta-base-scandinavian: 78.03 ± 3.02
- Maltehb/-l-ctra-danish-electra-small-cased: 57.87 ± 3.19
- NbAiLab/nb-bert-base: 30.24 ± 1.21
- Randomly initialised RoBERTa model: 19.79 ± 2.00
### Sentiment analysis (Danish)
Here are the results on the test set, where each model has been trained 5 times and "±" denotes a 95% confidence interval of the mean score:
- Maltehb/danish-bert-botxo: 65.19 ± 0.53
- NbAiLab/nb-bert-base: 63.80 ± 0.77
- xlm-roberta-base: 63.55 ± 1.59
- flax-community/nordic-roberta-wiki: 56.46 ± 1.77
- flax-community/roberta-base-danish: 54.73 ± 8.96
- flax-community/roberta-base-scandinavian: 44.28 ± 9.21
- Maltehb/-l-ctra-danish-electra-small-cased: 47.78 ± 12.65
- Randomly initialised RoBERTa model: 36.96 ± 1.02
- Maltehb/roberta-base-scandinavian: 33.65 ± 8.32
## Model series
This model is part of a series of models trained on TPUs with Flax/JAX during the Hugging Face Flax/JAX community week.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
## Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
|
csae8092/de_RTA_NER
|
csae8092
| 2021-09-23T13:46:37Z | 6 | 0 |
spacy
|
[
"spacy",
"token-classification",
"de",
"license:cc-by-nc-4.0",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- de
license: cc-by-nc-4.0
model-index:
- name: de_RTA_NER
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8630136986
- name: NER Recall
type: recall
value: 0.8743253662
- name: NER F Score
type: f_score
value: 0.8686327078
---
Regensburger Reichstag von 1576
| Feature | Description |
| --- | --- |
| **Name** | `de_RTA_NER` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.0,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `https://creativecommons.org/licenses/by-nc/4.0/` |
| **Author** | [n/a](https://reichstagsakten-1576.uni-graz.at) |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DATE`, `LOC`, `PER`, `TIME` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 86.86 |
| `ENTS_P` | 86.30 |
| `ENTS_R` | 87.43 |
| `TOK2VEC_LOSS` | 43588.74 |
| `NER_LOSS` | 95573.96 |
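A minimal usage sketch, assuming the packaged pipeline has been installed from this repository (the wheel filename and the example sentence below are illustrative assumptions):
```python
# Assumed install step, e.g.:
#   pip install https://huggingface.co/csae8092/de_RTA_NER/resolve/main/de_RTA_NER-any-py3-none-any.whl
# (check the repository's files for the actual wheel name).
import spacy

nlp = spacy.load("de_RTA_NER")
doc = nlp("Am 25. Juni 1576 trafen die Gesandten in Regensburg ein.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels from the scheme above: DATE, LOC, PER, TIME
```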
|
tohoku-nlp/bert-large-japanese
|
tohoku-nlp
| 2021-09-23T13:45:41Z | 1,246 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---
# BERT large Japanese (unidic-lite with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The code for pretraining is available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
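A short sketch of the tokenization described above (assumes `transformers` plus the `fugashi` and `unidic-lite` packages are installed):
```python
from transformers import AutoTokenizer

# MeCab (unidic-lite) word segmentation followed by WordPiece subword splitting.
tokenizer = AutoTokenizer.from_pretrained("tohoku-nlp/bert-large-japanese")
print(tokenizer.tokenize("東北大学で自然言語処理の研究をしています。"))
```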
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
byeongal/Ko-DialoGPT
|
byeongal
| 2021-09-23T13:43:34Z | 83 | 8 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ko
tags:
- gpt2
- conversational
license: cc-by-nc-sa-4.0
---
## Ko-DialoGPT
### How to use
```python
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = PreTrainedTokenizerFast.from_pretrained('byeongal/Ko-DialoGPT')
model = GPT2LMHeadModel.from_pretrained('byeongal/Ko-DialoGPT').to(device)

past_user_inputs = []
generated_responses = []

while True:
    user_input = input(">> User:")
    if user_input == 'bye':
        break
    text_idx = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors='pt')
    # Prepend up to the two most recent dialogue turns (bot response, then user input)
    # as long as the total context stays under 1000 tokens.
    for i in range(len(generated_responses) - 1, len(generated_responses) - 3, -1):
        if i < 0:
            break
        encoded_vector = tokenizer.encode(generated_responses[i] + tokenizer.eos_token, return_tensors='pt')
        if text_idx.shape[-1] + encoded_vector.shape[-1] < 1000:
            text_idx = torch.cat([encoded_vector, text_idx], dim=-1)
        else:
            break
        encoded_vector = tokenizer.encode(past_user_inputs[i] + tokenizer.eos_token, return_tensors='pt')
        if text_idx.shape[-1] + encoded_vector.shape[-1] < 1000:
            text_idx = torch.cat([encoded_vector, text_idx], dim=-1)
        else:
            break
    text_idx = text_idx.to(device)
    # Beam-search generation of the bot's reply.
    inference_output = model.generate(
        text_idx,
        max_length=1000,
        num_beams=5,
        top_k=20,
        no_repeat_ngram_size=4,
        length_penalty=0.65,
        repetition_penalty=2.0,
    )
    inference_output = inference_output.tolist()
    # Decode only the newly generated tokens (everything after the prompt).
    bot_response = tokenizer.decode(inference_output[0][text_idx.shape[-1]:], skip_special_tokens=True)
    print(f"Bot: {bot_response}")
    past_user_inputs.append(user_input)
    generated_responses.append(bot_response)
```
### Reference
* [SKT-KoGPT2](https://huggingface.co/skt/kogpt2-base-v2)
* [KETI R&D 데이터](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-008)
* [한국어 대화 요약](https://aihub.or.kr/aidata/30714)
|
andrek/LAT2NOB
|
andrek
| 2021-09-23T13:06:22Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- translation
widget:
- text: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua.
---
|
vishalz/paraphrase_model
|
vishalz
| 2021-09-23T10:00:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
Pegasus paraphrase model based on <a href="https://huggingface.co/tuner007/pegasus_paraphrase" target="_blank">tuner007/pegasus_paraphrase</a>.
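A minimal paraphrasing sketch with Transformers (the input sentence and generation parameters are illustrative assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vishalz/paraphrase_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The ultimate test of your knowledge is your capacity to convey it to another."
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
# Beam search with several returned candidates, as is typical for paraphrasing.
outputs = model.generate(**batch, max_length=60, num_beams=5, num_return_sequences=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```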
|
gchhablani/fnet-large-finetuned-wnli
|
gchhablani
| 2021-09-23T05:39:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.38028169014084506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-wnli
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6953
- Accuracy: 0.3803
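A minimal inference sketch for this sentence-pair task (assumes a Transformers version with FNet support; the premise/hypothesis pair is an illustrative example):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gchhablani/fnet-large-finetuned-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# WNLI is a sentence-pair (premise / hypothesis) entailment task.
inputs = tokenizer(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "The trophy is too big.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```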
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7217 | 1.0 | 159 | 0.6864 | 0.5634 |
| 0.7056 | 2.0 | 318 | 0.6869 | 0.5634 |
| 0.706 | 3.0 | 477 | 0.6875 | 0.5634 |
| 0.7032 | 4.0 | 636 | 0.6931 | 0.5634 |
| 0.7025 | 5.0 | 795 | 0.6953 | 0.3803 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/bert-large-cased-finetuned-wnli
|
gchhablani
| 2021-09-23T05:10:44Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-large-cased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.352112676056338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-wnli
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7087
- Accuracy: 0.3521
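A minimal pipeline sketch for the same sentence-pair setup (the example pair is illustrative; the dict input form for text pairs assumes a reasonably recent Transformers release):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gchhablani/bert-large-cased-finetuned-wnli")
# The pipeline accepts a premise/hypothesis pair as text / text_pair.
print(classifier({"text": "I put the cake away in the refrigerator. It has a lot of butter in it.",
                  "text_pair": "The refrigerator has a lot of butter in it."}))
```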
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.7114 | 1.0 | 159 | 0.5634 | 0.6923 |
| 0.7141 | 2.0 | 318 | 0.5634 | 0.6895 |
| 0.7063 | 3.0 | 477 | 0.5634 | 0.6930 |
| 0.712 | 4.0 | 636 | 0.4507 | 0.7077 |
| 0.7037 | 5.0 | 795 | 0.3521 | 0.7087 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
espnet/Dan_Berrebbi_aishell4_asr
|
espnet
| 2021-09-22T23:16:35Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aishell4",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- aishell4
license: cc-by-4.0
---
## ESPnet2 ASR model
### `Dan_Berrebbi_aishell4_asr`
This model was trained by dan_berrebbi using the aishell4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout da1a26652f7d5a019cc24ad1e0e6e844f2b57e1b
pip install -e .
cd egs2/aishell4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model Dan_Berrebbi_aishell4_asr
```
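A Python inference sketch using the ESPnet2 API (assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed; the audio path is a placeholder):
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Downloads the checkpoint via espnet_model_zoo and builds the inference wrapper.
speech2text = Speech2Text.from_pretrained("espnet/Dan_Berrebbi_aishell4_asr")

speech, rate = sf.read("utterance.wav")  # 16 kHz mono audio; placeholder path
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```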
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Sep 21 09:36:01 EDT 2021`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `7887faeabbc2299922267928e190ed89cb032a36`
- Commit date: `Mon Sep 20 16:25:02 2021 -0400`
## asr_fine_tune5_100ep
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|601|6.8|92.7|0.5|0.0|93.2|93.2|
|decode_transformer_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|601|6.8|92.8|0.3|0.0|93.2|93.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|15936|66.9|25.6|7.5|9.8|42.9|93.2|
|decode_transformer_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|15936|64.7|27.6|7.7|11.0|46.3|93.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer5.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_fine_tune5_100ep
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char/train/speech_shape
- exp/asr_stats_raw_zh_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char/valid/speech_shape
- exp/asr_stats_raw_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev/wav.scp
- speech
- sound
- - dump/raw/train_nodev/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 4.0
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ,
- 的
- 是
- 个
- 这
- 一
- 。
- 就
- 儿
- 嗯
- 们
- 呃
- 我
- 有
- <sil>
- 那
- 说
- 不
- 些
- 也
- 他
- 你
- 要
- 后
- 以
- 咱
- 在
- 啊
- 了
- 然
- 家
- 都
- 来
- 还
- 可
- 子
- 下
- 上
- 时
- 比
- 话
- 孩
- 呢
- 去
- 人
- 好
- 对
- 能
- 么
- 吧
- 学
- 多
- 到
- 看
- 为
- 进
- 把
- 大
- 做
- 生
- 种
- 品
- 给
- 没
- 行
- 现
- 小
- 会
- 作
- 较
- 方
- 块
- 业
- 让
- 点
- 定
- 因
- 什
- 长
- 面
- 如
- 安
- 客
- 问
- 过
- 车
- 出
- 啦
- 边
- 候
- 主
- 所
- 题
- 买
- 销
- 天
- 意
- 自
- 全
- 动
- 工
- '&'
- 老
- 或
- 者
- 年
- 着
- 实
- 活
- 理
- 包
- 样
- 再
- 区
- 用
- 呀
- 零
- 员
- 发
- 先
- 部
- 放
- 门
- 情
- 像
- 分
- 售
- 很
- 开
- 己
- 十
- 括
- 跟
- 事
- 需
- 更
- 其
- 装
- 市
- 成
- 里
- 物
- 别
- 间
- 第
- 次
- 中
- 提
- 超
- 顾
- 保
- 感
- 加
- 量
- 二
- 和
- 各
- 嘛
- 新
- 每
- 完
- 力
- 消
- 得
- 店
- 本
- 通
- 习
- 觉
- 道
- 心
- 校
- 菜
- 交
- 哪
- 产
- 于
- 位
- 电
- 想
- 三
- 况
- 度
- 期
- 应
- 但
- 教
- 体
- 常
- 师
- 它
- 高
- 前
- 之
- 西
- 特
- 商
- 果
- 场
- 重
- 防
- 管
- 起
- 地
- 该
- 东
- 少
- 打
- 费
- 当
- 带
- 服
- 口
- 购
- 知
- 回
- 同
- 钱
- 外
- 户
- 注
- 促
- 价
- 解
- <#>
- 水
- 百
- 今
- 太
- 最
- 报
- 怎
- 才
- 等
- 及
- 关
- <->
- 肯
- 火
- 机
- 流
- 制
- 送
- 手
- 确
- 法
- 写
- 玩
- 传
- 路
- 班
- 查
- 招
- 卖
- 几
- 正
- 合
- 够
- 五
- 引
- 容
- 只
- 男
- 日
- 四
- 宣
- 反
- 两
- 清
- 处
- 周
- 单
- 首
- 课
- 衣
- 便
- 身
- 气
- 针
- 奶
- 六
- 经
- 接
- 女
- 育
- 鲜
- 赠
- 试
- 停
- 晚
- 类
- 故
- 入
- 性
- 增
- 食
- 满
- 格
- 基
- 备
- 洗
- 培
- 质
- 美
- 明
- 整
- 化
- 公
- 案
- 哎
- 吸
- 原
- 易
- 幺
- 总
- 尽
- 优
- 而
- 建
- 责
- 啥
- 干
- 月
- 使
- 找
- 季
- 望
- 器
- 目
- 识
- 低
- 听
- 烟
- 相
- 早
- 检
- 护
- 摆
- 住
- 直
- 从
- 务
- 希
- 导
- 内
- 八
- 持
- 近
- 配
- 叫
- 见
- 设
- 吗
- 非
- 调
- 程
- 拿
- 训
- <%>
- 结
- 标
- 挺
- 花
- <$>
- 受
- 式
- 求
- 平
- 换
- 具
- 愿
- 货
- 牌
- 专
- 轻
- 推
- 妈
- 司
- 辆
- 存
- 名
- 且
- 欢
- 喜
- 吃
- 数
- 段
- 议
- 控
- 往
- 礼
- 决
- 走
- 养
- 免
- 惠
- 园
- 档
- 谁
- 真
- 快
- 置
- 幼
- 乐
- 证
- 向
- 厂
- 简
- 声
- 视
- 划
- 绩
- 适
- 集
- 搞
- 办
- 规
- 灾
- 造
- 准
- 必
- 任
- 险
- 响
- 毕
- 群
- 鞋
- 九
- 嘞
- 信
- 库
- 计
- 认
- 奖
- 表
- 无
- 影
- 头
- 卡
- 告
- 考
- 抽
- 竟
- 选
- 帮
- 何
- 修
- 酒
- 尤
- 线
- 穿
- 讲
- 光
- 留
- 讨
- 随
- 请
- 卫
- 系
- 队
- 失
- 双
- 庭
- 强
- 微
- 折
- 色
- 半
- 否
- 立
- 差
- 沟
- 冬
- 批
- 害
- 已
- 危
- 白
- 爆
- 节
- 参
- 逛
- 搭
- 风
- 朋
- 友
- 环
- 验
- 评
- 严
- 般
- 效
- 舞
- 饭
- 境
- 负
- 又
- 底
- 术
- 刚
- 件
- 罚
- 助
- 态
- 状
- 室
- 房
- 游
- 息
- 领
- 难
- 警
- 按
- 级
- 错
- 利
- 与
- 餐
- 陪
- 蹈
- 论
- 记
- 许
- 马
- 算
- 楼
- 型
- 排
- 广
- 值
- 油
- 糕
- 楚
- 步
- 至
- 拉
- 紧
- 灯
- 升
- 七
- 共
- 努
- 除
- 展
- 形
- 元
- 网
- 宜
- 营
- 兴
- 互
- 蛋
- 燃
- 冷
- 条
- 思
- 巡
- 净
- 须
- 遇
- 落
- 禁
- 科
- 款
- 哦
- 止
- 采
- 材
- 介
- 套
- 围
- 维
- 旦
- 切
- 显
- 汇
- 损
- 速
- 越
- 模
- 假
- 精
- 稍
- 书
- 绍
- 父
- 积
- 策
- 示
- 骑
- 改
- 跑
- 运
- 变
- 洁
- 仓
- 鱼
- <space>
- 绝
- 诶
- 伤
- 细
- 职
- 离
- 慢
- 素
- 料
- 睡
- 趣
- 爱
- 母
- 眼
- 味
- 列
- 督
- 张
- 率
- 被
- 域
- 语
- 坏
- 资
- 红
- 减
- 励
- 择
- 预
- 层
- 陈
- 根
- 休
- 毒
- 球
- 爸
- 登
- 足
- 取
- 指
- 柜
- 限
- 降
- 概
- 院
- 供
- 支
- 额
- 源
- 始
- 盘
- 饮
- 项
- 液
- 童
- 爷
- 号
- 抓
- 台
- 转
- 观
- 金
- 照
- 滑
- 岁
- 致
- 文
- 她
- 弄
- 站
- 酸
- 音
- 胎
- 投
- 疏
- 乱
- 临
- 允
- 狗
- 疫
- 询
- 、
- 象
- 占
- 坐
- 倒
- 争
- 午
- 亲
- 读
- 演
- 退
- 惯
- 贵
- 达
- 监
- 志
- 绿
- 醒
- 急
- 驾
- 违
- 诉
- 片
- 空
- 势
- 极
- 豆
- 独
- 钟
- 代
- 瓶
- 纸
- 并
- 企
- 映
- 统
- 属
- 省
- 夜
- 障
- 谈
- 避
- 由
- 终
- 频
- 掉
- 估
- 激
- 仅
- 布
- 谢
- 灭
- 忙
- 码
- 伙
- 缺
- 叶
- 功
- 析
- 赖
- 架
- 范
- 签
- D
- 待
- 神
- 龄
- 画
- 券
- 居
- 杜
- 堵
- 您
- 勤
- 扫
- 技
- 财
- 隐
- 患
- 例
- 乘
- 摩
- 戏
- 鼓
- 份
- 杂
- 散
- 热
- 铺
- 据
- 肤
- 怕
- 依
- 拖
- 充
- 智
- 偷
- 远
- 挂
- 盗
- 附
- 梯
- 冰
- 联
- 借
- 蹭
- 异
- 蔬
- 绑
- 堂
- 将
- 厨
- 帽
- 破
- 戴
- 皮
- 粉
- 氛
- 仪
- 国
- 益
- 闯
- 惩
- 逃
- 刻
- 突
- 申
- 略
- 顿
- 毛
- 召
- 海
- 黄
- 青
- 士
- 移
- 喝
- 板
- 练
- 歌
- 千
- 床
- 享
- 磨
- 构
- 收
- 万
- 摸
- 圈
- 亮
- 刹
- 逆
- 驶
- 赶
- 松
- 呐
- 压
- 拥
- 辅
- 协
- 托
- 断
- 轮
- 善
- 哈
- 捆
- 座
- 病
- 健
- 牛
- 草
- 释
- 似
- 土
- 补
- 俩
- 堆
- 即
- 密
- 背
- 言
- 街
- 尚
- 窗
- C
- 艺
- 纠
- 纷
- 忽
- 句
- 另
- 施
- 政
- 温
- 某
- 翻
- 章
- 守
- 熟
- 民
- 续
- 良
- 挤
- 础
- 字
- 瓜
- 乎
- 竞
- 距
- 际
- 暖
- 凭
- 董
- 碗
- 短
- 渠
- 康
- 藏
- 香
- 虽
- 露
- 厉
- 忘
- 误
- 冒
- 窃
- 络
- 淡
- 腐
- 颜
- 播
- 默
- 锻
- 炼
- 宝
- 组
- 淘
- 则
- 逻
- 垃
- 圾
- 复
- 贴
- 靠
- 潜
- 察
- 晨
- 碰
- 剩
- 峰
- 深
- 偏
- 虑
- 念
- 初
- 闹
- 幸
- 跳
- 米
- 旧
- 蛤
- 虾
- 汽
- 苦
- 螃
- 蟹
- 冲
- 固
- 隔
- 懂
- 卷
- 镜
- 罩
- 暴
- 闭
- 野
- 玻
- 璃
- 义
- B
- 煤
- 富
- 踩
- 途
- 闲
- 紫
- 北
- 欲
- 曲
- 榜
- 垒
- 伴
- 累
- 判
- 搜
- 困
- 租
- 键
- 肥
- 社
- 弯
- 角
- 纪
- 律
- 详
- 右
- 刮
- 继
- 撤
- 输
- 普
- 未
- 稳
- 摔
- 访
- 扩
- 扣
- 末
- 票
- 承
- 担
- 丢
- 涉
- 欠
- 创
- 获
- 摊
- 疑
- 蓝
- 答
- 霜
- 录
- 齐
- 烦
- 治
- 粗
- 叛
- 污
- 址
- 若
- 染
- 含
- 药
- 雨
- 此
- 陌
- 研
- 催
- 拨
- 页
- 磕
- 呆
- 脸
- 墙
- 夫
- A
- 棉
- 袜
- 填
- 死
- 懒
- 植
- 扇
- 捡
- 遍
- 操
- 摄
- 箱
- ?
- 繁
- 城
- 咯
- 左
- 拐
- 悉
- 犯
- 宽
- 伞
- 余
- 糊
- 巧
- 透
- 贪
- 顺
- 局
- 妇
- 私
- 浪
- 岗
- 棋
- 序
- 辛
- V
- 握
- 擦
- 扔
- 斤
- 付
- 剐
- 锁
- 麻
- 敢
- 桶
- 佩
- 坠
- 封
- 替
- 塞
- 斗
- 攀
- 爽
- 沉
- 混
- 滋
- 刺
- 潮
- 皿
- 端
- 刷
- 刀
- 巾
- 烫
- 木
- 漏
- 迅
- 织
- 救
- 吹
- 仔
- 称
- 返
- 景
- 聚
- 阶
- 秀
- 涨
- P
- 颈
- 肩
- 泥
- I
- 侣
- 尔
- 伍
- 甚
- 皂
- 蒙
- 世
- 界
- 嘻
- 辈
- Q
- 审
- 尾
- 浇
- 遛
- 馨
- 措
- 邻
- 撒
- 挥
- 遵
- 予
- 击
- 鉴
- 殊
- 哇
- 载
- 添
- 盈
- 盯
- 惊
- 喷
- 荷
- 怠
- 抢
- 喂
- 饱
- 谅
- 团
- 龙
- 冻
- 图
- 掺
- 扑
- 刊
- 葱
- 薄
- 萝
- 卜
- 麦
- 苹
- 触
- 飞
- 艳
- 畅
- 鸡
- 权
- 趟
- 连
- 哭
- 旁
- 漂
- 焊
- 敞
- 叉
- 钢
- 氧
- 溺
- 聊
- 巢
- 衡
- 淀
- 劣
- 虫
- 符
- 均
- 辨
- 菌
- 彻
- 烂
- 厅
- 皱
- 妥
- 拾
- 插
- 携
- 竹
- 碍
- 湿
- 灵
- 忌
- 旅
- 勿
- 宿
- 迷
- 探
- 春
- 劵
- 星
- 耐
- 裤
- 颖
- 韩
- 艾
- 灸
- 邀
- 婚
- 乳
- 芽
- 挑
- 摘
- 阿
- 姨
- 伊
- 慕
- 纯
- 貌
- 嘴
- 偶
- 睛
- 献
- 坚
- 账
- 典
- 唱
- L
- E
- 贡
- 寒
- 唧
- Y
- 尝
- 抹
- 汰
- 腾
- 哼
- 仿
- 英
- 舒
- 扰
- 拒
- 剪
- 夏
- 宠
- 咬
- 派
- 委
- 婉
- 执
- 呗
- 悄
- 搬
- 雪
- 盐
- 暂
- 奸
- 耍
- 僻
- 却
- 署
- 寻
- 串
- 援
- 亏
- 烈
- 印
- 捎
- 幅
- 绘
- 锈
- 闸
- 罪
- 嫌
- 俗
- 歹
- 劳
- 兜
- 喽
- 谓
- 鹤
- 舍
- 克
- 徇
- 倍
- 敏
- 丝
- 纺
- 拭
- 融
- 蔫
- 掂
- 测
- T
- 众
- 卸
- 暗
- 赔
- 偿
- 举
- 劲
- 篮
- 储
- 乙
- 炔
- 软
- 侵
- 诱
- 浊
- 蚀
- 秽
- 炸
- 泽
- 闻
- 鼻
- 甜
- 澈
- 脏
- 官
- 凝
- 芳
- 灰
- 卵
- 农
- 烧
- 肉
- 桌
- 椅
- 垫
- 硬
- 叠
- 瓷
- 碎
- 柄
- 屉
- 拳
- 撞
- 铝
- 歇
- 遗
- 炮
- 掌
- 妨
- 静
- 浸
- 涂
- 凉
- 炫
- 耀
- 姓
- 究
- 奏
- 缆
- 脚
- 酿
- 抄
- 慌
- 戚
- 燥
- 毯
- 挽
- 诺
- 济
- 旺
- 抖
- 郊
- 疗
- 巴
- 痧
- 脊
- 膜
- 晒
- 润
- 掏
- 笔
- 鞭
- 博
- 捧
- 函
- 胡
- 锅
- 雾
- 疯
- 狂
- 趋
- 膏
- 妆
- 尘
- 袋
- 贝
- 俺
- 耽
- 怀
- 恐
- 赋
- 脑
- 焉
- 愣
- 呵
- 噼
- 啪
- 虚
- 河
- 归
- 绊
- 械
- 扬
- 筒
- 靴
- 束
- 彩
- 荐
- 沙
- 迎
- 荡
- 凌
- 昂
- 碑
- 蹦
- 扉
- 泼
- 丰
- 滴
- 沾
- 亭
- 粘
- 奇
- 饼
- 牙
- 娃
- 杯
- 踢
- 嘿
- 抛
- 枯
- 剔
- 苗
- 纹
- 永
- 津
- 唉
- 趁
- 屡
- 逮
- 戒
- 肃
- 仁
- 肇
- 醉
- 糟
- 馈
- 横
- 扭
- 盔
- 侧
- 鲁
- 莽
- 飙
- 稿
- 逐
- 谋
- 京
- 苏
- 宁
- 驻
- 咨
- 旷
- 拓
- 杆
- 秤
- 叮
- 嘱
- 咋
- 炊
- 怪
- 婆
- 阎
- 王
- 饿
- 鬼
- 惨
- 渡
- 坎
- 囤
- 甲
- 蛙
- 鲤
- 桂
- 石
- 玉
- 溪
- 华
- 窝
- 截
- 秩
- 嗨
- 芹
- 梨
- 蕉
- S
- 煲
- 汤
- 鲫
- 揽
- 挡
- 柚
- 瑞
- 匹
- '2'
- 踹
- 吵
- 凶
- 矩
- 迟
- 脾
- 纳
- 朵
- 墨
- 袖
- 链
- 钩
- 笼
- 熄
- 盆
- 殴
- 欺
- 诈
- 厕
- 娱
- 爬
- 威
- 胁
- 阅
- 赌
- 拢
- 症
- 伪
- 脂
- 堪
- 盛
- 蚊
- 蝇
- 煎
- 晰
- 柔
- 涩
- 汁
- 腹
- 胃
- 痉
- 挛
- 颗
- 粒
- 匀
- 败
- 历
- 佳
- 乏
- 寄
- 残
- 杀
- 剂
- 疾
- 衍
- 溅
- 倘
- 褶
- 席
- 启
- 遮
- 槽
- 递
- 橱
- 迹
- 镁
- 泄
- 阀
- 柴
- 阻
- 恋
- 盲
- 浓
- 捂
- 腰
- 姿
- 缝
- 肿
- 焦
- 骗
- 伺
- 嘘
- 掩
- 褥
- 帘
- 籍
- 锥
- 锋
- 尖
- 锐
- 祸
- 秒
- 李
- 伸
- 浏
- 览
- 航
- 讯
- 谨
- 慎
- 匪
- 劫
- 医
- 族
- 忧
- 孤
- 拜
- 窄
- 唯
- 搁
- 朝
- 尺
- 盟
- 波
- 隆
- 词
- 村
- 娶
- 媳
- 县
- 聘
- 醇
- 泡
- 坨
- 淋
- 延
- 柱
- 肾
- 蒸
- 槛
- 赚
- 凡
- 恩
- 厚
- 赞
- 茎
- 蒜
- 苔
- 甘
- 菠
- 涮
- 霾
- 仍
- 云
- 追
- 丽
- 盖
- 欧
- 莱
- 雅
- 婴
- 孕
- 敲
- 约
- 惰
- 谱
- 射
- 惑
- 睹
- 奉
- 诚
- 惶
- 卓
- 勉
- 聪
- 疼
- 弃
- 奴
- 隶
- 嚷
- 眠
- 躺
- 乒
- 乓
- 琴
- 挖
- 掘
- 阵
- 浆
- 索
- 呼
- 古
- 弥
- 熔
- 抱
- 怨
- 猫
- 笑
- 挣
- 黑
- 猛
- 令
- 核
- 磊
- 橙
- 吨
- 吊
- 蘸
- 氮
- 罐
- 战
- 懈
- 渐
- 胜
- 命
- 抬
- 缘
- 睦
- 扮
- 珠
- 颁
- 蔼
- 凳
- 饰
- 缤
- 晶
- 抵
- 遥
- 腿
- 拍
- 妻
- 羽
- 绒
- 梳
- 袄
- 述
- 跆
- 屈
- 脱
- 朗
- 劝
- 胆
- 腔
- 圆
- 亚
- 宴
- 编
- 肢
- 壶
- 暑
- 怒
- 描
- 绕
- 悦
- 忆
- 嗓
- 胖
- 疙
- 瘩
- 哒
- 碴
- 棱
- 炒
- 井
- 漫
- 烘
- 焙
- 涤
- 船
- 纱
- 君
- 茉
- 莉
- 钙
- 瞩
- <_>
- 塌
- 嗷
- 屁
- 股
- 绪
- 勇
- 奋
- 荣
- 诲
- 卑
- 挫
- 昧
- 疲
- 惫
- 册
- 呈
- 僵
- 熬
- 敬
- 呦
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: /ocean/projects/cis210027p/berrebbi/espnet/egs2/aishell4/asr1/data/nlsyms.txt
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.3a1
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
config: conf/train_lm_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/lm_nuit
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 2000000
valid_batch_bins: null
train_shape_file:
- exp/lm_stats_zh_char/train/text_shape.char
valid_shape_file:
- exp/lm_stats_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/lm_train.txt
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ,
- 的
- 是
- 个
- 这
- 一
- 。
- 就
- 儿
- 嗯
- 们
- 呃
- 我
- 有
- <sil>
- 那
- 说
- 不
- 些
- 也
- 他
- 你
- 要
- 后
- 以
- 咱
- 在
- 啊
- 了
- 然
- 家
- 都
- 来
- 还
- 可
- 子
- 下
- 上
- 时
- 比
- 话
- 孩
- 呢
- 去
- 人
- 好
- 对
- 能
- 么
- 吧
- 学
- 多
- 到
- 看
- 为
- 进
- 把
- 大
- 做
- 生
- 种
- 品
- 给
- 没
- 行
- 现
- 小
- 会
- 作
- 较
- 方
- 块
- 业
- 让
- 点
- 定
- 因
- 什
- 长
- 面
- 如
- 安
- 客
- 问
- 过
- 车
- 出
- 啦
- 边
- 候
- 主
- 所
- 题
- 买
- 销
- 天
- 意
- 自
- 全
- 动
- 工
- '&'
- 老
- 或
- 者
- 年
- 着
- 实
- 活
- 理
- 包
- 样
- 再
- 区
- 用
- 呀
- 零
- 员
- 发
- 先
- 部
- 放
- 门
- 情
- 像
- 分
- 售
- 很
- 开
- 己
- 十
- 括
- 跟
- 事
- 需
- 更
- 其
- 装
- 市
- 成
- 里
- 物
- 别
- 间
- 第
- 次
- 中
- 提
- 超
- 顾
- 保
- 感
- 加
- 量
- 二
- 和
- 各
- 嘛
- 新
- 每
- 完
- 力
- 消
- 得
- 店
- 本
- 通
- 习
- 觉
- 道
- 心
- 校
- 菜
- 交
- 哪
- 产
- 于
- 位
- 电
- 想
- 三
- 况
- 度
- 期
- 应
- 但
- 教
- 体
- 常
- 师
- 它
- 高
- 前
- 之
- 西
- 特
- 商
- 果
- 场
- 重
- 防
- 管
- 起
- 地
- 该
- 东
- 少
- 打
- 费
- 当
- 带
- 服
- 口
- 购
- 知
- 回
- 同
- 钱
- 外
- 户
- 注
- 促
- 价
- 解
- <#>
- 水
- 百
- 今
- 太
- 最
- 报
- 怎
- 才
- 等
- 及
- 关
- <->
- 肯
- 火
- 机
- 流
- 制
- 送
- 手
- 确
- 法
- 写
- 玩
- 传
- 路
- 班
- 查
- 招
- 卖
- 几
- 正
- 合
- 够
- 五
- 引
- 容
- 只
- 男
- 日
- 四
- 宣
- 反
- 两
- 清
- 处
- 周
- 单
- 首
- 课
- 衣
- 便
- 身
- 气
- 针
- 奶
- 六
- 经
- 接
- 女
- 育
- 鲜
- 赠
- 试
- 停
- 晚
- 类
- 故
- 入
- 性
- 增
- 食
- 满
- 格
- 基
- 备
- 洗
- 培
- 质
- 美
- 明
- 整
- 化
- 公
- 案
- 哎
- 吸
- 原
- 易
- 幺
- 总
- 尽
- 优
- 而
- 建
- 责
- 啥
- 干
- 月
- 使
- 找
- 季
- 望
- 器
- 目
- 识
- 低
- 听
- 烟
- 相
- 早
- 检
- 护
- 摆
- 住
- 直
- 从
- 务
- 希
- 导
- 内
- 八
- 持
- 近
- 配
- 叫
- 见
- 设
- 吗
- 非
- 调
- 程
- 拿
- 训
- <%>
- 结
- 标
- 挺
- 花
- <$>
- 受
- 式
- 求
- 平
- 换
- 具
- 愿
- 货
- 牌
- 专
- 轻
- 推
- 妈
- 司
- 辆
- 存
- 名
- 且
- 欢
- 喜
- 吃
- 数
- 段
- 议
- 控
- 往
- 礼
- 决
- 走
- 养
- 免
- 惠
- 园
- 档
- 谁
- 真
- 快
- 置
- 幼
- 乐
- 证
- 向
- 厂
- 简
- 声
- 视
- 划
- 绩
- 适
- 集
- 搞
- 办
- 规
- 灾
- 造
- 准
- 必
- 任
- 险
- 响
- 毕
- 群
- 鞋
- 九
- 嘞
- 信
- 库
- 计
- 认
- 奖
- 表
- 无
- 影
- 头
- 卡
- 告
- 考
- 抽
- 竟
- 选
- 帮
- 何
- 修
- 酒
- 尤
- 线
- 穿
- 讲
- 光
- 留
- 讨
- 随
- 请
- 卫
- 系
- 队
- 失
- 双
- 庭
- 强
- 微
- 折
- 色
- 半
- 否
- 立
- 差
- 沟
- 冬
- 批
- 害
- 已
- 危
- 白
- 爆
- 节
- 参
- 逛
- 搭
- 风
- 朋
- 友
- 环
- 验
- 评
- 严
- 般
- 效
- 舞
- 饭
- 境
- 负
- 又
- 底
- 术
- 刚
- 件
- 罚
- 助
- 态
- 状
- 室
- 房
- 游
- 息
- 领
- 难
- 警
- 按
- 级
- 错
- 利
- 与
- 餐
- 陪
- 蹈
- 论
- 记
- 许
- 马
- 算
- 楼
- 型
- 排
- 广
- 值
- 油
- 糕
- 楚
- 步
- 至
- 拉
- 紧
- 灯
- 升
- 七
- 共
- 努
- 除
- 展
- 形
- 元
- 网
- 宜
- 营
- 兴
- 互
- 蛋
- 燃
- 冷
- 条
- 思
- 巡
- 净
- 须
- 遇
- 落
- 禁
- 科
- 款
- 哦
- 止
- 采
- 材
- 介
- 套
- 围
- 维
- 旦
- 切
- 显
- 汇
- 损
- 速
- 越
- 模
- 假
- 精
- 稍
- 书
- 绍
- 父
- 积
- 策
- 示
- 骑
- 改
- 跑
- 运
- 变
- 洁
- 仓
- 鱼
- <space>
- 绝
- 诶
- 伤
- 细
- 职
- 离
- 慢
- 素
- 料
- 睡
- 趣
- 爱
- 母
- 眼
- 味
- 列
- 督
- 张
- 率
- 被
- 域
- 语
- 坏
- 资
- 红
- 减
- 励
- 择
- 预
- 层
- 陈
- 根
- 休
- 毒
- 球
- 爸
- 登
- 足
- 取
- 指
- 柜
- 限
- 降
- 概
- 院
- 供
- 支
- 额
- 源
- 始
- 盘
- 饮
- 项
- 液
- 童
- 爷
- 号
- 抓
- 台
- 转
- 观
- 金
- 照
- 滑
- 岁
- 致
- 文
- 她
- 弄
- 站
- 酸
- 音
- 胎
- 投
- 疏
- 乱
- 临
- 允
- 狗
- 疫
- 询
- 、
- 象
- 占
- 坐
- 倒
- 争
- 午
- 亲
- 读
- 演
- 退
- 惯
- 贵
- 达
- 监
- 志
- 绿
- 醒
- 急
- 驾
- 违
- 诉
- 片
- 空
- 势
- 极
- 豆
- 独
- 钟
- 代
- 瓶
- 纸
- 并
- 企
- 映
- 统
- 属
- 省
- 夜
- 障
- 谈
- 避
- 由
- 终
- 频
- 掉
- 估
- 激
- 仅
- 布
- 谢
- 灭
- 忙
- 码
- 伙
- 缺
- 叶
- 功
- 析
- 赖
- 架
- 范
- 签
- D
- 待
- 神
- 龄
- 画
- 券
- 居
- 杜
- 堵
- 您
- 勤
- 扫
- 技
- 财
- 隐
- 患
- 例
- 乘
- 摩
- 戏
- 鼓
- 份
- 杂
- 散
- 热
- 铺
- 据
- 肤
- 怕
- 依
- 拖
- 充
- 智
- 偷
- 远
- 挂
- 盗
- 附
- 梯
- 冰
- 联
- 借
- 蹭
- 异
- 蔬
- 绑
- 堂
- 将
- 厨
- 帽
- 破
- 戴
- 皮
- 粉
- 氛
- 仪
- 国
- 益
- 闯
- 惩
- 逃
- 刻
- 突
- 申
- 略
- 顿
- 毛
- 召
- 海
- 黄
- 青
- 士
- 移
- 喝
- 板
- 练
- 歌
- 千
- 床
- 享
- 磨
- 构
- 收
- 万
- 摸
- 圈
- 亮
- 刹
- 逆
- 驶
- 赶
- 松
- 呐
- 压
- 拥
- 辅
- 协
- 托
- 断
- 轮
- 善
- 哈
- 捆
- 座
- 病
- 健
- 牛
- 草
- 释
- 似
- 土
- 补
- 俩
- 堆
- 即
- 密
- 背
- 言
- 街
- 尚
- 窗
- C
- 艺
- 纠
- 纷
- 忽
- 句
- 另
- 施
- 政
- 温
- 某
- 翻
- 章
- 守
- 熟
- 民
- 续
- 良
- 挤
- 础
- 字
- 瓜
- 乎
- 竞
- 距
- 际
- 暖
- 凭
- 董
- 碗
- 短
- 渠
- 康
- 藏
- 香
- 虽
- 露
- 厉
- 忘
- 误
- 冒
- 窃
- 络
- 淡
- 腐
- 颜
- 播
- 默
- 锻
- 炼
- 宝
- 组
- 淘
- 则
- 逻
- 垃
- 圾
- 复
- 贴
- 靠
- 潜
- 察
- 晨
- 碰
- 剩
- 峰
- 深
- 偏
- 虑
- 念
- 初
- 闹
- 幸
- 跳
- 米
- 旧
- 蛤
- 虾
- 汽
- 苦
- 螃
- 蟹
- 冲
- 固
- 隔
- 懂
- 卷
- 镜
- 罩
- 暴
- 闭
- 野
- 玻
- 璃
- 义
- B
- 煤
- 富
- 踩
- 途
- 闲
- 紫
- 北
- 欲
- 曲
- 榜
- 垒
- 伴
- 累
- 判
- 搜
- 困
- 租
- 键
- 肥
- 社
- 弯
- 角
- 纪
- 律
- 详
- 右
- 刮
- 继
- 撤
- 输
- 普
- 未
- 稳
- 摔
- 访
- 扩
- 扣
- 末
- 票
- 承
- 担
- 丢
- 涉
- 欠
- 创
- 获
- 摊
- 疑
- 蓝
- 答
- 霜
- 录
- 齐
- 烦
- 治
- 粗
- 叛
- 污
- 址
- 若
- 染
- 含
- 药
- 雨
- 此
- 陌
- 研
- 催
- 拨
- 页
- 磕
- 呆
- 脸
- 墙
- 夫
- A
- 棉
- 袜
- 填
- 死
- 懒
- 植
- 扇
- 捡
- 遍
- 操
- 摄
- 箱
- ?
- 繁
- 城
- 咯
- 左
- 拐
- 悉
- 犯
- 宽
- 伞
- 余
- 糊
- 巧
- 透
- 贪
- 顺
- 局
- 妇
- 私
- 浪
- 岗
- 棋
- 序
- 辛
- V
- 握
- 擦
- 扔
- 斤
- 付
- 剐
- 锁
- 麻
- 敢
- 桶
- 佩
- 坠
- 封
- 替
- 塞
- 斗
- 攀
- 爽
- 沉
- 混
- 滋
- 刺
- 潮
- 皿
- 端
- 刷
- 刀
- 巾
- 烫
- 木
- 漏
- 迅
- 织
- 救
- 吹
- 仔
- 称
- 返
- 景
- 聚
- 阶
- 秀
- 涨
- P
- 颈
- 肩
- 泥
- I
- 侣
- 尔
- 伍
- 甚
- 皂
- 蒙
- 世
- 界
- 嘻
- 辈
- Q
- 审
- 尾
- 浇
- 遛
- 馨
- 措
- 邻
- 撒
- 挥
- 遵
- 予
- 击
- 鉴
- 殊
- 哇
- 载
- 添
- 盈
- 盯
- 惊
- 喷
- 荷
- 怠
- 抢
- 喂
- 饱
- 谅
- 团
- 龙
- 冻
- 图
- 掺
- 扑
- 刊
- 葱
- 薄
- 萝
- 卜
- 麦
- 苹
- 触
- 飞
- 艳
- 畅
- 鸡
- 权
- 趟
- 连
- 哭
- 旁
- 漂
- 焊
- 敞
- 叉
- 钢
- 氧
- 溺
- 聊
- 巢
- 衡
- 淀
- 劣
- 虫
- 符
- 均
- 辨
- 菌
- 彻
- 烂
- 厅
- 皱
- 妥
- 拾
- 插
- 携
- 竹
- 碍
- 湿
- 灵
- 忌
- 旅
- 勿
- 宿
- 迷
- 探
- 春
- 劵
- 星
- 耐
- 裤
- 颖
- 韩
- 艾
- 灸
- 邀
- 婚
- 乳
- 芽
- 挑
- 摘
- 阿
- 姨
- 伊
- 慕
- 纯
- 貌
- 嘴
- 偶
- 睛
- 献
- 坚
- 账
- 典
- 唱
- L
- E
- 贡
- 寒
- 唧
- Y
- 尝
- 抹
- 汰
- 腾
- 哼
- 仿
- 英
- 舒
- 扰
- 拒
- 剪
- 夏
- 宠
- 咬
- 派
- 委
- 婉
- 执
- 呗
- 悄
- 搬
- 雪
- 盐
- 暂
- 奸
- 耍
- 僻
- 却
- 署
- 寻
- 串
- 援
- 亏
- 烈
- 印
- 捎
- 幅
- 绘
- 锈
- 闸
- 罪
- 嫌
- 俗
- 歹
- 劳
- 兜
- 喽
- 谓
- 鹤
- 舍
- 克
- 徇
- 倍
- 敏
- 丝
- 纺
- 拭
- 融
- 蔫
- 掂
- 测
- T
- 众
- 卸
- 暗
- 赔
- 偿
- 举
- 劲
- 篮
- 储
- 乙
- 炔
- 软
- 侵
- 诱
- 浊
- 蚀
- 秽
- 炸
- 泽
- 闻
- 鼻
- 甜
- 澈
- 脏
- 官
- 凝
- 芳
- 灰
- 卵
- 农
- 烧
- 肉
- 桌
- 椅
- 垫
- 硬
- 叠
- 瓷
- 碎
- 柄
- 屉
- 拳
- 撞
- 铝
- 歇
- 遗
- 炮
- 掌
- 妨
- 静
- 浸
- 涂
- 凉
- 炫
- 耀
- 姓
- 究
- 奏
- 缆
- 脚
- 酿
- 抄
- 慌
- 戚
- 燥
- 毯
- 挽
- 诺
- 济
- 旺
- 抖
- 郊
- 疗
- 巴
- 痧
- 脊
- 膜
- 晒
- 润
- 掏
- 笔
- 鞭
- 博
- 捧
- 函
- 胡
- 锅
- 雾
- 疯
- 狂
- 趋
- 膏
- 妆
- 尘
- 袋
- 贝
- 俺
- 耽
- 怀
- 恐
- 赋
- 脑
- 焉
- 愣
- 呵
- 噼
- 啪
- 虚
- 河
- 归
- 绊
- 械
- 扬
- 筒
- 靴
- 束
- 彩
- 荐
- 沙
- 迎
- 荡
- 凌
- 昂
- 碑
- 蹦
- 扉
- 泼
- 丰
- 滴
- 沾
- 亭
- 粘
- 奇
- 饼
- 牙
- 娃
- 杯
- 踢
- 嘿
- 抛
- 枯
- 剔
- 苗
- 纹
- 永
- 津
- 唉
- 趁
- 屡
- 逮
- 戒
- 肃
- 仁
- 肇
- 醉
- 糟
- 馈
- 横
- 扭
- 盔
- 侧
- 鲁
- 莽
- 飙
- 稿
- 逐
- 谋
- 京
- 苏
- 宁
- 驻
- 咨
- 旷
- 拓
- 杆
- 秤
- 叮
- 嘱
- 咋
- 炊
- 怪
- 婆
- 阎
- 王
- 饿
- 鬼
- 惨
- 渡
- 坎
- 囤
- 甲
- 蛙
- 鲤
- 桂
- 石
- 玉
- 溪
- 华
- 窝
- 截
- 秩
- 嗨
- 芹
- 梨
- 蕉
- S
- 煲
- 汤
- 鲫
- 揽
- 挡
- 柚
- 瑞
- 匹
- '2'
- 踹
- 吵
- 凶
- 矩
- 迟
- 脾
- 纳
- 朵
- 墨
- 袖
- 链
- 钩
- 笼
- 熄
- 盆
- 殴
- 欺
- 诈
- 厕
- 娱
- 爬
- 威
- 胁
- 阅
- 赌
- 拢
- 症
- 伪
- 脂
- 堪
- 盛
- 蚊
- 蝇
- 煎
- 晰
- 柔
- 涩
- 汁
- 腹
- 胃
- 痉
- 挛
- 颗
- 粒
- 匀
- 败
- 历
- 佳
- 乏
- 寄
- 残
- 杀
- 剂
- 疾
- 衍
- 溅
- 倘
- 褶
- 席
- 启
- 遮
- 槽
- 递
- 橱
- 迹
- 镁
- 泄
- 阀
- 柴
- 阻
- 恋
- 盲
- 浓
- 捂
- 腰
- 姿
- 缝
- 肿
- 焦
- 骗
- 伺
- 嘘
- 掩
- 褥
- 帘
- 籍
- 锥
- 锋
- 尖
- 锐
- 祸
- 秒
- 李
- 伸
- 浏
- 览
- 航
- 讯
- 谨
- 慎
- 匪
- 劫
- 医
- 族
- 忧
- 孤
- 拜
- 窄
- 唯
- 搁
- 朝
- 尺
- 盟
- 波
- 隆
- 词
- 村
- 娶
- 媳
- 县
- 聘
- 醇
- 泡
- 坨
- 淋
- 延
- 柱
- 肾
- 蒸
- 槛
- 赚
- 凡
- 恩
- 厚
- 赞
- 茎
- 蒜
- 苔
- 甘
- 菠
- 涮
- 霾
- 仍
- 云
- 追
- 丽
- 盖
- 欧
- 莱
- 雅
- 婴
- 孕
- 敲
- 约
- 惰
- 谱
- 射
- 惑
- 睹
- 奉
- 诚
- 惶
- 卓
- 勉
- 聪
- 疼
- 弃
- 奴
- 隶
- 嚷
- 眠
- 躺
- 乒
- 乓
- 琴
- 挖
- 掘
- 阵
- 浆
- 索
- 呼
- 古
- 弥
- 熔
- 抱
- 怨
- 猫
- 笑
- 挣
- 黑
- 猛
- 令
- 核
- 磊
- 橙
- 吨
- 吊
- 蘸
- 氮
- 罐
- 战
- 懈
- 渐
- 胜
- 命
- 抬
- 缘
- 睦
- 扮
- 珠
- 颁
- 蔼
- 凳
- 饰
- 缤
- 晶
- 抵
- 遥
- 腿
- 拍
- 妻
- 羽
- 绒
- 梳
- 袄
- 述
- 跆
- 屈
- 脱
- 朗
- 劝
- 胆
- 腔
- 圆
- 亚
- 宴
- 编
- 肢
- 壶
- 暑
- 怒
- 描
- 绕
- 悦
- 忆
- 嗓
- 胖
- 疙
- 瘩
- 哒
- 碴
- 棱
- 炒
- 井
- 漫
- 烘
- 焙
- 涤
- 船
- 纱
- 君
- 茉
- 莉
- 钙
- 瞩
- <_>
- 塌
- 嗷
- 屁
- 股
- 绪
- 勇
- 奋
- 荣
- 诲
- 卑
- 挫
- 昧
- 疲
- 惫
- 册
- 呈
- 僵
- 熬
- 敬
- 呦
- <sos/eos>
init: null
model_conf:
ignore_id: 0
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: /ocean/projects/cis210027p/berrebbi/espnet/egs2/aishell4/asr1/data/nlsyms.txt
cleaner: null
g2p: null
lm: transformer
lm_conf:
pos_enc: null
embed_unit: 128
att_unit: 512
head: 8
unit: 2048
layer: 16
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.3a1
distributed: false
```
</details>
|
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat
|
andi611
| 2021-09-22T20:36:06Z | 70 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"dataset:mit_movie",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
language:
- en
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
- mit_movie
model_index:
- name: bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: squad_v2
type: squad_v2
- task:
name: Token Classification
type: token-classification
dataset:
name: mit_movie
type: mit_movie
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the mit_movie datasets.
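A minimal extractive question-answering sketch (the question and context are illustrative examples):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat",
)
print(qa(
    question="Who directed the movie?",
    context="The movie was directed by Christopher Nolan and released in 2010.",
))
```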
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|