modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
persiannlp/mt5-large-parsinlu-sentiment-analysis | persiannlp | 2021-09-23T16:20:21Z | 25 | 2 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "sentiment", "sentiment-analysis", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5-based model for sentiment analysis.
Here is an example of how you can run this model:
```python
import torch
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
import numpy as np
model_name = "persiannlp/mt5-large-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def model_predict(text_a, text_b):
    # Note: `labels` (the list of sentiment label strings) is not defined in this
    # snippet; prefer the generation-based run_model below for this checkpoint.
    features = tokenizer([(text_a, text_b)], padding="max_length", truncation=True, return_tensors="pt")
    output = model(**features)
    logits = output[0]
    probs = torch.nn.functional.softmax(logits, dim=1).tolist()
    idx = np.argmax(np.array(probs))
    print(labels[idx], probs)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-qqp-query-paraphrasing | persiannlp | 2021-09-23T16:20:19Z | 4 | 0 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "query-paraphrasing", "mt5", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "dataset:qqp", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخیص سوالات هم‌معنی)
This is a model for detection of paraphrased queries.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-large-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-opus-translation_fa_en | persiannlp | 2021-09-23T16:20:17Z | 184 | 1 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "machine-translation", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمه‌ی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-multiple-choice | persiannlp | 2021-09-23T16:20:14Z | 63 | 0 | transformers | ["transformers", "pytorch", "jax", "t5", "text2text-generation", "multiple-choice", "mt5", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | multiple-choice | 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-arc-comqa-obqa-multiple-choice | persiannlp | 2021-09-23T16:20:12Z | 1 | 0 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "multiple-choice", "mt5", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "dataset:commonsenseqa", "dataset:arc", "dataset:openbookqa", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | multiple-choice | 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-sentiment-analysis | persiannlp | 2021-09-23T16:20:02Z | 94 | 4 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "sentiment", "sentiment-analysis", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5-based model for sentiment analysis.
Here is an example of how you can run this model:
```python
import torch
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
import numpy as np
model_name = "persiannlp/mt5-base-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def model_predict(text_a, text_b):
    # Note: `labels` (the list of sentiment label strings) is not defined in this
    # snippet; prefer the generation-based run_model below for this checkpoint.
    features = tokenizer([(text_a, text_b)], padding="max_length", truncation=True, return_tensors="pt")
    output = model(**features)
    logits = output[0]
    probs = torch.nn.functional.softmax(logits, dim=1).tolist()
    idx = np.argmax(np.array(probs))
    print(labels[idx], probs)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-qqp-query-paraphrasing | persiannlp | 2021-09-23T16:20:00Z | 55 | 0 | transformers | ["transformers", "pytorch", "jax", "t5", "text2text-generation", "query-paraphrasing", "mt5", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "dataset:qqp", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخیص سوالات هم‌معنی)
This is a model for detection of paraphrased queries.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-base-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-multiple-choice | persiannlp | 2021-09-23T16:19:55Z | 12 | 0 | transformers | ["transformers", "pytorch", "jax", "t5", "text2text-generation", "multiple-choice", "mt5", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | multiple-choice | 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pere/norwegian-t5-base | pere | 2021-09-23T16:19:40Z | 10 | 0 | transformers | ["transformers", "jax", "tensorboard", "t5", "text2text-generation", "seq2seq", "no", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base model 🇳🇴
This T5-base model is trained from scratch on a 19GB Balanced Bokmål-Nynorsk Corpus.
Update: Due to disk space errors, the model had to be restarted on July 20. It is currently still running.
Parameters used in training:
```bash
python3 ./run_t5_mlm_flax_streaming.py \
    --model_name_or_path="./norwegian-t5-base" \
    --output_dir="./norwegian-t5-base" \
    --config_name="./norwegian-t5-base" \
    --tokenizer_name="./norwegian-t5-base" \
    --dataset_name="pere/nb_nn_balanced_shuffled" \
    --max_seq_length="512" \
    --per_device_train_batch_size="32" \
    --per_device_eval_batch_size="32" \
    --learning_rate="0.005" \
    --weight_decay="0.001" \
    --warmup_steps="2000" \
    --overwrite_output_dir \
    --logging_steps="100" \
    --save_steps="500" \
    --eval_steps="500" \
    --push_to_hub \
    --preprocessing_num_workers 96 \
    --adafactor
```
|
pere/norwegian-t5-base-NCC-nb-nn | pere | 2021-09-23T16:19:35Z | 60 | 0 | transformers | ["transformers", "jax", "tensorboard", "seq2seq", "no", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be fine-tuned on a specific task before being used for anything.
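Loading the checkpoint for downstream fine-tuning might look like this minimal sketch (the repository ships Flax weights, so the `from_flax` flag is an assumption):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the Flax checkpoint into a PyTorch model for fine-tuning
# (from_flax=True is assumed to be needed for this repo).
model = T5ForConditionalGeneration.from_pretrained(
    "pere/norwegian-t5-base-NCC-nb-nn", from_flax=True
)
tokenizer = AutoTokenizer.from_pretrained("pere/norwegian-t5-base-NCC-nb-nn")
```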
The following settings were used in training:
```bash
./run_t5_mlm_flax_streaming.py \
--output_dir="./" \
--model_type="t5" \
--config_name="./" \
--tokenizer_name="./" \
--dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--learning_rate="8e-3" \
--warmup_steps="0" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_epochs="5" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="500" \
--num_train_steps="1000000" \
--num_eval_samples="5000" \
--save_steps="5000" \
--eval_steps="5000" \
--preprocessing_num_workers 96 \
--adafactor \
--push_to_hub
```
|
pere/nb-nn-translation | pere | 2021-09-23T16:19:21Z | 960 | 5 | transformers | ["transformers", "pytorch", "jax", "translation", "no", "dataset:oscar", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---
# 🇳🇴 Bokmål ⇔ Nynorsk 🇳🇴
Norwegian has two relatively similar written languages: Bokmål and Nynorsk. Historically, Nynorsk is a written norm based on dialects, curated by the linguist Ivar Aasen in the mid-to-late 1800s, whereas Bokmål is a gradual 'Norwegization' of written Danish.
The two written languages are considered equal, and citizens have a right to receive public service information in their primary and preferred language. Even though this right has been in place for a long time, only 5-10% of Norwegian texts are written in Nynorsk. Nynorsk is therefore a low-resource language within a low-resource language.
Apart from some word-list based engines, there are no working off-the-shelf machine-learning-based translation models. Translation between Bokmål and Nynorsk is not available in Google Translate.
## Demo
| | |
|---|---|
| Widget | Try the widget in the top right corner |
| Huggingface Spaces | [Spaces Demo](https://huggingface.co/spaces/NbAiLab/nb2nn) |
| | |
## Pretraining a T5-base
There is an [mt5](https://huggingface.co/google/mt5-base) that includes Norwegian. Unfortunately, only a very small part of this is Nynorsk: there is only around 1GB of Nynorsk text in mC4. Despite this, mT5 still reaches a BLEU score above 80. During the project we extracted all available Nynorsk text from the [Norwegian Colossal Corpus](https://github.com/NBAiLab/notram/blob/master/guides/corpus_v2_summary.md) at the National Library of Norway, and matched it (by material type, i.e. books, newspapers and so on) with an equal amount of Bokmål. The corpus collection is described [here](https://github.com/NBAiLab/notram/blob/master/guides/nb_nn_balanced_corpus.md) and the total size is 19GB.
## Finetuning - BLEU-SCORE 88.17 🎉
The central fine-tuning data of the project have been 200k translation units (TU), i.e. aligned pairs of sentences in the respective languages, extracted from textbooks of various subjects and from newspapers.
Fine-tuning for 10 epochs with a learning rate of 7e-4, a batch size of 32, and a max source and target length of 512 reached a SacreBLEU score of 88.03 during training and a test score of **88.17** after training.
## This is not a translator
We found that we could reach an almost identical BLEU score by training the model in both directions and letting it decide whether the input is Bokmål or Nynorsk. This way we can train one model instead of two. We call it a language switcher.
## Future work
The following Google Docs Add-on is currently pending approval.

## How to use the model
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-translation')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
|
pere/nb-nn-dev2 | pere | 2021-09-23T16:19:18Z | 2 | 0 | transformers | ["transformers", "pytorch", "jax", "translation", "no", "dataset:oscar", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---
# Norwegian T5 - Translation Bokmål Nynorsk - Development
## Description
This is the development version of the Bokmål-Nynorsk translator. If you want something stable, please use [this version](https://huggingface.co/pere/nb-nn-translation/) instead.
Here is an example of how to use the model from Python:
```python
# Import libraries
from transformers import T5ForConditionalGeneration, AutoTokenizer
model = T5ForConditionalGeneration.from_pretrained('pere/nb-nn-dev', from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('pere/nb-nn-dev')

# Encode the text
text = "Hun vil ikke gi bort sine personlige data."
inputs = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True)

# Decode and print the result
print(tokenizer.decode(outputs[0]))
```
Or if you like to use the pipeline instead
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-dev')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
|
osanseviero/corenlp_spanish | osanseviero | 2021-09-23T16:16:53Z | 0 | 0 | null | ["corenlp", "sp", "license:gpl", "region:us"] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- sp
license: gpl
---
# Core NLP model for sp
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
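From Python, one way to call a CoreNLP model is through Stanza's client; a minimal sketch, assuming a local CoreNLP installation (with the Spanish models) reachable via `CORENLP_HOME`:
```python
from stanza.server import CoreNLPClient

# Start a local CoreNLP server using the Spanish properties file.
with CoreNLPClient(annotators=["tokenize", "ssplit", "pos"], properties="spanish") as client:
    ann = client.annotate("El zorro marrón salta sobre el perro perezoso.")
    for sentence in ann.sentence:
        print([(token.word, token.pos) for token in sentence.token])
```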
|
osanseviero/corenlp_french | osanseviero | 2021-09-23T16:16:49Z | 0 | 0 | null | ["corenlp", "fr", "license:gpl", "region:us"] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- fr
license: gpl
---
# Core NLP model for fr
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/corenlp_english-extra | osanseviero | 2021-09-23T16:16:44Z | 0 | 0 | null | ["corenlp", "en", "license:gpl", "region:us"] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- en
license: gpl
---
# Core NLP model for en
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/corenlp_chinese | osanseviero | 2021-09-23T16:16:39Z | 0 | 1 | null | ["corenlp", "ch", "license:gpl", "region:us"] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- ch
license: gpl
---
# Core NLP model for ch
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
mpariente/DPRNNTasNet-ks2_WHAM_sepclean | mpariente | 2021-09-23T16:12:22Z | 252 | 9 | asteroid | ["asteroid", "pytorch", "audio", "DPRNNTasNet", "audio-to-audio", "dataset:wham", "dataset:sep_clean", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- DPRNNTasNet
- audio-to-audio
datasets:
- wham
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `mpariente/DPRNNTasNet-ks2_WHAM_sepclean`
Imported from [Zenodo](https://zenodo.org/record/3862942)
### Description:
This model was trained by Manuel Pariente
using the wham/DPRNN recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the WHAM! dataset.
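A minimal usage sketch via Asteroid's Hub integration (this snippet is not part of the original card, and the file name is illustrative):
```python
from asteroid.models import BaseModel

# Fetch the checkpoint from the Hugging Face Hub.
model = BaseModel.from_pretrained("mpariente/DPRNNTasNet-ks2_WHAM_sepclean")

# Separate an 8 kHz mixture; the estimated sources are saved next to the input file.
model.separate("mixture_8k.wav")
```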
### Training config:
```yaml
data:
mode: min
nondefault_nsrc: None
sample_rate: 8000
segment: 2.0
task: sep_clean
train_dir: data/wav8k/min/tr
valid_dir: data/wav8k/min/cv
filterbank:
kernel_size: 2
n_filters: 64
stride: 1
main_args:
exp_dir: exp/train_dprnn_new/
gpus: -1
help: None
masknet:
bidirectional: True
bn_chan: 128
chunk_size: 250
dropout: 0
hid_size: 128
hop_size: 125
in_chan: 64
mask_act: sigmoid
n_repeats: 6
n_src: 2
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1e-05
positional arguments:
training:
batch_size: 3
early_stop: True
epochs: 200
gradient_clipping: 5
half_lr: True
num_workers: 8
```
### Results:
```yaml
si_sdr: 19.316743490695334
si_sdr_imp: 19.317895273889842
sdr: 19.68085347190952
sdr_imp: 19.5298092932871
sir: 30.362213998701232
sir_imp: 30.21116982007881
sar: 20.15553251343315
sar_imp: -129.02091762351188
stoi: 0.97772664309074
stoi_imp: 0.23968091518217424
```
### License notice:
This work "DPRNNTasNet-ks2_WHAM_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"DPRNNTasNet-ks2_WHAM_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente.
|
mpariente/ConvTasNet_Libri3Mix_sepnoisy | mpariente | 2021-09-23T16:12:18Z | 17 | 0 | asteroid | ["asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:LibriMix", "dataset:sep_noisy", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- LibriMix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model
Imported from this Zenodo [model page](https://zenodo.org/record/4020529).
## Description:
This model was trained by Takhir Mirzaev using the Librimix/ConvTasNet recipe in Asteroid.
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
## Training config:
```yaml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 4
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
## Results:
```yaml
si_sdr: 6.824750632456865
si_sdr_imp: 11.234803761803752
sdr: 7.715799858488098
sdr_imp: 11.778681386239114
sir: 16.442141130818637
sir_imp: 19.527535070051055
sar: 8.757864265661263
sar_imp: -0.15657258049670303
stoi: 0.7854554136619554
stoi_imp: 0.22267957718163015
```
## License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by
[Vassil Panayotov](https://github.com/vdp),
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente.
|
julien-c/DPRNNTasNet-ks16_WHAM_sepclean | julien-c | 2021-09-23T16:04:27Z | 72 | 2 | asteroid | ["asteroid", "pytorch", "audio-to-audio", "audio", "audio-source-separation", "dataset:wham", "dataset:sep_clean", "arxiv:2005.04132", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:05Z |
---
tags:
- audio-to-audio
- asteroid
- audio
- audio-source-separation
datasets:
- wham
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `mpariente/DPRNNTasNet(ks=16)_WHAM!_sepclean`
♻️ Imported from https://zenodo.org/record/3903795#.X8pMBRNKjUI
This model was trained by Manuel Pariente using the wham/DPRNN recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the sep_clean task of the WHAM! dataset.
### Demo: How to use in Asteroid
```python
# coming soon
```
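Until the official demo lands, here is a minimal loading sketch (an assumption based on Asteroid's generic Hub API, not from the original card):
```python
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("julien-c/DPRNNTasNet-ks16_WHAM_sepclean")

# One second of dummy audio at the model's 8 kHz sample rate.
mixture = torch.randn(1, 8000)
est_sources = model(mixture)  # shape: (batch, n_src=2, time)
print(est_sources.shape)
```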
### Training config
- data:
- mode: min
- nondefault_nsrc: None
- sample_rate: 8000
- segment: 2.0
- task: sep_clean
- train_dir: data/wav8k/min/tr
- valid_dir: data/wav8k/min/cv
- filterbank:
- kernel_size: 16
- n_filters: 64
- stride: 8
- main_args:
- exp_dir: exp/train_dprnn_ks16/
- help: None
- masknet:
- bidirectional: True
- bn_chan: 128
- chunk_size: 100
- dropout: 0
- hid_size: 128
- hop_size: 50
- in_chan: 64
- mask_act: sigmoid
- n_repeats: 6
- n_src: 2
- out_chan: 64
- optim:
- lr: 0.001
- optimizer: adam
- weight_decay: 1e-05
- positional arguments:
- training:
- batch_size: 6
- early_stop: True
- epochs: 200
- gradient_clipping: 5
- half_lr: True
- num_workers: 6
#### Results
- `si_sdr`: 18.227683982688003
- `si_sdr_imp`: 18.22883576588251
- `sdr`: 18.617789605060587
- `sdr_imp`: 18.466745426438173
- `sir`: 29.22773720052717
- `sir_imp`: 29.07669302190474
- `sar`: 19.116352171914485
- `sar_imp`: -130.06009796503054
- `stoi`: 0.9722025377865715
- `stoi_imp`: 0.23415680987800583
### Citing Asteroid
```BibTex
@inproceedings{Pariente2020Asteroid,
title={Asteroid: the {PyTorch}-based audio source separation toolkit for researchers},
author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and
Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and
Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge
and Emmanuel Vincent},
year={2020},
booktitle={Proc. Interspeech},
}
```
Or on arXiv:
```bibtex
@misc{pariente2020asteroid,
title={Asteroid: the PyTorch-based audio source separation toolkit for researchers},
author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge and Emmanuel Vincent},
year={2020},
eprint={2005.04132},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
JorisCos/DPTNet_Libri1Mix_enhsingle_16k | JorisCos | 2021-09-23T15:49:20Z | 18 | 3 | asteroid | ["asteroid", "pytorch", "audio", "DPTNet", "audio-to-audio", "dataset:Libri1Mix", "dataset:enh_single", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- DPTNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DPTNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
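As a rough sketch, enhancement with this checkpoint could look like the following (file names are illustrative and `BaseModel.from_pretrained` is assumed from Asteroid's Hub integration, not from the original card):
```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/DPTNet_Libri1Mix_enhsingle_16k")

# Enhance a 16 kHz mono recording.
mixture, rate = sf.read("noisy_speech_16k.wav", dtype="float32")
with torch.no_grad():
    estimate = model(torch.from_numpy(mixture).unsqueeze(0))
sf.write("enhanced_speech_16k.wav", estimate.squeeze().cpu().numpy(), rate)
```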
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 16
n_filters: 64
stride: 8
masknet:
bidirectional: true
chunk_size: 100
dropout: 0
ff_activation: relu
ff_hid: 256
hop_size: 50
in_chan: 64
mask_act: sigmoid
n_repeats: 2
n_src: 1
norm_type: gLN
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
scheduler:
d_model: 64
steps_per_epoch: 10000
training:
batch_size: 4
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.829670037349064
si_sdr_imp: 11.379888731489366
sdr: 15.395712644737149
sdr_imp: 11.893049845524112
sir: Infinity
sir_imp: NaN
sar: 15.395712644737149
sar_imp: 11.893049845524112
stoi: 0.9301948391058859
stoi_imp: 0.13427501556534832
```
License notice:
This work "DPTNet_Libri1Mix_enhsingle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPTNet_Libri1Mix_enhsingle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k | JorisCos | 2021-09-23T15:49:18Z | 28 | 1 | asteroid | ["asteroid", "pytorch", "audio", "DPRNNTasNet", "audio-to-audio", "dataset:Libri1Mix", "dataset:enh_single", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- DPRNNTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DPRNNTasNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 1
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 2
n_filters: 64
stride: 1
masknet:
bidirectional: true
bn_chan: 128
chunk_size: 250
dropout: 0
hid_size: 128
hop_size: 125
in_chan: 64
mask_act: sigmoid
n_repeats: 6
n_src: 1
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.7228101708889
si_sdr_imp: 11.2730288650292
sdr: 15.35661405197161
sdr_imp: 11.853951252758595
sir: Infinity
sir_imp: NaN
sar: 15.35661405197161
sar_imp: 11.853951252758595
stoi: 0.9300461826351578
stoi_imp: 0.13412635909461715
```
License notice:
This work "DPRNNTasNet_Libri1Mix_enhsingle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPRNNTasNet_Libri1Mix_enhsingle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/DCUNet_Libri1Mix_enhsingle_16k | JorisCos | 2021-09-23T15:49:15Z | 631 | 5 | asteroid | ["asteroid", "pytorch", "audio", "DCUNet", "audio-to-audio", "dataset:Libri1Mix", "dataset:enh_single", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- DCUNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DCUNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_n_filters: 1024
stft_kernel_size: 1024
stft_stride: 256
masknet:
architecture: Large-DCUNet-20
fix_length_mode: pad
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.154035391645971
si_sdr_imp: 9.704254085786271
sdr: 13.568058873121435
sdr_imp: 10.065396073908367
sar: 13.568058873121435
sar_imp: 10.065396073908367
stoi: 0.9199373340235417
stoi_imp: 0.12401751048300132
```
License notice:
This work "DCUNet_Libri1Mix_enhsingle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCUNet_Libri1Mix_enhsingle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/DCCRNet_Libri1Mix_enhsingle_16k | JorisCos | 2021-09-23T15:49:13Z | 1,316 | 16 | asteroid | ["asteroid", "pytorch", "audio", "DCCRNet", "audio-to-audio", "speech-enhancement", "dataset:Libri1Mix", "dataset:enh_single", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- DCCRNet
- audio-to-audio
- speech-enhancement
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DCCRNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_kernel_size: 400
stft_n_filters: 512
stft_stride: 100
masknet:
architecture: DCCRN-CL
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 12
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.329767398333798
si_sdr_imp: 9.879986092474098
sdr: 13.87279932997016
sdr_imp: 10.370136530757103
sir: Infinity
sir_imp: NaN
sar: 13.87279932997016
sar_imp: 10.370136530757103
stoi: 0.9140907015623948
stoi_imp: 0.11817087802185405
```
License notice:
This work "DCCRNet_Libri1Mix_enhsingle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCCRNet_Libri1Mix_enhsingle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k | JorisCos | 2021-09-23T15:49:10Z | 15 | 2 | asteroid | ["asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri3Mix", "dataset:sep_noisy", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.978836560066222
si_sdr_imp: 10.388889689413096
sdr: 6.8651365291740225
sdr_imp: 10.928018056925016
sir: 14.997089638783114
sir_imp: 18.08248357801549
sar: 8.127504792061933
sar_imp: -0.7869320540959925
stoi: 0.7669414686111115
stoi_imp: 0.20416563213078837
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri3Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k | JorisCos | 2021-09-23T15:49:08Z | 43 | 1 | asteroid | ["asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri3Mix", "dataset:sep_noisy", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.926151147554517
si_sdr_imp: 10.282912158535625
sdr: 6.700975236867358
sdr_imp: 10.882972447337504
sir: 15.364110064569388
sir_imp: 18.574476587171688
sar: 7.918866830474568
sar_imp: -0.9638973409971135
stoi: 0.7713777027310713
stoi_imp: 0.2078696167973911
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/ConvTasNet_Libri3Mix_sepclean_8k | JorisCos | 2021-09-23T15:49:06Z | 27 | 0 | asteroid | ["asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri3Mix", "dataset:sep_clean", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yaml
si_sdr: 8.581797049575108
si_sdr_imp: 11.977037288467368
sdr: 9.305885208641385
sdr_imp: 12.3943409734845
sir: 16.42030534048559
sir_imp: 19.508759460400984
sar: 10.641943911079238
sar_imp: -56.4345187842095
stoi: 0.8365148408724333
stoi_imp: 0.24401766199806396
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris.
|
JorisCos/ConvTasNet_Libri3Mix_sepclean_16k | JorisCos | 2021-09-23T15:49:03Z | 54 | 0 | asteroid | ["asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri3Mix", "dataset:sep_clean", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yaml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yaml
si_sdr: 8.932601610824145
si_sdr_imp: 12.299341066588594
sdr: 9.557260814240447
sdr_imp: 12.76957128385349
sir: 17.387646884037455
sir_imp: 20.599955591768484
sar: 10.686885056960504
sar_imp: -55.8894643263213
stoi: 0.8481258332025354
stoi_imp: 0.25528367853750356
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris.
|
JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k | JorisCos | 2021-09-23T15:49:01Z | 10 | 1 | asteroid | ["asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri2Mix", "dataset:sep_noisy", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k`
Imported from [Zenodo](https://zenodo.org/record/3874420#.X9I6NcLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
Results:
On Libri2Mix min test set :
```yml
si_sdr: 9.944424856077259
si_sdr_imp: 11.939395359731192
sdr: 10.701526190782072
sdr_imp: 12.481757547845662
sir: 22.633644975545575
sir_imp: 22.45666740833025
sar: 11.131644100944868
sar_imp: 4.248489589311784
stoi: 0.852048619949357
stoi_imp: 0.2071994899565506
```
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/ConvTasNet_Libri2Mix_sepclean_8k | JorisCos | 2021-09-23T15:48:56Z | 54 | 1 | asteroid | ["asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri2Mix", "dataset:sep_clean", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_8k`
Imported from [Zenodo](https://zenodo.org/record/3873572#.X9M69cLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 2
```
Results:
On Libri2Mix min test set :
```yaml
si_sdr: 14.764543634468069
si_sdr_imp: 14.764029375607246
sdr: 15.29337970745095
sdr_imp: 15.114146605113111
sir: 24.092904661115366
sir_imp: 23.913669683141528
sar: 16.06055906916849
sar_imp: -51.980784441287454
stoi: 0.9311142440593033
stoi_imp: 0.21817376142710482
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris.
|
JorisCos/ConvTasNet_Libri2Mix_sepclean_16k | JorisCos | 2021-09-23T15:48:54Z | 2,594 | 2 | asteroid | ["asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri2Mix", "dataset:sep_clean", "license:cc-by-sa-4.0", "region:us"] | audio-to-audio | 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri2Mix min test set :
```yaml
si_sdr: 15.243671356901526
si_sdr_imp: 15.243034178473609
sdr: 15.668108919568112
sdr_imp: 15.578229918028036
sir: 25.295100756629957
sir_imp: 25.205219921301754
sar: 16.307682590197313
sar_imp: -51.64989963759405
stoi: 0.9394951175291422
stoi_imp: 0.22640192740016568
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris.
|
hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2 | hiroshi-matsuda-rit | 2021-09-23T14:49:50Z | 12 | 0 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "ja", "dataset:wikipedia", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
---
# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)
This pretrained model is almost the same as [cl-tohoku/bert-base-japanese-char-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2) but does not need `fugashi` or `unidic_lite`.
The only difference is the `word_tokenizer_type` property in `tokenizer_config.json` (set to `basic` instead of `mecab`).
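A minimal loading sketch (hedged; standard `transformers` auto classes):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Loads without fugashi/unidic_lite because word_tokenizer_type is set to "basic".
tokenizer = AutoTokenizer.from_pretrained("hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2")
model = AutoModelForMaskedLM.from_pretrained("hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2")
```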
|
flax-community/nordic-roberta-wiki
|
flax-community
| 2021-09-23T13:53:50Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"feature-extraction",
"swedish",
"fill-mask",
"sv",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: sv
license: cc-by-4.0
tags:
- swedish
- roberta
pipeline_tag: fill-mask
widget:
- text: Meningen med livet är <mask>.
---
# Nordic Roberta Wikipedia
## Description
Nordic RoBERTa model trained on the Swedish, Danish, and Norwegian Wikipedia.
## Evaluation
Evaluation on Named Entity Recognition in Danish.
I fine-tuned each model for 3 epochs on DaNE, repeated this 5 times for each model, and calculated 95% confidence intervals for the means (a sketch of the interval computation is given after the sentiment results below). Here are the results:
xlm-roberta-base : 88.01 +- 0.43
flax-community/nordic-roberta-wiki: 85.75 +- 0.69 (this model)
Maltehb/danish-bert-botxo: 85.38 +- 0.55
flax-community/roberta-base-danish: 80.14 +- 1.47
flax-community/roberta-base-scandinavian : 78.03 +- 3.02
Maltehb/-l-ctra-danish-electra-small-cased: 57.87 +- 3.19
NbAiLab/nb-bert-base : 30.24 +- 1.21
Randomly initialised RoBERTa model: 19.79 +- 2.00
Evaluation on Sentiment Analysis in Danish.
Here are the results on the test set, where each model has been trained 5 times, and the “+-” refers to a 95% confidence interval of the mean score:
Maltehb/danish-bert-botxo: 65.19 +- 0.53
NbAiLab/nb-bert-base : 63.80 +- 0.77
xlm-roberta-base : 63.55 +- 1.59
flax-community/nordic-roberta-wiki : 56.46 +- 1.77
flax-community/roberta-base-danish : 54.73 +- 8.96
flax-community/roberta-base-scandinavian : 44.28 +- 9.21
Maltehb/-l-ctra-danish-electra-small-cased : 47.78 +- 12.65
Randomly initialised RoBERTa model: 36.96 +- 1.02
Maltehb/roberta-base-scandinavian : 33.65 +- 8.32
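The “+-” half-widths above can be reproduced with a standard t-based confidence interval for the mean; a minimal sketch (the five scores below are placeholders, not the actual run results):
```python
import numpy as np
from scipy import stats

# Placeholder: five scores from repeated fine-tuning runs of one model.
scores = np.array([85.2, 86.1, 85.5, 86.0, 85.9])

# 95% CI half-width for the mean: t_{0.975, n-1} * standard error of the mean.
half_width = stats.t.ppf(0.975, df=len(scores) - 1) * stats.sem(scores)
print(f"{scores.mean():.2f} +- {half_width:.2f}")
```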
## Model series
This model is part of a series of models trained on TPU with Flax/JAX during the Hugging Face Flax/JAX challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
|
csae8092/de_RTA_NER
|
csae8092
| 2021-09-23T13:46:37Z | 6 | 0 |
spacy
|
[
"spacy",
"token-classification",
"de",
"license:cc-by-nc-4.0",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- de
license: cc-by-nc-4.0
model-index:
- name: de_RTA_NER
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8630136986
- name: NER Recall
type: recall
value: 0.8743253662
- name: NER F Score
type: f_score
value: 0.8686327078
---
Regensburger Reichstag von 1576
| Feature | Description |
| --- | --- |
| **Name** | `de_RTA_NER` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.0,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `https://creativecommons.org/licenses/by-nc/4.0/` |
| **Author** | [n/a](https://reichstagsakten-1576.uni-graz.at) |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DATE`, `LOC`, `PER`, `TIME` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 86.86 |
| `ENTS_P` | 86.30 |
| `ENTS_R` | 87.43 |
| `TOK2VEC_LOSS` | 43588.74 |
| `NER_LOSS` | 95573.96 |
|
tohoku-nlp/bert-large-japanese
|
tohoku-nlp
| 2021-09-23T13:45:41Z | 1,246 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---
# BERT large Japanese (unidic-lite with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
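For reference, a minimal fill-mask sketch using the widget example above (a hedged usage illustration; the Japanese word tokenizer requires `fugashi` and `unidic-lite`):
```python
from transformers import pipeline

# pip install fugashi unidic-lite  (needed by the MeCab-based word tokenizer)
unmasker = pipeline("fill-mask", model="tohoku-nlp/bert-large-japanese")
print(unmasker("東北大学で[MASK]の研究をしています。"))
```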
## Model architecture
The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
tohoku-nlp/bert-large-japanese-char
|
tohoku-nlp
| 2021-09-23T13:45:39Z | 15 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---
# BERT large Japanese (character-level tokenization with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.
The vocabulary size is 6144.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
tohoku-nlp/bert-base-japanese-char-v2
|
tohoku-nlp
| 2021-09-23T13:45:24Z | 136,559 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---
# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.
The vocabulary size is 6144.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
birgermoell/roberta-swedish-scandi
|
birgermoell
| 2021-09-23T13:42:48Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"feature-extraction",
"translate",
"sv",
"dataset:mc4",
"license:cc-by-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: sv
license: cc-by-4.0
tags:
- translate
datasets:
- mc4
widget:
- text: Meningen med livet är <mask>
---
# Svensk Roberta
## Description
Swedish RoBERTa model trained on the mC4 dataset. The model's performance still needs to be assessed.
## Model series
This model is part of a series of models trained on TPU with Flax/JAX during the Hugging Face Flax/JAX challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
|
arijitx/wav2vec2-large-xlsr-bengali
|
arijitx
| 2021-09-23T13:07:14Z | 118 | 6 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"audio",
"speech",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: Bengali
datasets:
- OpenSLR
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: cc-by-sa-4.0
model-index:
- name: XLSR Wav2Vec2 Bengali by Arijit
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: ben
metrics:
- name: Test WER
type: wer
value: 32.45
---
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using a subset of 40,000 utterances from the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). WER was tested on ~4,200 utterances held out from training.
When using this model, make sure that your speech input is sampled at 16kHz.
The training script can be found at: train.py
Data preparation notebook: https://colab.research.google.com/drive/1JMlZPU-DrezXjZ2t7sOVqn7CJjZhdK2q?usp=sharing
Inference notebook: https://colab.research.google.com/drive/1uKC2cK9JfUPDTUHbrNdOYqKtNozhxqgZ?usp=sharing
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
# model = model.to("cuda")
def speech_file_to_array_fn(path):
    # Load the audio and resample it to the 16 kHz rate the model expects.
    speech_array, sampling_rate = torchaudio.load(path)
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    speech = resampler(speech_array).squeeze().numpy()
    return speech
speech_array = speech_file_to_array_fn("test_file.wav")
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
preds = processor.batch_decode(predicted_ids)[0]
print(preds.replace("[PAD]",""))
```
**Test Result**: WER on ~4,200 utterances: 32.45%
|
hiiamsid/BETO_es_binary_classification
|
hiiamsid
| 2021-09-23T11:16:37Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"es",
"ticket classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- es
tags:
- es
- ticket classification
license: "apache-2.0"
datasets:
- self made to classify whether text is related to technology or not.
metrics:
- fscore
- accuracy
- precision
- recall
---
# BETO (cased)
This model was built using PyTorch.
## Model description
Input for the model: any Spanish text
Output for the model: sentiment label (0 - negative, 1 - positive, i.e. technology-related)
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("hiiamsid/BETO_es_binary_classification")
model = AutoModelForSequenceClassification.from_pretrained("hiiamsid/BETO_es_binary_classification")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
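To turn the raw output into the 0/1 label described above, a small post-processing step can be appended (a sketch assuming the standard sequence-classification head):
```python
import torch

probs = torch.softmax(output.logits, dim=-1)
label = int(probs.argmax(dim=-1))  # 0 - negative, 1 - positive (technology-related)
print(label, probs.tolist())
```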
## Training procedure
The model was fine-tuned on the dataset starting from [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased).
|
vishalz/paraphrase_model
|
vishalz
| 2021-09-23T10:00:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
Pegasus paraphraser model based on <a href="https://huggingface.co/tuner007/pegasus_paraphrase" target="_blank">tuner007/pegasus_paraphrase</a>.
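A hedged usage sketch following the standard Pegasus generation pattern of the referenced model (generation parameters are illustrative):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "vishalz/paraphrase_model"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = "The ultimate test of your knowledge is your capacity to convey it to another."
batch = tokenizer([text], truncation=True, padding="longest", return_tensors="pt")
outputs = model.generate(**batch, max_length=60, num_beams=5, num_return_sequences=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```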
|
adityavithaldas/distilbert-base-uncased-finetuned-ner
|
adityavithaldas
| 2021-09-22T19:33:37Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
LysandreJik/testing
|
LysandreJik
| 2021-09-22T19:19:12Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: testing
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6813725490196079
- name: F1
type: f1
value: 0.8104956268221574
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6644
- Accuracy: 0.6814
- F1: 0.8105
- Combined Score: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.11.0
- Tokenizers 0.10.3
|
huggingtweets/rishiosaur
|
huggingtweets
| 2021-09-22T18:19:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/rishiosaur/1632334774825/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1429632040673103878/I5Xe_evK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rishi ⠕</div>
<div style="text-align: center; font-size: 14px;">@rishiosaur</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from rishi ⠕.
| Data | rishi ⠕ |
| --- | --- |
| Tweets downloaded | 1333 |
| Retweets | 523 |
| Short tweets | 162 |
| Tweets kept | 648 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d049rbc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rishiosaur's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2n4pe9ce) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2n4pe9ce/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rishiosaur')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
flax-community/RoBERTa-large-finnish
|
flax-community
| 2021-09-22T17:31:14Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"finnish",
"fi",
"dataset:mc4",
"arxiv:1907.11692",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- fi
license: apache-2.0
tags:
- finnish
- roberta
datasets:
- mc4
widget:
- text: "Moikka olen <mask> kielimalli."
---
# NOTE: We have trained newer and better Finnish RoBERTa large model which can be found from different repository: [https://huggingface.co/Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish). Our future Finnish models will be available at the [Finnish-NLP](https://huggingface.co/Finnish-NLP) Hugging Face organization
# RoBERTa large model for Finnish
Pretrained model on Finnish language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between finnish and Finnish.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='flax-community/RoBERTa-large-finnish')
>>> unmasker("Moikka olen <mask> kielimalli.")
[{'sequence': 'Moikka olen uusi kielimalli.',
'score': 0.05129234120249748,
'token': 1825,
'token_str': ' uusi'},
{'sequence': 'Moikka olen toinen kielimalli.',
'score': 0.03112379088997841,
'token': 2194,
'token_str': ' toinen'},
{'sequence': 'Moikka olen myös kielimalli.',
'score': 0.025534993037581444,
'token': 491,
'token_str': ' myös'},
{'sequence': 'Moikka olen ensimmäinen kielimalli.',
'score': 0.020146571099758148,
'token': 2832,
'token_str': ' ensimmäinen'},
{'sequence': 'Moikka olen vapaa kielimalli.',
'score': 0.018089469522237778,
'token': 2257,
'token_str': ' vapaa'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('flax-community/RoBERTa-large-finnish')
model = RobertaModel.from_pretrained('flax-community/RoBERTa-large-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('flax-community/RoBERTa-large-finnish')
model = TFRobertaModel.from_pretrained('flax-community/RoBERTa-large-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training data
This Finnish RoBERTa model was pretrained on the combination of two datasets:
- [mc4](https://huggingface.co/datasets/mc4), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 51GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
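As an illustration, here is a minimal PyTorch sketch of this 80/10/10 scheme, mirroring the standard masked-LM data-collator logic (special-token handling omitted for brevity):
```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """80/10/10 masking over a batch of token ids (special tokens ignored for brevity)."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()

    # Choose 15% of positions as prediction targets.
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked positions

    # 80% of targets: replace with the <mask> token.
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = tokenizer.mask_token_id

    # 10% of targets: replace with a random token (half of the remaining 20%).
    random_idx = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    input_ids[random_idx] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[random_idx]

    # The remaining 10% of targets keep their original token.
    return input_ids, labels
```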
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Hugging Face JAX/Flax community week event, for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used is Adafactor with a learning rate of 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.
## Evaluation results
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) and to our newer [Finnish RoBERTa-large](https://huggingface.co/Finnish-NLP/roberta-large-finnish) trained with a larger dataset:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|----------------------------------------|----------|---------------------|---------------------|----------------------|
|flax-community/RoBERTa-large-finnish |87.72 |94.42 |95.06 |73.67 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |**94.90** |**95.49** |**76.07** |
To conclude, this model slightly loses to our newer [Finnish RoBERTa-large](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model trained with a larger dataset and also slightly loses to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model.
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
- Tommi Vehviläinen [Hugging Face profile](https://huggingface.co/Tommi)
Feel free to contact us for more details 🤗
|
databuzzword/JointBERT-snips
|
databuzzword
| 2021-09-22T14:02:14Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
https://github.com/monologg/JointBERT
|
Haotian/distilgpt2-finetuned-wikitext2
|
Haotian
| 2021-09-22T12:24:29Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
Tsurakawi/erererere
|
Tsurakawi
| 2021-09-22T11:41:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
The older generation has a vulnerability, so they need to be monitored and taken care of. A large number of people, young and old, play really responsibly, but such a pastime can turn into a big problem. Many authoritative blogs and news portals of the gambling world like QYTO share statistics about this area and recommend only trusted casinos that cooperate with health organizations.
|
eliza-dukim/bert-base-finetuned-sts
|
eliza-dukim
| 2021-09-22T11:01:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
- f1
model-index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.8756147003619346
- name: F1
type: f1
value: 0.8416666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4115
- Pearsonr: 0.8756
- F1: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7836 | 1.0 | 365 | 0.5507 | 0.8435 | 0.8121 |
| 0.1564 | 2.0 | 730 | 0.4396 | 0.8495 | 0.8136 |
| 0.0989 | 3.0 | 1095 | 0.4115 | 0.8756 | 0.8417 |
| 0.0682 | 4.0 | 1460 | 0.4466 | 0.8746 | 0.8449 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
cfisicaro/distilbert-base-uncased-finetuned-ner
|
cfisicaro
| 2021-09-22T10:25:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9281908990011098
- name: Recall
type: recall
value: 0.9355632621098557
- name: F1
type: f1
value: 0.9318624993035824
- name: Accuracy
type: accuracy
value: 0.9837641190207635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0629
- Precision: 0.9282
- Recall: 0.9356
- F1: 0.9319
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2406 | 1.0 | 878 | 0.0721 | 0.9072 | 0.9172 | 0.9122 | 0.9801 |
| 0.0529 | 2.0 | 1756 | 0.0637 | 0.9166 | 0.9318 | 0.9241 | 0.9826 |
| 0.0315 | 3.0 | 2634 | 0.0629 | 0.9282 | 0.9356 | 0.9319 | 0.9838 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
imthanhlv/t5vi
|
imthanhlv
| 2021-09-22T09:57:47Z | 13 | 1 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
# T5 Vietnamese pretrained on a news corpus
|
castorini/ance-dpr-context-multi
|
castorini
| 2021-09-22T09:41:18Z | 110 | 2 |
transformers
|
[
"transformers",
"pytorch",
"dpr",
"arxiv:2007.00808",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
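A hedged encoding sketch using the DPR classes the checkpoint is tagged with (the exact class mapping is an assumption; see the Pyserini docs above for the supported workflow):
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

tokenizer = DPRContextEncoderTokenizer.from_pretrained("castorini/ance-dpr-context-multi")
model = DPRContextEncoder.from_pretrained("castorini/ance-dpr-context-multi")

inputs = tokenizer("ANCE trains dense retrievers with hard negatives.", return_tensors="pt")
embedding = model(**inputs).pooler_output  # one dense vector per passage
```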
|
sadakmed/distiluse-base-multilingual-cased-v2
|
sadakmed
| 2021-09-22T09:37:21Z | 8 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"DistilBert",
"Universal Sentence Encoder",
"sentence-embeddings",
"sentence-similarity",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language: multilingual
tags:
- DistilBert
- Universal Sentence Encoder
- sentence-embeddings
- sentence-transformers
- sentence-similarity
license: apache-2.0
---
While the v1 model supports 15 languages, this version supports 50+ languages. However, performance on the 15 languages covered by v1 is reported to be a bit lower.
Note that sentence-transformers (ST) adds two extra layers (Pooling, Linear) that cannot be saved in any predefined Hugging Face model class.
|
sadakmed/distiluse-base-multilingual-cased-v1
|
sadakmed
| 2021-09-22T09:37:18Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"DistilBert",
"Universal Sentence Encoder",
"sentence-embeddings",
"sentence-similarity",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language: multilingual
tags:
- DistilBert
- Universal Sentence Encoder
- sentence-embeddings
- sentence-transformers
- sentence-similarity
license: apache-2.0
---
Knowledge-distilled version of the multilingual Universal Sentence Encoder. Supports 15 languages: Arabic, Chinese, Dutch, English, French, German, Italian, Korean, Polish, Portuguese, Russian, Spanish, Turkish.
This model is saved from 'distiluse-base-multilingual-cased-v1' in `sentence-transformers`, to be used directly from `transformers`.
Note that sentence-transformers (ST) adds two extra layers (Pooling, Linear) that cannot be saved in any predefined Hugging Face model class.
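For the full pipeline including the Pooling and Linear layers, the original `sentence-transformers` checkpoint can be used directly; a short sketch:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("distiluse-base-multilingual-cased-v1")
embeddings = model.encode(["Hello world", "Bonjour le monde"])
print(embeddings.shape)  # (2, 512): the Linear layer projects down to 512 dimensions
```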
|
pvl/labse_bert
|
pvl
| 2021-09-22T09:35:24Z | 3,062 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"embeddings",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail:
tags:
- bert
- embeddings
license: apache-2.0
---
# LABSE BERT
## Model description
Model for "Language-agnostic BERT Sentence Embedding" paper from Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, Wei Wang. Model available in [TensorFlow Hub](https://tfhub.dev/google/LaBSE/1).
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
import torch
# from sentence-transformers
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
tokenizer = AutoTokenizer.from_pretrained("pvl/labse_bert", do_lower_case=False)
model = AutoModel.from_pretrained("pvl/labse_bert")
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
with torch.no_grad():
model_output = model(**encoded_input)
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
|
petabyte/unang_mang_bert
|
petabyte
| 2021-09-22T09:33:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"Tagalog",
"Mang Bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- Tagalog
thumbnail:
tags:
- Tagalog
- Mang Bert
license: apache-2.0
datasets:
- OSCAR tl
---
# Mang Bert
## Model description
Fine-tuned RoBERTa model using `RobertaForMaskedLM`
on a Tagalog dataset from OSCAR (tl).
## Training data
458,206 texts from the OSCAR tl dataset.
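A minimal, hedged loading sketch (the repo is tagged for feature extraction, so only the transformer body is used here; the Tagalog sentence is a placeholder):
```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("petabyte/unang_mang_bert")
model = AutoModel.from_pretrained("petabyte/unang_mang_bert")

inputs = tokenizer("Magandang araw sa inyong lahat!", return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # contextual token embeddings
```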
|
ozcangundes/mt5-small-turkish-summarization
|
ozcangundes
| 2021-09-22T09:31:27Z | 299 | 19 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"mt5",
"text2text-generation",
"summarization",
"tr",
"dataset:MLSUM",
"arxiv:2004.14900",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- MLSUM
pipeline_tag: summarization
license: mit
---
# mT5-small based Turkish Summarization System
[Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [MLSUM Turkish news dataset](https://github.com/recitalAI/MLSUM) for **Summarization** downstream task by using Pytorch Lightning.⚡
The mT5-small model has 300 million parameters and a size of about 1.2 GB, so fine-tuning it takes a significant amount of time. The model was trained for 10 epochs with a batch size of 8 and a learning rate of 10e-4, which took almost 4 hours. The maximum news length is set to 784 tokens and the maximum summary length to 64.
**Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is useable on a downstream task.
## Dataset
MLSUM dataset has more than 250K Turkish news with their related summaries. Since the mT5 model size and vocabulary is so large, 20K data is used for training and 4K data is used for validation. For more information about the dataset, please read this [great paper](https://arxiv.org/abs/2004.14900).
## Usage 🚀
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-summarization")
def generate_summary(main_news):
source_encoding=tokenizer(
main_news,
max_length=784,
padding="max_length",
truncation=True,
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt")
generated_ids=model.generate(
input_ids=source_encoding["input_ids"],
attention_mask=source_encoding["attention_mask"],
num_beams=2,
max_length=120,
repetition_penalty=2.5,
length_penalty=2.0,
early_stopping=True,
use_cache=True
)
preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for gen_id in generated_ids]
return "".join(preds)
```
### Example 1
```python
main_news= "Final etabının üçüncü karşılaşması 29 Nisan Pazartesi günü saat 18.00 ’ de Burhan Felek
Voleybol Salonu ’ nda oynanacak . Sezonu FIVB Kulüpler Dünya Şampiyonluğu ile açan ve CEV
Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı ,
Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı VakıfBank
Spor Sarayı'nda 16-25 , 25-10 , 25-18 ve 25-17'lik setlerle 3-1 mağlup ederek seride durumu
1-1 ' e getirdi . İlk setini 25-16 kaybettiği karşılaşmanın ikinci setinde etkili servisler
kullanan sarı-siyahlılar , teknik molasına 12-5 önde girdiği seti 25-10 almayı başardı .
Etkili servis performansını üçüncü sette de sürdüren VakıfBank , teknik molasına 12-5 önde
girdiği seti 25-18 alarak , karşılaşmada 2-1 öne geçti . Dördüncü sette rakibinin geri dönüşüne
izin vermeyen VakıfBank , seti 25-17 , maçı da 3-1 kazanarak seride durumu eşitledi."
generate_summary(main_news)
#original summary -> "Vestel Venus Sultanlar Ligi final etabı ikinci karşılaşmasında VakıfBank
kendi sahasında Eczacıbaşı VitrA'yı 3-1 mağlup etti ve seride durumu 1-1 ' e getirdi ."
#output -> "CEV Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı,
Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı 3-1 mağlup
ederek seride durumu 1-1'e getirdi."
```
### Example 2
```python
main_news="2023'te yerli tank motoru : Bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını
ifade eden Öztürk , şu değerlendirmelerde bulundu : `` Bin 500 beygirlik , şanzımanıyla beraber
motoru yerlileştirmeye çalışıyoruz . Bu da bir aksilik çıkmazsa ilk tankımızın üzerine
2023'te koyacağız . Bundan sonra hiçbir ülkeye bağımlılığımız kalmadan bu araçları üretmeye
devam edeceğiz . Sorumluluğumuzun ağır olduğunu biliyoruz . Ülkemize hizmet etmeye çalışıyoruz .
Bunu daha da ileriye götürmek için elimizden gelen çabayı sarf ediyoruz . Ama bu tek başınıza
yapılan bir operasyon değil . Türkiye'deki yerli firmalarla beraber ortaklaşa bu işi yürütmeye çalışıyoruz."
generate_summary(main_news)
#output -> "TÜRKİYE'de bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını belirten Öztürk,
`` Bin 500 beygirlik, şanzımanıyla beraber motoru yerlileştirmeye çalışıyoruz. Bu da bir
aksilik çıkmazsa ilk tankımızın üzerine 2023'te koyacağız.'' dedi."
```
Created by Özcan Gündeş ✌️
---
Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a>
Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a>
Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a>
Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
|
ozcangundes/mt5-small-turkish-squad
|
ozcangundes
| 2021-09-22T09:31:24Z | 33 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"mt5",
"text2text-generation",
"question-answering",
"tr",
"dataset:TQUAD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- TQUAD
pipeline_tag: question-answering
license: mit
---
# mT5-small based Turkish Question Answering System
[Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [Turkish Question Answering dataset](https://github.com/TQuad/turkish-nlp-qa-dataset) for **Q&A** downstream task by using Pytorch Lightning.⚡
The notebook that includes the whole fine-tuning process will be shared on my GitHub page later. The mT5-small model has 300 million parameters and a size of about 1.2 GB, so fine-tuning it takes a significant amount of time.
**Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is useable on a downstream task.
## Usage 🚀
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-squad")
model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-squad")
def get_answer(question,context):
source_encoding=tokenizer(
question,
context,
max_length=512,
padding="max_length",
truncation="only_second",
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt")
generated_ids=model.generate(
input_ids=source_encoding["input_ids"],
attention_mask=source_encoding["attention_mask"],
max_length=120)
preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids]
return "".join(preds)
```
### Example 1
```python
question={
"context":"Pardus, Google'ın öğrencilerle staj ve kendini geliştirme imkânı ile \
tasarılara geliştirici ve katkı sağlamayı amaçladığı açık kaynak tasarısı \
Google Summer of Code'a 2008 ve 2009 olmak üzere iki kere katılmıştır. Bu organizasyona \
ilk katılan Türk tasarısı Pardus olmuştur. Bazı dönemlerde Pardus hakkındaki gelişmeleri \
halka duyurmak ve tasarıya olan ilgiyi arttırmak amacıyla CeBIT Eurasia Bilişim Fuarı'na \
katılım sağlanmaktadır. 2006, 2008, 2009, 2010, 2011,2013 ve 2014 bu fuarlarda Pardus \
standı kurulmuştur.2014 yılında ICT SummitT Now Bilişim Zirvesi'nde yer alınmıştır. \
BİLİŞİM’2014 TBD 31. Ulusal Bilişim Kurultayı ve CITEX’2014 Ankara Bilişim Fuarı’na \
Gümüş sponsorluk ile katkıda bulunulmuş ve Pardus standı kurulmuştur.",
"question":"Pardus’un Google Summer of Code'a katıldığı yıllar nelerdir?"
}
get_answer(question["question"],question["context"])
```
> 2008 ve 2009
### Example 2
```python
question2={
"context":"II. Bayezid ve I. Selim devrinde yaşadı ve iki defa hekimbaşılık yaptı. \
Böbrek ve idrar kesesindeki taş oluşumunun nedenlerini ve tedavisini incelediği \
eseriyle tanınır. Adı kaynaklarda Ahmed ve Mahmud olarak da geçer. Ahi Çelebi \
olarak ün yapmıştır. Babası Tabib Mevlana Kemal ile birlikte 1463’te İstanbul’a yerleşti. \
Mevlana Kemal, devrin ünlü hekimlerindendir. Tebriz ya da Şirvan asıllı olduğu çeşitli \
kaynaklarda belirtilir. Ahi Mehmet Çelebi, hekimliği daha çok babasından öğrendi. Onun \
ölümünden sonra devrin önemli hekimleri Kutbüddin ile Altunîzâde’den ders alıp kısa zamanda \
mesleğini ilerletti. Hekimlik becerisinin yanı sıra kuramsal bilgisiyle de kendisini \
kabul ettirerek önce Fâtih Darüşşifasına hekim, sonra da başhekim oldu. II. Bayezid’in \
güvenini kazanarak mutfak eminliğine, ardından da Hekimbaşılığa getirildi. Dört buçuk \
yıl bu görevde kalan Ahî Çelebi, II. Bayezid’in ölümü üzerine geleneğe uyularak azledildi. \
Bir müddet sonra Yavuz onu tekrar Hekimbaşılığa getirdi ve Mısır seferine beraberinde \
götürdü. I. Selim'in ölümünden sonra Hekimbaşılık tan tekrar azledildi. Kaynakların \
belirttiğine göre, yaşı doksanı geçmiş olduğu halde, hacdan dönerken Kahire’de \
ölmüş ve İmam Şafi'nin kabri civarına defnedilmiştir.",
"question":"Ahi Mehmet Çelebi hangi eseri ile tanınır?"
}
get_answer(question2["question"],question2["context"])
```
> Böbrek ve idrar kesesindeki taş oluşumunun nedenlerini ve tedavisini incelediği eseriyle
Created by Özcan Gündeş ✌️
---
Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a>
Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a>
Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a>
Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
|
marefa-nlp/marefa-mt-en-ar
|
marefa-nlp
| 2021-09-22T08:59:51Z | 391 | 13 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"Arabic Abjad Characters",
"Arabic",
"en",
"ar",
"dataset:marefa-mt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- en
- ar
tags:
- translation
- Arabic Abjad Characters
- Arabic
license: apache-2.0
datasets:
- marefa-mt
---
# Marefa-Mt-En-Ar
# The Marefa Model for English-to-Arabic Machine Translation
## Model description
This is a model for translating English to Arabic. What makes this model special is that it takes into consideration the
use of additional Arabic characters like `پ` or `گ`.
## About the model
This model for machine translation from English to Arabic is the first machine translation model released under the auspices of
[the Marefa encyclopedia](https://www.marefa.org).
It stands out from other models by supporting the additional Arabic alphabet characters used to mark sounds specific to English, such as `پ` and `گ`.
You can visit
[this page](https://www.marefa.org/%D8%A7%D9%84%D9%85%D8%B9%D8%B1%D9%81%D8%A9:%D8%AF%D9%84%D9%8A%D9%84_%D8%A7%D9%84%D8%A3%D8%B3%D9%84%D9%88%D8%A8#.D8.AD.D8.B1.D9.88.D9.81_.D8.A5.D8.B6.D8.A7.D9.81.D9.8A.D8.A9_.D9.84.D9.84.D9.86.D8.B7.D9.82_.D8.A7.D9.84.D8.B3.D9.84.D9.8A.D9.85)
to learn more about how these additional Arabic letters are used.
### How to use
Install transformers and sentencepiece (python >= 3.6)
`$ pip3 install transformers==4.3.0 sentencepiece==0.1.95 nltk==3.5 protobuf==3.15.3 torch==1.7.1`
> If you are using `Google Colab`, please restart your runtime after installing the packages.
-----------
```python
from transformers import MarianTokenizer, MarianMTModel
mname = "marefa-nlp/marefa-mt-en-ar"
tokenizer = MarianTokenizer.from_pretrained(mname)
model = MarianMTModel.from_pretrained(mname)
# English sample text (renamed to avoid shadowing the built-in `input`)
text = "President Putin went to the presidential palace in the capital, Kiev"
translated_tokens = model.generate(**tokenizer.prepare_seq2seq_batch([text], return_tensors="pt"))
translated_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated_tokens]
# translated Arabic Text
print(translated_text)
# ذهب الرئيس پوتن إلى القصر الرئاسي في العاصمة كييڤ
```
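The install step above also pulls in `nltk`, which is convenient for splitting longer passages into sentences before translation. A minimal sketch reusing `tokenizer` and `model` from the block above (the paragraph text is illustrative):
```python
import nltk
nltk.download("punkt")  # sentence tokenizer models
from nltk import sent_tokenize

# illustrative English paragraph; each sentence is translated in a single batch
paragraph = "The president arrived in the capital. He met the prime minister the next day."
sentences = sent_tokenize(paragraph)
batch = tokenizer.prepare_seq2seq_batch(sentences, return_tensors="pt")
translated_tokens = model.generate(**batch)
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated_tokens])
```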
|
macedonizer/sr-roberta-base
|
macedonizer
| 2021-09-22T08:59:00Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"masked-lm",
"sr",
"dataset:wiki-sr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- sr
thumbnail: https://huggingface.co/macedonizer/sr-roberta-base/lets-talk-about-nlp-sr.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-sr
---
# SR-RoBERTa base model
Pretrained model on the Serbian language using a masked language modeling (MLM) objective. It is based on RoBERTa, which was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between београд and Београд.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Serbian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Serbian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by this model as inputs.
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/sr-roberta-base')
unmasker("Београд је <mask> град Србије.")

[{'score': 0.7834128141403198,
  'sequence': 'Београд је главни град Србије',
  'token': 3087,
  'token_str': ' главни'},
 {'score': 0.15424974262714386,
  'sequence': 'Београд је највећи град Србије',
  'token': 3916,
  'token_str': ' највећи'},
 {'score': 0.0035441946238279343,
  'sequence': 'Београд је најважнији град Србије',
  'token': 18577,
  'token_str': ' најважнији'},
 {'score': 0.003132033161818981,
  'sequence': 'Београд је велики град Србије',
  'token': 2063,
  'token_str': ' велики'},
 {'score': 0.0030417360831052065,
  'sequence': 'Београд је важан град Србије',
  'token': 9463,
  'token_str': ' важан'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/sr-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/sr-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
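The `output` above contains token-level hidden states; a common way to collapse them into a single sentence vector is mean pooling over the attention mask. A minimal sketch of that generic recipe (not part of the original card), reusing `encoded_input` and `output` from the block above:
```python
import torch

last_hidden = output.last_hidden_state                 # (batch, seq_len, hidden)
mask = encoded_input['attention_mask'].unsqueeze(-1)   # (batch, seq_len, 1)
sentence_embedding = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)                        # torch.Size([1, hidden_size])
```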
|
macedonizer/sr-gpt2
|
macedonizer
| 2021-09-22T08:58:57Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"sr",
"dataset:wiki-sr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- sr
thumbnail: https://huggingface.co/macedonizer/sr-gpt2/desanka-maksimovic.jpeg
license: apache-2.0
datasets:
- wiki-sr
---
# sr-gpt2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on the Serbian language using a causal language modeling (CLM) objective. It is based on GPT-2, which was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
sr-gpt2 is a transformers model pretrained on a very large corpus of Serbian data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (a word or piece of a word) to the right. The model internally uses a masking mechanism to ensure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
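This "shift by one" objective can be probed directly: pass the same token ids as both inputs and labels, and the library shifts the labels internally and returns the causal LM loss. A minimal sketch (the sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/sr-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/sr-gpt2')

enc = tokenizer("Београд је главни град Србије.", return_tensors="pt")
out = model(**enc, labels=enc["input_ids"])  # labels are shifted one step inside the model
print(float(out.loss))                       # average next-token prediction loss
```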
This way, the model learns an inner representation of the Serbian language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a
prompt.
### How to use
Here is how to use this model to generate text in PyTorch:
```python
import random

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/sr-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/sr-gpt2')

input_text = 'Ја сам био '

if len(input_text) == 0:
    # no prompt given: sample unconditionally from a random BOS token
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # condition the generation on the prompt
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
|
macedonizer/sl-roberta-base
|
macedonizer
| 2021-09-22T08:58:54Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"masked-lm",
"sl",
"dataset:wiki-sl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- sl
thumbnail: https://huggingface.co/macedonizer/sl-roberta-base/ivan-cankar.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-sl
---
# SL-RoBERTa base model
Pretrained model on the Slovenian language using a masked language modeling (MLM) objective. It is based on RoBERTa, which was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between ljubljana and Ljubljana.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Slovenian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Slovenian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by this model as inputs.
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/sl-roberta-base')
unmasker("Ljubljana je <mask> mesto Slovenije.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/sl-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/sl-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
|
macedonizer/mk-roberta-base
|
macedonizer
| 2021-09-22T08:58:49Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"masked-lm",
"mk",
"dataset:wiki-mk",
"dataset:time-mk-news-2010-2015",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- mk
thumbnail: https://huggingface.co/macedonizer/mk-roberta-base/blaze-koneski.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-mk
- time-mk-news-2010-2015
---
# MK-RoBERTa base model
Pretrained model on the Macedonian language using a masked language modeling (MLM) objective. It is based on RoBERTa, which was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between скопје and Скопје.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Macedonian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Macedonian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by this model as inputs.
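The 15% masking procedure can be reproduced with the standard language-modeling collator. A small sketch of how masked inputs and labels are derived (illustrative, not part of the original card):
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained('macedonizer/mk-roberta-base')
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

enc = tokenizer("Скопје е главен град на Македонија.")
batch = collator([enc])
print(batch["input_ids"])  # some tokens randomly replaced by <mask>
print(batch["labels"])     # -100 everywhere except the masked positions
```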
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/mk-roberta-base')
unmasker("Скопје е <mask> град на Македонија.")

[{'sequence': 'Скопје е главен град на Македонија.',
  'score': 0.5900368094444275,
  'token': 2782,
  'token_str': ' главен'},
 {'sequence': 'Скопје е главниот град на Македонија.',
  'score': 0.1789761781692505,
  'token': 3177,
  'token_str': ' главниот'},
 {'sequence': 'Скопје е административен град на Македонија.',
  'score': 0.01679774932563305,
  'token': 9563,
  'token_str': ' административен'},
 {'sequence': 'Скопје е мал град на Македонија.',
  'score': 0.016263898462057114,
  'token': 2473,
  'token_str': ' мал'},
 {'sequence': 'Скопје е најголемиот град на Македонија.',
  'score': 0.01312252413481474,
  'token': 4271,
  'token_str': ' најголемиот'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/mk-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/mk-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
|
macedonizer/mk-gpt2
|
macedonizer
| 2021-09-22T08:58:46Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"mk",
"dataset:wiki-mk",
"dataset:time-mk-news-2010-2015",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- mk
thumbnail: https://huggingface.co/macedonizer/mk-roberta-base/blaze-koneski.jpg
license: apache-2.0
datasets:
- wiki-mk
- time-mk-news-2010-2015
---
# mk-gpt2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on the Macedonian language using a causal language modeling (CLM) objective. It is based on GPT-2, which was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
mk-gpt2 is a transformers model pretrained on a very large corpus of Macedonian data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (a word or piece of a word) to the right. The model internally uses a masking mechanism to ensure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Macedonian language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a
prompt.
### How to use
Here is how to use this model to generate text in PyTorch:
```python
import random

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/mk-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/mk-gpt2')

input_text = 'Скопје е '

if len(input_text) == 0:
    # no prompt given: sample unconditionally from a random BOS token
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # condition the generation on the prompt
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
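The same sampling setup can be written more compactly with the `text-generation` pipeline; a short equivalent sketch (the parameters mirror the block above):
```python
from transformers import pipeline

generator = pipeline('text-generation', model='macedonizer/mk-gpt2')
print(generator('Скопје е ', max_length=1024, do_sample=True, top_k=50, top_p=0.95))
```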
|
macedonizer/hr-roberta-base
|
macedonizer
| 2021-09-22T08:58:43Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"masked-lm",
"hr",
"dataset:wiki-hr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- hr
thumbnail: https://huggingface.co/macedonizer/hr-roberta-base/ivo-andric.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-hr
---
# HR-RoBERTa base model
Pretrained model on the Croatian language using a masked language modeling (MLM) objective. It is based on RoBERTa, which was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between zagreb and Zagreb.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Croatian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Croatian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by this model as inputs.
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/hr-roberta-base')
unmasker("Zagreb je <mask> grad Hrvatske.")

[{'sequence': 'Zagreb je glavni grad Hrvatske.', 'score': 0.8750431537628174, 'token': 2026, 'token_str': ' glavni'},
 {'sequence': 'Zagreb je najveći grad Hrvatske.', 'score': 0.060711536556482315, 'token': 2474, 'token_str': ' najveći'},
 {'sequence': 'Zagreb je prvi grad Hrvatske.', 'score': 0.005241130944341421, 'token': 780, 'token_str': ' prvi'},
 {'sequence': 'Zagreb je jedini grad Hrvatske.', 'score': 0.004663003608584404, 'token': 3280, 'token_str': ' jedini'},
 {'sequence': 'Zagreb je treći grad Hrvatske.', 'score': 0.003771631745621562, 'token': 3236, 'token_str': ' treći'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/hr-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/hr-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
|
macedonizer/hr-gpt2
|
macedonizer
| 2021-09-22T08:58:40Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"hr",
"dataset:wiki-hr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- hr
thumbnail: https://huggingface.co/macedonizer/hr-gpt2/lets-talk-about-nlp-hr.jpg
license: apache-2.0
datasets:
- wiki-hr
---
# hr-gpt2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on the Croatian language using a causal language modeling (CLM) objective. It is based on GPT-2, which was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
hr-gpt2 is a transformers model pretrained on a very large corpus of Croatian data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (a word or piece of a word) to the right. The model internally uses a masking mechanism to ensure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Croatian language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a
prompt.
### How to use
Here is how to use this model to generate text in PyTorch:
```python
import random

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/hr-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/hr-gpt2')

input_text = 'Ja sam bio '

if len(input_text) == 0:
    # no prompt given: sample unconditionally from a random BOS token
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # condition the generation on the prompt
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
|
macedonizer/gr-roberta-base
|
macedonizer
| 2021-09-22T08:58:38Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"masked-lm",
"gr",
"dataset:wiki-gr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- gr
thumbnail: https://huggingface.co/macedonizer/gr-roberta-base/lets-talk-about-nlp-gr.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-gr
---
# GR-RoBERTa base model
Pretrained model on the Greek language using a masked language modeling (MLM) objective. It is based on RoBERTa, which was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between Athens and athens.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Greek data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Greek language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by this model as inputs.
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/gr-roberta-base')
unmasker("Η Αθήνα είναι η <mask> της Ελλάδας")

[{'score': 0.8832866549491882,
  'sequence': 'Η Αθήνα είναι η πρωτεύουσα της Ελλάδας',
  'token': 2788,
  'token_str': ' πρωτεύουσα'},
 {'score': 0.018105432391166687,
  'sequence': 'Η Αθήνα είναι η μεγαλύτερη της Ελλάδας',
  'token': 2363,
  'token_str': ' μεγαλύτερη'},
 {'score': 0.015836946666240692,
  'sequence': 'Η Αθήνα είναι η έδρα της Ελλάδας',
  'token': 1950,
  'token_str': ' έδρα'},
 {'score': 0.015673324465751648,
  'sequence': 'Η Αθήνα είναι η μόνη της Ελλάδας',
  'token': 6548,
  'token_str': ' μόνη'},
 {'score': 0.01375910360366106,
  'sequence': 'Η Αθήνα είναι η πόλη της Ελλάδας',
  'token': 825,
  'token_str': ' πόλη'}]
```
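If only the best completion is needed, the pipeline's result list is already sorted by score. A small follow-up sketch reusing `unmasker` from above:
```python
predictions = unmasker("Η Αθήνα είναι η <mask> της Ελλάδας")
best = predictions[0]  # highest-scoring candidate first
print(best["sequence"], best["score"])
```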
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/gr-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/gr-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
|
macedonizer/blaze-koneski
|
macedonizer
| 2021-09-22T08:58:34Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"mk",
"dataset:wiki-mk",
"dataset:blaze-koneski-poetry",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- mk
thumbnail: https://huggingface.co/macedonizer/blaze-koneski/blaze-koneski.jpg
license: apache-2.0
datasets:
- wiki-mk
- blaze-koneski-poetry
---
# blaze-koneski
A GPT-2 type model: we fine-tuned macedonizer/mk-gpt2 on Blaze Koneski's poetry.
## About Blaze Koneski
Blaze Koneski was born in a village near Prilep in 1921. He studied philology at Skopje University and worked there as a professor. He was the first chairman of the Macedonian Academy of Sciences and Arts, a corresponding member of the Yugoslav Academy of Sciences and Arts as well as of the Serbian and Slovene Academies, and an honorary doctor of the Universities of Chicago and Krakow.
He wrote poetry, short stories, and essays, as well as scholarly works, many of them on the Macedonian language. He edited the Dictionary of the Macedonian Language and translated Heine and Shakespeare. His works have been translated into Serbian, Croatian, Slovene, Albanian, Turkish, Hungarian, French, Russian, Italian, Greek, Polish, Romanian, German, and English.
He won numerous prizes, including the Golden Wreath of the Struga Poetry Evenings.
### How to use
Here is how to use this model to generate text in PyTorch:
```python
import random

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/blaze-koneski')
model = AutoModelWithLMHead.from_pretrained('macedonizer/blaze-koneski')

input_text = 'Москва '

if len(input_text) == 0:
    # no prompt given: sample unconditionally from a random BOS token
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # condition the generation on the prompt
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
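For poetry, the sampling temperature is a useful extra knob that the snippet above does not set. A small sketch reusing `tokenizer` and `model` (the value is illustrative):
```python
# higher temperature -> more adventurous verse; lower -> more conservative
encoded_input = tokenizer('Москва ', return_tensors="pt")
output = model.generate(
    **encoded_input,
    do_sample=True,
    temperature=1.1,
    top_p=0.95,
    max_length=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```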
|
macedonizer/ba-roberta-base
|
macedonizer
| 2021-09-22T08:58:31Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"masked-lm",
"ba",
"dataset:wiki-bs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- ba
thumbnail: https://huggingface.co/macedonizer/ba-roberta-base/abdulah-sidran.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-bs
---
# BA-RoBERTa base model
Pretrained model on the Bosnian language using a masked language modeling (MLM) objective. It is based on RoBERTa, which was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between sarajevo and Sarajevo.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Bosnian texts in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Bosnian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by this model as inputs.
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/ba-roberta-base')
unmasker("Sarajevo je <mask> grad Bosne i Hercegovine.")

[{'score': 0.6210788488388062,
  'sequence': 'Sarajevo je glavni grad Bosne i Hercegovine',
  'token': 2006,
  'token_str': ' glavni'},
 {'score': 0.19640550017356873,
  'sequence': 'Sarajevo je najveći grad Bosne i Hercegovine',
  'token': 1707,
  'token_str': ' najveći'},
 {'score': 0.0210184995085001,
  'sequence': 'Sarajevo je srednjovjekovni grad Bosne i Hercegovine',
  'token': 22596,
  'token_str': ' srednjovjekovni'},
 {'score': 0.010822420939803123,
  'sequence': 'Sarajevo je najmnogoljudniji grad Bosne i Hercegovine',
  'token': 40186,
  'token_str': ' najmnogoljudniji'},
 {'score': 0.006114463787525892,
  'sequence': 'Sarajevo je službeni grad Bosne i Hercegovine',
  'token': 8546,
  'token_str': ' službeni'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/ba-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/ba-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
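Feature extraction also works on batches; padding aligns the sentences so they share one forward pass. A minimal sketch reusing `tokenizer` and `model` from above (the sentences are illustrative):
```python
sentences = [
    "Sarajevo je glavni grad Bosne i Hercegovine.",
    "Mostar je grad na rijeci Neretvi.",
]
encoded = tokenizer(sentences, padding=True, return_tensors='pt')
features = model(**encoded).last_hidden_state
print(features.shape)  # (2, max_seq_len, hidden_size)
```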
|
liaad/ud_srl-pt_xlmr-large
|
liaad
| 2021-09-22T08:56:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"xlm-roberta-large",
"semantic role labeling",
"finetuned",
"dependency parsing",
"multilingual",
"pt",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- pt
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
- dependency parsing
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
- Universal Dependencies
metrics:
- F1 Measure
---
# XLM-R large fine-tuned on Portuguese Universal Dependencies and semantic role labeling
## Model description
This model is the [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned first on the Universal Dependencies Portuguese dataset and then fine-tuned on the PropBank.Br data. This is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/ud_srl-pt_xlmr-large")
model = AutoModel.from_pretrained("liaad/ud_srl-pt_xlmr-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
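As a quick check that the transformers portion loads correctly, you can extract contextual embeddings for a Portuguese sentence. A minimal sketch reusing `tokenizer` and `model` from above (the sentence is illustrative):
```python
import torch

inputs = tokenizer("O menino comeu a maçã.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, num_tokens, 1024) for the large model
```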
#### Limitations and bias
- This model does not include a TensorFlow version. This is because the `type_vocab_size` in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to TensorFlow.
- The model was trained for only 10 epochs on the Universal Dependencies dataset.
## Training procedure
The model was trained on the Universal Dependencies Portuguese dataset; then on the CoNLL-formatted OntoNotes v5.0; then on Portuguese semantic role labeling data (PropBank.Br) using 10-fold cross-validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (see BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
liaad/srl-pt_xlmr-large
|
liaad
| 2021-09-22T08:56:37Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"xlm-roberta-large",
"semantic role labeling",
"finetuned",
"multilingual",
"pt",
"dataset:PropBank.Br",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- pt
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# XLM-R large fine-tuned on Portuguese semantic role labeling
## Model description
This model is the [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned on Portuguese semantic role labeling data. This is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_xlmr-large")
model = AutoModel.from_pretrained("liaad/srl-pt_xlmr-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a TensorFlow version. This is because the `type_vocab_size` in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to TensorFlow.
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
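The 10-fold protocol itself is standard and can be sketched with scikit-learn's `KFold` (`load_propbank_br` is a hypothetical loader — the real splits and training loop live in the project's github):
```python
from sklearn.model_selection import KFold

documents = load_propbank_br()  # hypothetical loader for the annotated corpus
kf = KFold(n_splits=10, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kf.split(documents)):
    train = [documents[i] for i in train_idx]
    test = [documents[i] for i in test_idx]
    # fine-tune a fresh copy of the model on `train`,
    # then evaluate span F1 on `test`; the reported CV score averages the folds
```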
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
liaad/srl-pt_xlmr-base
|
liaad
| 2021-09-22T08:56:34Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"xlm-roberta-base",
"semantic role labeling",
"finetuned",
"multilingual",
"pt",
"dataset:PropBank.Br",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- pt
tags:
- xlm-roberta-base
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# XLM-R base fine-tuned on Portuguese semantic role labeling
## Model description
This model is the [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned on Portuguese semantic role labeling data. This is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-pt_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a TensorFlow version. This is because the `type_vocab_size` in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to TensorFlow.
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
liaad/srl-pt_mbert-base
|
liaad
| 2021-09-22T08:56:31Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"bert-base-multilingual-cased",
"semantic role labeling",
"finetuned",
"multilingual",
"pt",
"dataset:PropBank.Br",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- pt
tags:
- bert-base-multilingual-cased
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# mBERT fine-tuned on Portuguese semantic role labeling
## Model description
This model is the [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on Portuguese semantic role labeling data. This is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_mbert-base")
model = AutoModel.from_pretrained("liaad/srl-pt_mbert-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
liaad/srl-pt_bertimbau-base
|
liaad
| 2021-09-22T08:56:26Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"bert-base-portuguese-cased",
"semantic role labeling",
"finetuned",
"multilingual",
"pt",
"dataset:PropBank.Br",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- pt
tags:
- bert-base-portuguese-cased
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# BERTimbau base fine-tuned on Portuguese semantic role labeling
## Model description
This model is the [`neuralmind/bert-base-portuguese-cased`](https://huggingface.co/neuralmind/bert-base-portuguese-cased) fine-tuned on Portuguese semantic role labeling data. This is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_bertimbau-base")
model = AutoModel.from_pretrained("liaad/srl-pt_bertimbau-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
liaad/srl-enpt_xlmr-base
|
liaad
| 2021-09-22T08:56:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"xlm-roberta-base",
"semantic role labeling",
"finetuned",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-base
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# XLM-R base fine-tune in English and Portuguese semantic role labeling
## Model description
This model is [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned first on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data and then on the PropBank.Br data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-enpt_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-enpt_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a TensorFlow version, because the `type_vocab_size` was changed (from 1 to 2) and, therefore, the model cannot be easily converted to TensorFlow.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was first fine-tuned on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data; then it was fine-tuned on the PropBank.Br dataset using 10-fold cross-validation. The resulting models were tested on the folds as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
liaad/srl-en_xlmr-base
|
liaad
| 2021-09-22T08:56:11Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"xlm-roberta-base",
"semantic role labeling",
"finetuned",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-base
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# XLM-R base fine-tuned on English semantic role labeling
## Model description
This model is [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-en_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-en_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a TensorFlow version, because the `type_vocab_size` was changed (from 1 to 2) and, therefore, the model cannot be easily converted to TensorFlow.
- The models were trained only for 5 epochs.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The models were trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. They were tested on the PropBank.Br dataset as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
liaad/srl-en_mbert-base
|
liaad
| 2021-09-22T08:56:08Z | 525 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"bert-base-multilingual-cased",
"semantic role labeling",
"finetuned",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- pt
- en
tags:
- bert-base-multilingual-cased
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# mBERT fine-tuned on English semantic role labeling
## Model description
This model is [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-en_mbert-base")
model = AutoModel.from_pretrained("liaad/srl-en_mbert-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- The models were trained only for 5 epochs.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. It was tested on the PropBank.Br dataset as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kanishka/GlossBERT
|
kanishka
| 2021-09-22T08:54:41Z | 134 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"glossbert",
"en",
"dataset:SemCor3.0",
"arxiv:1908.07245",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags:
- glossbert
license: mit
datasets:
- SemCor3.0
---
## GlossBERT
A BERT-based model fine-tuned on SemCor 3.0 to perform word sense disambiguation by leveraging gloss information. This model is the research output of the paper '[GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge](https://arxiv.org/pdf/1908.07245.pdf)'.
Disclaimer: This model was built and trained by a group of researchers different from the repository's author. The original model code can be found on GitHub: https://github.com/HSLCY/GlossBERT
## Usage
The following code loads GlossBERT:
```py
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('kanishka/GlossBERT')
model = BertForSequenceClassification.from_pretrained('kanishka/GlossBERT')
```
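Building on the snippet above, here is a minimal sketch of scoring a single context-gloss pair. GlossBERT classifies whether a gloss matches the target word's sense in its context; the gloss string format and the meaning of the two output labels below are assumptions for illustration, so check the original repository's preprocessing before relying on them.
```py
import torch

# Continues the snippet above (`tokenizer` and `model` already loaded).
context = 'He caught a "bass" while fishing.'
gloss = "bass: the lowest adult male singing voice"  # hypothetical gloss string

inputs = tokenizer(context, gloss, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label index 1 is assumed to mean "the gloss matches the sense in context".
print(torch.softmax(logits, dim=-1))
```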
## Citation
If you use this model in any of your projects, please cite the original authors using the following bibtex:
```
@inproceedings{huang-etal-2019-glossbert,
title = "{G}loss{BERT}: {BERT} for Word Sense Disambiguation with Gloss Knowledge",
author = "Huang, Luyao and
Sun, Chi and
Qiu, Xipeng and
Huang, Xuanjing",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1355",
doi = "10.18653/v1/D19-1355",
pages = "3507--3512"
}
```
|
junnyu/electra_small_discriminator
|
junnyu
| 2021-09-22T08:54:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"en",
"dataset:openwebtext",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/junnyu
tags:
- pytorch
- electra
license: mit
datasets:
- openwebtext
---
# 1. An ELECTRA-small model trained on the OpenWebText dataset
# 2. Reproduced results (dev datasets)
|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|Metrics|MCC|Acc|Acc|Spearman|Acc|Acc|Acc|Acc||
|ELECTRA-Small-OWT(original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36|
|**ELECTRA-Small-OWT (this)**| 55.82 |89.67|87.0|86.96|89.28|80.08|87.50|66.07|80.30|
# 3. Training details
- Dataset: OpenWebText
- Training batch size: 256
- Learning rate: 5e-4
- Max sequence length: 128
- Total training steps: 625k
- GPU: RTX 3090
- Total training time: about 2.5 days
# 4. Usage
```python
import torch
from transformers.models.electra import ElectraModel, ElectraTokenizer
tokenizer = ElectraTokenizer.from_pretrained("junnyu/electra_small_discriminator")
model = ElectraModel.from_pretrained("junnyu/electra_small_discriminator")
inputs = tokenizer("Beijing is the capital of China.", return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
print(outputs[0].shape)
```
|
jimregan/wav2vec2-large-xlsr-irish-basic
|
jimregan
| 2021-09-22T08:52:55Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ga",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ga
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Irish by Jim O'Regan
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ga-IE
type: common_voice
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 47.4
---
# Wav2Vec2-Large-XLSR-Irish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Irish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Irish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ga-IE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model.to("cuda")
# So, tolower() for Irish is a bit complicated: tAthair -> t-athair
# toupper() is non-deterministic :)
def is_upper_vowel(letter):
if letter in ['A', 'E', 'I', 'O', 'U', 'Á', 'É', 'Í', 'Ó', 'Ú']:
return True
else:
return False
def irish_lower(word):
if len(word) > 1 and word[0] in ['n', 't'] and is_upper_vowel(word[1]):
return word[0] + '-' + word[1:].lower()
else:
return word.lower()
def irish_lower_sentence(sentence):
return " ".join([irish_lower(w) for w in sentence.split(" ")])
chars_to_ignore_regex = '[,\?\.\!\;\:\"\“\%\‘\”\(\)\*]'
def remove_special_characters(sentence):
tmp = re.sub('’ ', ' ', sentence)
tmp = re.sub("’$", '', tmp)
tmp = re.sub('’', '\'', tmp)
tmp = re.sub(chars_to_ignore_regex, '', tmp)
sentence = irish_lower_sentence(tmp) + ' '
return sentence
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = remove_special_characters(batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 43.7 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/irish/fine-tune-xlsr-wav2vec2-on-irish-asr-with-transformers.ipynb)
|
jannesg/takalane_zul_roberta
|
jannesg
| 2021-09-22T08:52:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"zul",
"masked-lm",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zul
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- zul
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Zulu 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to explore techniques that help low-resource languages reach performance comparable to larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_zul_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_zul_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 410000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_xho_roberta
|
jannesg
| 2021-09-22T08:52:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"xho",
"masked-lm",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- xho
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- xho
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Xhosa 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to explore techniques that help low-resource languages reach performance comparable to larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_xho_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_xho_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 100000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_ven_roberta
|
jannesg
| 2021-09-22T08:52:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"ven",
"masked-lm",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- ven
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- ven
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Venda 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to explore techniques that help low-resource languages reach performance comparable to larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_ven_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_ven_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 9279
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_ssw_roberta
|
jannesg
| 2021-09-22T08:52:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"tn",
"masked-lm",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- tn
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- tn
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Tswana 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to explore techniques that help low-resource languages reach performance comparable to larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_ssw_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_ssw_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 380
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_sot_roberta
|
jannesg
| 2021-09-22T08:52:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"sot",
"masked-lm",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- sot
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- sot
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Southern Sotho 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to explore techniques that help low-resource languages reach performance comparable to larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_sot_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_sot_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 20000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_nso_roberta
|
jannesg
| 2021-09-22T08:52:04Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"nso",
"masked-lm",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- nso
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- nso
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Northern Sotho 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to explore techniques that help low-resource languages reach performance comparable to larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_nso_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_nso_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 4746
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
gorkemgoknar/gpt2-small-turkish
|
gorkemgoknar
| 2021-09-22T08:29:21Z | 241 | 10 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"turkish",
"tr",
"dataset:wikipedia-turkish",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- tr
thumbnail:
tags:
- gpt2
- turkish
license: apache-2.0
datasets:
- wikipedia-turkish
metrics:
- perplexity
- accuracy
widget:
- text: Bu yazıyı bir bilgisayar yazdı. Yazarken
context: ''
- text: İnternete kolay erişim sayesinde dünya daha da küçüldü. Bunun sonucunda
context: ''
---
# Turkish GPT2 Model Finetuned
# Türkçe GPT2 Modeli
## Model description
This is an English GPT2-small model, fine-tuned and additionally trained on Turkish Wikipedia articles as of 28-10-2020.
A live demo based on this work: https://www.metayazar.com/
A writer model fine-tuned on top of this model: https://huggingface.co/gorkemgoknar/gpt2-turkish-writer
This work is based on Pierre Guillou's tutorial (https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb).
The code was converted to work with fastai 2.x, and Google Colab was used for training.
An additional tutorial and the source code will be published at https://github.com/gorkemgoknar at a later stage.
Current accuracy: 33%, perplexity: 51.88
Models are available:
* [gpt2-small-tuned-tr](https://huggingface.co/gorkemgoknar/gpt2-small-turkish)
* [gpt2-small-turkish-writer](https://huggingface.co/gorkemgoknar/gpt2-turkish-writer)
## Intended uses & limitations
#### How to use
#### Setup
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
tokenizer = AutoTokenizer.from_pretrained("gorkemgoknar/gpt2-small-turkish")
model = AutoModelWithLMHead.from_pretrained("gorkemgoknar/gpt2-small-turkish")
# Get sequence length max of 1024
tokenizer.model_max_length=1024
model.eval() # disable dropout (or leave in train mode to finetune)
```
#### Generate 1 word
```python
# input sequence
text = "Bu yazıyı bilgisayar yazdı."
inputs = tokenizer(text, return_tensors="pt")
# model output
outputs = model(**inputs, labels=inputs["input_ids"])
loss, logits = outputs[:2]
predicted_index = torch.argmax(logits[0, -1, :]).item()
predicted_text = tokenizer.decode([predicted_index])
# results
print('input text:', text)
print('predicted text:', predicted_text)
# input text:
# predicted text:
```
#### Generate Full Sequence
```python
# input sequence
text = "Bu yazıyı bilgisayar yazdı."
inputs = tokenizer(text, return_tensors="pt")
# model output using Top-k sampling text generation method
sample_outputs = model.generate(inputs.input_ids,
pad_token_id=50256,
do_sample=True,
max_length=50, # put the token number you want
top_k=40,
num_return_sequences=1)
# generated sequence
for i, sample_output in enumerate(sample_outputs):
print(">> Generated text {}\\\\
\\\\
{}".format(i+1, tokenizer.decode(sample_output.tolist())))
# >> Generated text
#
```
#### Limitations and bias
The training data used for this model come from Turkish Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral.
## Training data
Wikipedia Turkish article dump as of 28-10-2020
## Training procedure
## Eval results
| epoch | train_loss | valid_loss | accuracy | perplexity | time |
| ----- | -------- | --------- | ---------- | --------- | ----- |
| 0 | 4.777015 | 4.621834 | 0.292547 | 101.680367 | 2:42:05 |
| 1 | 4.509412 | 4.403999 | 0.305574 | 81.777267 | 1:09:38 |
| 2 | 4.169529 | 4.120755 | 0.324908 | 61.605747 | 1:07:45 |
| 3 | 4.293973 | 4.177899 | 0.317211 | 65.228653 | 1:07:02 |
| 4 | 4.049848 | 3.949103 | 0.338347 | 51.888783 | 1:05:53 |

Epoch 0 was trained on a Tesla T4, the remaining epochs on a V100.
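The perplexity column is simply the exponential of the validation loss; a quick check for the final epoch:
```python
import math

print(math.exp(3.949103))  # ≈ 51.89, matching the reported perplexity
```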
|
gagan3012/k2t-base
|
gagan3012
| 2021-09-22T08:27:23Z | 87 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t-base",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: Keywords to Sentences
tags:
- keytotext
- k2t-base
- Keywords to Sentences
license: mit
datasets:
- WebNLG
- Dart
metrics:
- NLG
---
# keytotext

The idea is to build a model that takes keywords as input and generates sentences as output.
### Keytotext is powered by Huggingface 🤗
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```
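If you prefer calling the model through the `transformers` API directly, a minimal sketch is below; the way the keywords are joined into one input string is an assumption here, since the `keytotext` library normally handles the exact prompt format for you.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gagan3012/k2t-base")
model = AutoModelForSeq2SeqLM.from_pretrained("gagan3012/k2t-base")

# Joining keywords with spaces is an assumed input format.
keywords = ["India", "wedding", "dance"]
inputs = tokenizer(" ".join(keywords), return_tensors="pt")

outputs = model.generate(**inputs, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```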

## UI:
UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)

|
flax-community/medclip
|
flax-community
| 2021-09-22T08:25:55Z | 4 | 2 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"hybrid-clip",
"vision",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- vision
license: apache-2.0
---
# MedCLIP
## Model description
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
```
|
flax-community/gpt-neo-1.3B-apps-all-2
|
flax-community
| 2021-09-22T08:25:21Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"code_synthesis",
"dataset:apps",
"arxiv:2107.03374",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
- python
license: mit
tags:
- gpt_neo
- code_synthesis
datasets:
- apps
---
# GPT-Code-Clippy-1.3B-APPS-all
## Model Description
GPT-Neo-1.3B-APPS-all is a GPT-Neo-1.3B fine-tuned on APPS dataset. This model is specialized to solve programming tasks.
## Training data
The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each.
This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-1.3B-apps).
## Training procedure
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py).
Training is done for 5 epochs using the AdamW optimizer and a linear-decay learning rate schedule with 800 warmup steps. To reproduce the training, one can use this command with the above script:
```
python run_clm_apps.py \
--output_dir ./gpt-neo-1.3B-apps \
--model_name_or_path EleutherAI/gpt-neo-1.3B \
--dataset_name ./apps.py \
--dataset_config_name formatted \
--do_train --do_eval \
--block_size="1024" \
--per_device_train_batch_size="3" \
--per_device_eval_batch_size="3" \
--preprocessing_num_workers="16" \
--learning_rate="8e-5" \
--warmup_steps="800" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--weight_decay="0.1" \
--overwrite_output_dir \
--num_train_epochs="5" \
--logging_steps="50" \
--eval_steps="2000" \
--report_to="wandb" \
--dtype="bfloat16" \
--save_strategy epoch \
--gradient_accumulation_steps 1 \
--all_data true \
```
## Intended Use and Limitations
The model is fine-tuned to solve programming problems given a text description and optional starter code.
### How to use
You can use this model directly to generate code completions. The example below generates a different sequence each time it's run:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"  # `device` was previously undefined

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-1.3B-apps-all-2").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-1.3B-apps-all-2")

prompt = """
A function to greet user. Given a user name it should say hello

def greet(name):

ANSWER:
"""

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
start = input_ids.size(1)
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
                     early_stopping=True, eos_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(out[0][start:]))
```
### Limitations and Biases
The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discuss are highlighted here as it pertains to this dataset and models that may be trained from it. **As well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily so. Failing to properly evaluate the generated code may have negative consequences, such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset.
This model is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
## Eval results
Coming soon...
|
digitalepidemiologylab/covid-twitter-bert-v2-mnli
|
digitalepidemiologylab
| 2021-09-22T08:20:04Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"Twitter",
"COVID-19",
"tensorflow",
"zero-shot-classification",
"en",
"dataset:mnli",
"arxiv:1909.00161",
"arxiv:2005.07503",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
thumbnail: https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png
tags:
- Twitter
- COVID-19
- text-classification
- pytorch
- tensorflow
- bert
license: mit
datasets:
- mnli
pipeline_tag: zero-shot-classification
widget:
- text: To stop the pandemic it is important that everyone turns up for their shots.
candidate_labels: health, sport, vaccine, guns
---
# COVID-Twitter-BERT v2 MNLI
## Model description
This model provides a zero-shot classifier to be used in cases where it is not possible to finetune CT-BERT on a specific task, due to lack of labelled data.
The technique is based on [Yin et al.](https://arxiv.org/abs/1909.00161).
The article describes a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers.
The model is already fine-tuned on roughly 400,000 generic logical-inference examples (MNLI).
We can then use it as a zero-shot classifier by reformulating the classification task as a question.
Let's say we want to classify COVID-tweets as vaccine-related and not vaccine-related.
The typical way would be to collect a few hundred pre-annotated tweets and organise them into two classes.
Then you would finetune the model on this.
With the zero-shot MNLI classifier, you can instead reformulate your question as "This text is about vaccines", and use this directly at inference time - without any training.
Find more info about the model on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).
## Usage
Please note that how you formulate the question can give slightly different results.
Collecting a training set and fine-tuning on it will most likely give you better accuracy.
The easiest way to try this out is by using the Hugging Face pipeline.
This uses the default English template, which puts the text "This example is " in front of the text.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="digitalepidemiologylab/covid-twitter-bert-v2-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = 'To stop the pandemic it is important that everyone turns up for their shots.'
candidate_labels = ['health', 'sport', 'vaccine','guns']
hypothesis_template = 'This example is {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
```
## Training procedure
The model is fine-tuned on the roughly 400k-example [MNLI task](https://cims.nyu.edu/~sbowman/multinli/).
## References
```bibtex
@article{muller2020covid,
title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter},
author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E},
journal={arXiv preprint arXiv:2005.07503},
year={2020}
}
```
or
```
Martin Müller, Marcel Salathé, and Per E. Kummervold.
COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter.
arXiv preprint arXiv:2005.07503 (2020).
```
|
Coolhand/Abuela
|
Coolhand
| 2021-09-22T08:19:41Z | 0 | 1 | null |
[
"image_restoration",
"superresolution",
"en",
"arxiv:2009.07047",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
language:
- en
thumbnail: https://github.com/Nick-Harvey/for_my_abuela/blob/master/cuban_large.jpg
tags:
- image_restoration
- superresolution
license: mit
metrics:
---
```bibtex
@inproceedings{wan2020bringing,
  title={Bringing Old Photos Back to Life},
  author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2747--2757},
  year={2020}
}

@article{wan2020old,
  title={Old Photo Restoration via Deep Latent Space Translation},
  author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
  journal={arXiv preprint arXiv:2009.07047},
  year={2020}
}
```
|
cristian-popa/bart-tl-ng
|
cristian-popa
| 2021-09-22T08:18:06Z | 21 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"topic labeling",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- topic labeling
license: apache-2.0
metrics:
- ndcg
---
# BART-TL-ng
## Model description
This is the `BART-TL-ng` model from the paper [BART-TL: Weakly-Supervised Topic Label Generation](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf). We aim to solve the topic labeling task using generative methods, rather than by selecting from a pool of labels as was done in previous state-of-the-art works.
For more details not covered here, you can read the paper or look at the open-source implementation: https://github.com/CristianViorelPopa/BART-TL-topic-label-generation.
There are two models made available from the paper:
* [BART-TL-all](https://huggingface.co/cristian-popa/bart-tl-all)
* [BART-TL-ng](https://huggingface.co/cristian-popa/bart-tl-ng)
## Intended uses & limitations
#### How to use
The model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
mname = "cristian-popa/bart-tl-ng"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelForSeq2SeqLM.from_pretrained(mname)
input = "site web google search website online internet social content user"
enc = tokenizer(input, return_tensors="pt", truncation=True, padding="max_length", max_length=128)
outputs = model.generate(
input_ids=enc.input_ids,
attention_mask=enc.attention_mask,
max_length=15,
min_length=1,
do_sample=False,
num_beams=25,
length_penalty=1.0,
repetition_penalty=1.5
)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # windows live messenger
```
#### Limitations and bias
The model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy.
## Training data
The model was fine-tuned on 5 different StackExchange corpora (see https://archive.org/download/stackexchange for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here.
## Training procedure
The large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the [NETL](https://www.aclweb.org/anthology/C16-1091.pdf) method, along with n-grams from the topics. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the [paper](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf) or by following [this notebook](https://github.com/CristianViorelPopa/BART-TL-topic-label-generation/blob/main/notebooks/end_to_end_workflow.ipynb).
## Eval results
model | Top-1 Avg. | Top-3 Avg. | Top-5 Avg. | nDCG-1 | nDCG-3 | nDCG-5
------------|------------|------------|------------|--------|--------|-------
NETL (U) | 2.66 | 2.59 | 2.50 | 0.83 | 0.85 | 0.87
NETL (S) | 2.74 | 2.57 | 2.49 | 0.88 | 0.85 | 0.88
BART-TL-all | 2.64 | 2.52 | 2.43 | 0.83 | 0.84 | 0.87
BART-TL-ng | 2.62 | 2.50 | 2.33 | 0.82 | 0.84 | 0.85
### BibTeX entry and citation info
```bibtex
@inproceedings{popa-rebedea-2021-bart,
title = "{BART}-{TL}: Weakly-Supervised Topic Label Generation",
author = "Popa, Cristian and
Rebedea, Traian",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-main.121",
pages = "1418--1425",
abstract = "We propose a novel solution for assigning labels to topic models by using multiple weak labelers. The method leverages generative transformers to learn accurate representations of the most important topic terms and candidate labels. This is achieved by fine-tuning pre-trained BART models on a large number of potential labels generated by state of the art non-neural models for topic labeling, enriched with different techniques. The proposed BART-TL model is able to generate valuable and novel labels in a weakly-supervised manner and can be improved by adding other weak labelers or distant supervision on similar tasks.",
}
```
|
bagdaebhishek/IndianPoliticalTweetsLM
|
bagdaebhishek
| 2021-09-22T07:49:02Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"India",
"politics",
"tweets",
"BJP",
"Congress",
"AAP",
"lm-head",
"en",
"dataset:Twitter",
"dataset:IndianPolitics",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg
tags:
- India
- politics
- tweets
- BJP
- Congress
- AAP
- pytorch
- gpt2
- lm-head
- text-generation
license: apache-2.0
datasets:
- Twitter
- IndianPolitics
---
# Model name
Indian Political Tweets LM
## Model description
Note: This model is based on GPT-2. If you want a bigger model based on GPT-2 medium and fine-tuned on the same data, please take a look at the [IndianPoliticalTweetsLMMedium](https://huggingface.co/bagdaebhishek/IndianPoliticalTweetsLMMedium) model.
This is a GPT-2 language model with an LM head, fine-tuned on tweets crawled from handles that belong predominantly to Indian politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
```python
from transformers import AutoTokenizer,AutoModelWithLMHead,pipeline
tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
text_generator = pipeline("text-generation",model=model, tokenizer=tokenizer)
init_sentence = "India will always be"
print(text_generator(init_sentence))
```
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained GPT-2 model from the Hugging Face transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is described in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating eigenvector centrality on the Twitter graph and pruning handles whose centrality falls below a certain threshold (see the sketch below). This threshold was set manually after experimenting with different values.
I then separated tweets by these handles based on their language. I trained the LM with English tweets from both handles.
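As an illustration of the centrality-based pruning described above, here is a minimal sketch using `networkx`; the toy undirected graph, the handle names, and the threshold value are all hypothetical (the actual graph was directed and the threshold was tuned by hand):
```python
import networkx as nx

# Hypothetical toy graph of political handles (undirected for simplicity).
G = nx.Graph()
G.add_edges_from([("handle_a", "handle_b"), ("handle_b", "handle_c"),
                  ("handle_c", "handle_a"), ("handle_a", "handle_d")])

centrality = nx.eigenvector_centrality(G, max_iter=1000)

THRESHOLD = 0.3  # hypothetical cut-off, set manually in practice
influential = {h for h, c in centrality.items() if c >= THRESHOLD}
print(influential)  # handles kept after pruning
```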
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
csukuangfj/icefall_asr_yesno_tdnn
|
csukuangfj
| 2021-09-22T02:33:22Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
## Pre-trained TDNN models for the yesno dataset with icefall.
Refer to <https://github.com/k2-fsa/icefall/tree/master/egs/yesno/ASR>
for more information about this pre-trained model.
You can find usage instructions there.
## Sound files for testing the pre-trained model
The folder `test_waves` contains test sound files. They
are downloaded from <https://www.openslr.org/1/>.
There are 60 files in the dataset; 30 are used for training.
The remaining 30 files, contained in `test_waves`, are kept for testing.
The code for splitting the dataset can be found at
<https://github.com/lhotse-speech/lhotse/blob/master/lhotse/recipes/yesno.py#L138>
```python
wave_files = list(corpus_dir.glob("*.wav"))
assert len(wave_files) == 60
wave_files.sort()
train_set = wave_files[::2]
test_set = wave_files[1::2]
assert len(train_set) == 30
assert len(test_set) == 30
```
|
SIKU-BERT/sikuroberta
|
SIKU-BERT
| 2021-09-22T00:22:36Z | 224 | 14 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"roberta",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- "zh"
thumbnail: "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "roberta"
- "pytorch"
inference: false
license: "apache-2.0"
---
# SikuBERT
## Model description

Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-trained model specifically for the automatic processing of ancient texts. Using the verified, high-quality “Siku Quanshu” full-text corpus as the training set and the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks on ancient Chinese.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikuroberta")
model = AutoModel.from_pretrained("SIKU-BERT/sikuroberta")
```
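Since the model is trained for masked language modelling, it can also be queried through the fill-mask pipeline. A minimal sketch (the prompt below is an illustrative line from the Thousand Character Classic, and assumes the BERT-style `[MASK]` token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SIKU-BERT/sikuroberta")
print(fill_mask("天地玄黃,宇宙洪[MASK]。"))
```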
## About Us
We are from Nanjing Agricultural University.
> Created by SIKU-BERT [](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
|
VirenS13117/distilbert-base-uncased-finetuned-cola
|
VirenS13117
| 2021-09-21T22:22:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5286324175580216
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7809
- Matthews Correlation: 0.5286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `Trainer` setup follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
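As a rough sketch, the hyperparameters above correspond to a `Trainer` setup like the following; the model, dataset, and tokenization wiring are assumptions, not part of the original card:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Mirrors the list above; the Adam betas/epsilon and linear schedule are defaults.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)

dataset = load_dataset("glue", "cola")
encoded = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"], eval_dataset=encoded["validation"])
# trainer.train()
```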
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5299 | 1.0 | 535 | 0.5040 | 0.4383 |
| 0.3472 | 2.0 | 1070 | 0.5284 | 0.4911 |
| 0.2333 | 3.0 | 1605 | 0.6633 | 0.5091 |
| 0.1733 | 4.0 | 2140 | 0.7809 | 0.5286 |
| 0.1255 | 5.0 | 2675 | 0.8894 | 0.5282 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/boss_lady_fenja-ladyfenja_promo
|
huggingtweets
| 2021-09-21T16:19:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/boss_lady_fenja-ladyfenja_promo/1632241140819/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1424482960749776907/NL5l0P9Q_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1432371607977275395/j60VC-cp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">✨Boss Lady Fenja✨ 9.6% 🦋 & Boss_Lady_Fenja_promo</div>
<div style="text-align: center; font-size: 14px;">@boss_lady_fenja-ladyfenja_promo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ✨Boss Lady Fenja✨ 9.6% 🦋 & Boss_Lady_Fenja_promo.
| Data | ✨Boss Lady Fenja✨ 9.6% 🦋 | Boss_Lady_Fenja_promo |
| --- | --- | --- |
| Tweets downloaded | 3153 | 654 |
| Retweets | 380 | 240 |
| Short tweets | 646 | 160 |
| Tweets kept | 2127 | 254 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jpqrjjb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @boss_lady_fenja-ladyfenja_promo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10coew7p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10coew7p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/boss_lady_fenja-ladyfenja_promo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lighteternal/nli-xlm-r-greek
|
lighteternal
| 2021-09-21T16:01:42Z | 57 | 2 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-base",
"zero-shot-classification",
"el",
"en",
"dataset:multi_nli",
"dataset:snli",
"dataset:allnli_greek",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- el
- en
tags:
- xlm-roberta-base
datasets:
- multi_nli
- snli
- allnli_greek
metrics:
- accuracy
pipeline_tag: zero-shot-classification
widget:
- text: "Η Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας."
candidate_labels: "τεχνολογία, πολιτική, αθλητισμός"
multi_class: false
license: apache-2.0
---
# Cross-Encoder for Greek Natural Language Inference (Textual Entailment) & Zero-Shot Classification
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
The model was trained on the combined Greek+English version of the AllNLI dataset (the sum of [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)). The Greek part was created using the EN2EL NMT model available [here](https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased).
The model can be used in two ways:
* NLI/Textual Entailment: For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
* Zero-shot classification through the Huggingface pipeline: given a sentence and a set of labels/topics, it will output the likelihood of the sentence belonging to each of the topics. Under the hood, the logit for entailment between the sentence and each label is taken as the logit for the candidate label being valid (see the sketch below).
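For illustration, a minimal sketch of this mechanism, scoring a single candidate label by its entailment logit; the example sentence and the hypothesis template are assumptions:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained('lighteternal/nli-xlm-r-greek')
tokenizer = AutoTokenizer.from_pretrained('lighteternal/nli-xlm-r-greek')

sentence = 'Η Ελλάδα κέρδισε το Ευρωπαϊκό πρωτάθλημα.'  # illustrative sentence
label = 'αθλητισμός'
hypothesis = f'Αυτό το παράδειγμα είναι {label}.'  # illustrative template

features = tokenizer(sentence, hypothesis, return_tensors="pt")
model.eval()
with torch.no_grad():
    logits = model(**features).logits
# Label order is [contradiction, entailment, neutral], so index 1 is entailment.
print(logits[0, 1].item())
```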
## Performance
Evaluation on classification accuracy (entailment, contradiction, neutral) on mixed (Greek+English) AllNLI-dev set:
| Metric | Value |
| --- | --- |
| Accuracy | 0.8409 |
## To use the model for NLI/Textual Entailment
#### Usage with sentence_transformers
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('lighteternal/nli-xlm-r-greek')
scores = model.predict([('Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'),
('Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο'),
('Δυο γυναίκες μιλάνε στο κινητό', 'Το τραπέζι ήταν πράσινο')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
print(scores, labels)
# Οutputs
#[[-3.1526504 2.9981945 -0.3108107]
# [ 5.0549307 -2.757949 -1.6220676]
# [-0.5124733 -2.2671669 3.1630592]] ['entailment', 'contradiction', 'neutral']
```
#### Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('lighteternal/nli-xlm-r-greek')
tokenizer = AutoTokenizer.from_pretrained('lighteternal/nli-xlm-r-greek')
# The first list holds the premises and the second the hypotheses; pairs are formed position-wise.
features = tokenizer(['Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'],
['Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο.'],
padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## To use the model for Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='lighteternal/nli-xlm-r-greek')
sent = "Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας"
candidate_labels = ["πολιτική", "τεχνολογία", "αθλητισμός"]
res = classifier(sent, candidate_labels)
print(res)
#outputs:
#{'sequence': 'Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας', 'labels': ['τεχνολογία', 'αθλητισμός', 'πολιτική'], 'scores': [0.8380699157714844, 0.09086982160806656, 0.07106029987335205]}
```
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call).
### Citation info
Citation for the Greek model TBA.
Based on the work [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
Kudos to @nreimers (Nils Reimers) for his support on GitHub.
|
alexanderfalk/danbert-small-cased
|
alexanderfalk
| 2021-09-21T15:57:39Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"named entity recognition",
"token criticality",
"da",
"en",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- da
- en
thumbnail:
tags:
- named entity recognition
- token criticality
license: apache-2.0
datasets:
- custom danish dataset
inference: false
metrics:
- array of metric identifiers
---
# DanBERT
## Model description
DanBERT is a Danish pre-trained model based on BERT-Base. The pre-trained model has been trained on more than 2 million sentences and 40 million Danish words. The training was conducted as part of a thesis.
The model can be found at:
* [danbert-da](https://huggingface.co/alexanderfalk/danbert-small-cased)
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("alexanderfalk/danbert-small-cased")
model = AutoModel.from_pretrained("alexanderfalk/danbert-small-cased")
```
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020},
title={Anonymization of Danish, Real-Time Data, and Personalized Modelling},
author={Alexander Falk},
}
```
|
AkshaySg/LanguageIdentification
|
AkshaySg
| 2021-09-21T15:45:47Z | 2 | 0 | null |
[
"LID",
"spoken language recognition",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
language: multilingual
tags:
- LID
- spoken language recognition
license: apache-2.0
datasets:
- VoxLingua107
metrics:
- ER
inference: false
---
# Spoken Language Identification Model
## Model description
The model can classify a speech utterance according to the language spoken.
It covers the following languages: English, Indonesian, Japanese, Korean, Thai, Vietnamese, and Mandarin Chinese.
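The card does not name an inference library. If the model follows the SpeechBrain VoxLingua107 recipe (an assumption; none of the identifiers below are confirmed by this card), usage might look like the following sketch, where the audio path is a placeholder:
```python
from speechbrain.pretrained import EncoderClassifier

# Assumes the repository is laid out as a SpeechBrain EncoderClassifier checkpoint.
model = EncoderClassifier.from_hparams(source="AkshaySg/LanguageIdentification",
                                       savedir="tmp_lid")
# classify_file returns (posterior probabilities, best score, index, text labels).
out_prob, score, index, text_lab = model.classify_file("speech_sample.wav")
print(text_lab)
```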
|