pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (listlengths 0-201) | languages (listlengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (listlengths 0-722) | processed_texts (listlengths 1-723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null |
# danbooru-pretrained
- Repo: https://github.com/RF5/danbooru-pretrained
- https://github.com/RF5/danbooru-pretrained/releases/tag/v0.1
- https://github.com/RF5/danbooru-pretrained/releases/download/v0.1/resnet50-13306192.pth
- https://github.com/RF5/danbooru-pretrained/raw/master/config/class_names_6000.json
|
{}
|
public-data/danbooru-pretrained
| null |
[
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#has_space #region-us
|
# danbooru-pretrained
- Repo: URL
- URL
- URL
- URL
|
[
"# danbooru-pretrained\n\n- Repo: URL\n - URL\n - URL\n - URL"
] |
[
"TAGS\n#has_space #region-us \n",
"# danbooru-pretrained\n\n- Repo: URL\n - URL\n - URL\n - URL"
] |
null | null |
# yolov5_anime
- Repo: https://github.com/zymk9/yolov5_anime
- https://drive.google.com/file/d/1-MO9RYPZxnBfpNiGY6GdsqCeQWYNxBdl/view
|
{}
|
public-data/yolov5_anime
| null |
[
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#has_space #region-us
|
# yolov5_anime
- Repo: URL
- URL
|
[
"# yolov5_anime\n\n- Repo: URL\n - URL"
] |
[
"TAGS\n#has_space #region-us \n",
"# yolov5_anime\n\n- Repo: URL\n - URL"
] |
text-generation
|
transformers
|
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
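### Usage sketch
As a usage illustration (not part of the original card), the checkpoint can be loaded with the standard `transformers` Blenderbot classes; the class choice is an assumption based on the model's tags, and the prompt below is arbitrary.
```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model_name = "hyunwoongko/blenderbot-9B"  # this repository
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

# Encode a single user turn and generate a reply.
inputs = tokenizer("Hello, how are you today?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```
Note that a 9.4B-parameter checkpoint is large, so loading it requires substantial memory.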
|
{"language": ["en"], "license": "apache-2.0", "tags": ["convAI", "conversational", "facebook"], "datasets": ["blended_skill_talk"], "metrics": ["perplexity"]}
|
hyunwoongko/blenderbot-9B
| null |
[
"transformers",
"pytorch",
"blenderbot",
"text2text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"arxiv:1907.06616",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.06616"
] |
[
"en"
] |
TAGS
#transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## Model description
+ Paper: Recipes for building an open-domain chatbot
+ Original PARLAI Code
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
|
[
"## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code",
"### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models."
] |
[
"TAGS\n#transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code",
"### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models."
] |
text2text-generation
|
transformers
|
## KoBART-base-v2
With the addition of chat data, the model is trained to handle the semantics of longer sequences than KoBART.
```python
from transformers import PreTrainedTokenizerFast, BartModel
tokenizer = PreTrainedTokenizerFast.from_pretrained('hyunwoongko/kobart')
model = BartModel.from_pretrained('hyunwoongko/kobart')
```
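A short standalone sketch (not from the original card, with an arbitrary example sentence) showing how the loaded tokenizer and `BartModel` can be used for a forward pass:
```python
import torch
from transformers import PreTrainedTokenizerFast, BartModel

tokenizer = PreTrainedTokenizerFast.from_pretrained('hyunwoongko/kobart')
model = BartModel.from_pretrained('hyunwoongko/kobart')

# Encode a short Korean sentence and run the encoder-decoder forward pass.
inputs = tokenizer("안녕하세요.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```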
### Performance
NSMC
- acc. : 0.901
### hyunwoongko/kobart
- Added bos/eos post processor
- Removed token_type_ids
|
{"language": "ko", "license": "mit", "tags": ["bart"]}
|
hyunwoongko/kobart
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #bart #text2text-generation #ko #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## KoBART-base-v2
With the addition of chat data, the model is trained to handle the semantics of longer sequences than KoBART.
### Performance
NSMC
- acc. : 0.901
### hyunwoongko/kobart
- Added bos/eos post processor
- Removed token_type_ids
|
[
"## KoBART-base-v2\n\nWith the addition of chatting data, the model is trained to handle the semantics of sequences longer than KoBART.",
"### Performance \n\nNSMC\n- acc. : 0.901",
"### hyunwoongko/kobart\n- Added bos/eos post processor\n- Removed token_type_ids"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #ko #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## KoBART-base-v2\n\nWith the addition of chatting data, the model is trained to handle the semantics of sequences longer than KoBART.",
"### Performance \n\nNSMC\n- acc. : 0.901",
"### hyunwoongko/kobart\n- Added bos/eos post processor\n- Removed token_type_ids"
] |
text-generation
|
transformers
|
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["convAI", "conversational", "facebook"], "datasets": ["blended_skill_talk"], "metrics": ["perplexity"]}
|
hyunwoongko/reddit-3B
| null |
[
"transformers",
"pytorch",
"blenderbot",
"text2text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"arxiv:1907.06616",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.06616"
] |
[
"en"
] |
TAGS
#transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
## Model description
+ Paper: Recipes for building an open-domain chatbot
+ Original PARLAI Code
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
|
[
"## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code",
"### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models."
] |
[
"TAGS\n#transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code",
"### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models."
] |
text-generation
|
transformers
|
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["convAI", "conversational", "facebook"], "datasets": ["blended_skill_talk"], "metrics": ["perplexity"]}
|
hyunwoongko/reddit-9B
| null |
[
"transformers",
"pytorch",
"blenderbot",
"text2text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"arxiv:1907.06616",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.06616"
] |
[
"en"
] |
TAGS
#transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## Model description
+ Paper: Recipes for building an open-domain chatbot
+ Original PARLAI Code
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
|
[
"## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code",
"### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models."
] |
[
"TAGS\n#transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code",
"### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models."
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-xlsr-korean-senior
Further fine-tuned [fleek/wav2vec-large-xlsr-korean](https://huggingface.co/fleek/wav2vec-large-xlsr-korean) using the [AIhub 자유대화 음성(노인남녀)](https://aihub.or.kr/aidata/30704).
- Total train data size: 808,642
- Total valid data size: 159,970
When using this model, make sure that your speech input is sampled at 16kHz.
The script used for training can be found here: https://github.com/hyyoka/wav2vec2-korean-senior
### Inference
``` py
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import re
def clean_up(transcription):
    hangul = re.compile('[^ ㄱ-ㅣ가-힣]+')
    result = hangul.sub('', transcription)
    return result

model_name = "hyyoka/wav2vec2-xlsr-korean-senior"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# wav_file: path to a 16 kHz audio file to transcribe
speech_array, sampling_rate = torchaudio.load(wav_file)
feat = processor(speech_array[0],
sampling_rate=16000,
padding=True,
max_length=800000,
truncation=True,
return_attention_mask=True,
return_tensors="pt",
pad_token_id=49
)
input = {'input_values': feat['input_values'],'attention_mask':feat['attention_mask']}
outputs = model(**input, output_attentions=True)
logits = outputs.logits
predicted_ids = logits.argmax(axis=-1)
transcription = processor.decode(predicted_ids[0])
stt_result = clean_up(transcription)
```
|
{"language": "kr", "license": "apache-2.0", "tags": ["automatic-speech-recognition"], "datasets": ["aihub \uc790\uc720\ub300\ud654 \uc74c\uc131(\ub178\uc778\ub0a8\ub140)"]}
|
hyyoka/wav2vec2-xlsr-korean-senior
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"kr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"kr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #kr #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-xlsr-korean-senior
Further fine-tuned fleek/wav2vec-large-xlsr-korean using the AIhub 자유대화 음성(노인남녀).
- Total train data size: 808,642
- Total valid data size: 159,970
When using this model, make sure that your speech input is sampled at 16kHz.
The script used for training can be found here: URL
### Inference
|
[
"# wav2vec2-xlsr-korean-senior\n\nFuther fine-tuned fleek/wav2vec-large-xlsr-korean using the AIhub 자유대화 음성(노인남녀).\n\n- Total train data size: 808,642\n- Total vaild data size: 159,970\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nThe script used for training can be found here: URL",
"### Inference"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #kr #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-xlsr-korean-senior\n\nFuther fine-tuned fleek/wav2vec-large-xlsr-korean using the AIhub 자유대화 음성(노인남녀).\n\n- Total train data size: 808,642\n- Total vaild data size: 159,970\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nThe script used for training can be found here: URL",
"### Inference"
] |
null | null |
Hugging Face Test Model
|
{}
|
iSandro19/Hugging
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
Hugging Face Test Model
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
# Bender DialoGPT model
|
{"tags": ["conversational"]}
|
iamalpharius/GPT-Small-BenderBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Bender DialoGPT model
|
[
"# Bender DialoGPT model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Bender DialoGPT model"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec-OSR
Fine-tuned version of Facebook's wav2vec2 model for the speech-to-text module of [The Sound Of AI open source research group](https://thesoundofaiosr.github.io/).
The original base model is pretrained and fine-tuned on 960 hours of Librispeech 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
## Paper
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
## Abstract
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
The original model can also be found in the Hugging Face public model repository [here](https://huggingface.co/facebook/wav2vec2-base-960h).
## Usage
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, Wav2Vec2CTCTokenizer
from datasets import load_dataset
import soundfile as sf
import torch
# load tokenizer, data_processor and model
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("iamtarun/wav2vec-osr")
processor = Wav2Vec2Processor.from_pretrained("iamtarun/wav2vec-osr")
model = Wav2Vec2ForCTC.from_pretrained("iamtarun/wav2vec-osr")
model = model.eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
# define function to read in sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# speech data is passed to data processor whose output is then fed to model
input_values = processor(ds["speech"][:2], sampling_rate=16000, padding="longest", return_tensors="pt").input_values.to(device)
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim =-1)
transcriptions = tokenizer.batch_decode(predicted_ids)
print(transcriptions)
```
|
{"language": "en", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech to text"], "datasets": ["librispeech_asr"], "widget": [{"example_title": "OSR sample 1", "src": "https://github.com/TheSoundOfAIOSR/rg_speech_to_text/blob/main/data/finetuning-dataset/audiofiles/TA-5.wav?raw=true"}, {"example_title": "OSR sample 2", "src": "https://github.com/TheSoundOfAIOSR/rg_speech_to_text/blob/main/data/finetuning-dataset/audiofiles/TK-17.wav?raw=true"}, {"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
iamtarun/wav2vec-osr
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech to text",
"en",
"dataset:librispeech_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech to text #en #dataset-librispeech_asr #license-apache-2.0 #endpoints_compatible #region-us
|
# Wav2Vec-OSR
Finetuned facebook's wav2vec2 model for speech to text module of The Sound Of AI open source research group.
The original base model is pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
## Paper
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
## Abstract
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under URL
The original model can also be found in hugging face public model repository here
## Usage
|
[
"# Wav2Vec-OSR\nFinetuned facebook's wav2vec2 model for speech to text module of The Sound Of AI open source research group.\n\nThe original base model is pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.",
"## Paper\n\nAuthors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli",
"## Abstract\n\nWe show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.\n\nThe original model can be found under URL\nThe original model can also be found in hugging face public model repository here",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech to text #en #dataset-librispeech_asr #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Wav2Vec-OSR\nFinetuned facebook's wav2vec2 model for speech to text module of The Sound Of AI open source research group.\n\nThe original base model is pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.",
"## Paper\n\nAuthors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli",
"## Abstract\n\nWe show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.\n\nThe original model can be found under URL\nThe original model can also be found in hugging face public model repository here",
"## Usage"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
ianc89/hagrid
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-classification
|
transformers
|
# BERT-base-cased-qa-evaluator
This model takes a question answer pair as an input and outputs a value representing its prediction about whether the input was a valid question and answer pair or not. The model is a pretrained [BERT-base-cased](https://huggingface.co/bert-base-cased) with a sequence classification head.
## Intended uses
The QA evaluator was originally designed to be used with the [t5-base-question-generator](https://huggingface.co/iarfmoose/t5-base-question-generator) for evaluating the quality of generated questions.
The input for the QA evaluator follows the format for `BertForSequenceClassification`, but using the question and answer as the two sequences. Inputs should take the following format:
```
[CLS] <question> [SEP] <answer> [SEP]
```
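A minimal scoring sketch (not from the original card): it assumes the standard `transformers` sequence-classification API and that encoding the question and answer as a sentence pair produces the `[CLS] question [SEP] answer [SEP]` layout shown above; which logit index corresponds to "valid pair" is not documented here, so inspect `model.config.id2label`.
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model_name = "iarfmoose/bert-base-cased-qa-evaluator"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)
model.eval()

question = "What is the capital of France?"
answer = "Paris"

# Encoding the pair yields [CLS] question [SEP] answer [SEP].
encoding = tokenizer(question, answer, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
print(logits, model.config.id2label)
```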
## Limitations and bias
The model is trained to evaluate if a question and answer are semantically related, but cannot determine whether an answer is actually true/correct or not.
## Training data
The training data was made up of question-answer pairs from the following datasets:
- [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)
- [RACE](http://www.cs.cmu.edu/~glai1/data/race/)
- [CoQA](https://stanfordnlp.github.io/coqa/)
- [MSMARCO](https://microsoft.github.io/msmarco/)
## Training procedure
The question and answer were concatenated 50% of the time. In the other 50% of the time a corruption operation was performed (either swapping the answer for an unrelated answer, or by copying part of the question into the answer). The model was then trained to predict whether the input sequence represented one of the original QA pairs or a corrupted input.
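An illustrative sketch of that corruption scheme (a hypothetical helper, not the author's actual preprocessing code): half of the pairs are kept as positives, and the other half are corrupted by either swapping in an unrelated answer or copying part of the question into the answer.
```python
import random

def make_training_example(question, answer, unrelated_answers, rng=random):
    """Build one (question, answer, label) example following the 50/50 scheme described above."""
    if rng.random() < 0.5:
        return {"question": question, "answer": answer, "label": 1}  # original QA pair
    if rng.random() < 0.5:
        corrupted = rng.choice(unrelated_answers)  # swap in an unrelated answer
    else:
        words = question.split()
        corrupted = " ".join(words[: max(1, len(words) // 2)])  # copy part of the question
    return {"question": question, "answer": corrupted, "label": 0}  # corrupted pair
```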
|
{}
|
iarfmoose/bert-base-cased-qa-evaluator
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# BERT-base-cased-qa-evaluator
This model takes a question answer pair as an input and outputs a value representing its prediction about whether the input was a valid question and answer pair or not. The model is a pretrained BERT-base-cased with a sequence classification head.
## Intended uses
The QA evaluator was originally designed to be used with the t5-base-question-generator for evaluating the quality of generated questions.
The input for the QA evaluator follows the format for 'BertForSequenceClassification', but using the question and answer as the two sequences. Inputs should take the following format:
## Limitations and bias
The model is trained to evaluate if a question and answer are semantically related, but cannot determine whether an answer is actually true/correct or not.
## Training data
The training data was made up of question-answer pairs from the following datasets:
- SQuAD
- RACE
- CoQA
- MSMARCO
## Training procedure
The question and answer were concatenated 50% of the time. In the other 50% of the time a corruption operation was performed (either swapping the answer for an unrelated answer, or by copying part of the question into the answer). The model was then trained to predict whether the input sequence represented one of the original QA pairs or a corrupted input.
|
[
"# BERT-base-cased-qa-evaluator\n\nThis model takes a question answer pair as an input and outputs a value representing its prediction about whether the input was a valid question and answer pair or not. The model is a pretrained BERT-base-cased with a sequence classification head.",
"## Intended uses\n\nThe QA evaluator was originally designed to be used with the t5-base-question-generator for evaluating the quality of generated questions. \n\nThe input for the QA evaluator follows the format for 'BertForSequenceClassification', but using the question and answer as the two sequences. Inputs should take the following format:",
"## Limitations and bias\n\nThe model is trained to evaluate if a question and answer are semantically related, but cannot determine whether an answer is actually true/correct or not.",
"## Training data\n\nThe training data was made up of question-answer pairs from the following datasets: \n- SQuAD\n- RACE\n- CoQA\n- MSMARCO",
"## Training procedure\n\nThe question and answer were concatenated 50% of the time. In the other 50% of the time a corruption operation was performed (either swapping the answer for an unrelated answer, or by copying part of the question into the answer). The model was then trained to predict whether the input sequence represented one of the original QA pairs or a corrupted input."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# BERT-base-cased-qa-evaluator\n\nThis model takes a question answer pair as an input and outputs a value representing its prediction about whether the input was a valid question and answer pair or not. The model is a pretrained BERT-base-cased with a sequence classification head.",
"## Intended uses\n\nThe QA evaluator was originally designed to be used with the t5-base-question-generator for evaluating the quality of generated questions. \n\nThe input for the QA evaluator follows the format for 'BertForSequenceClassification', but using the question and answer as the two sequences. Inputs should take the following format:",
"## Limitations and bias\n\nThe model is trained to evaluate if a question and answer are semantically related, but cannot determine whether an answer is actually true/correct or not.",
"## Training data\n\nThe training data was made up of question-answer pairs from the following datasets: \n- SQuAD\n- RACE\n- CoQA\n- MSMARCO",
"## Training procedure\n\nThe question and answer were concatenated 50% of the time. In the other 50% of the time a corruption operation was performed (either swapping the answer for an unrelated answer, or by copying part of the question into the answer). The model was then trained to predict whether the input sequence represented one of the original QA pairs or a corrupted input."
] |
token-classification
|
transformers
|
# RoBERTa-base-bulgarian-POS
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) fine-tuned for part-of-speech tagging.
## Intended uses
The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.
An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py).
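A concrete sketch of the last-sub-token strategy described above (not the linked reference implementation; it assumes a fast tokenizer and that `config.id2label` maps class indices to UPOS tags):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "iarfmoose/roberta-base-bulgarian-pos"
# add_prefix_space=True is needed to feed pre-tokenized words to a RoBERTa tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_name, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(model_name)
model.eval()

words = "Аз обичам да чета книги".split()
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits[0]

# Keep the prediction of the *last* sub-token of each word.
last_subtoken = {}
for idx, word_id in enumerate(enc.word_ids(0)):
    if word_id is not None:
        last_subtoken[word_id] = idx

for word_id, word in enumerate(words):
    tag_id = logits[last_subtoken[word_id]].argmax(-1).item()
    print(word, model.config.id2label[tag_id])
```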
## Limitations and bias
The pretraining data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
In addition to the pretraining data used in [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB).
## Training procedure
The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 97% on the test set.
|
{"language": "bg"}
|
iarfmoose/roberta-base-bulgarian-pos
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"bg",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"bg"
] |
TAGS
#transformers #pytorch #tf #jax #roberta #token-classification #bg #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
# RoBERTa-base-bulgarian-POS
The RoBERTa model was originally introduced in this paper. This model is a version of RoBERTa-base-Bulgarian fine-tuned for part-of-speech tagging.
## Intended uses
The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.
An example of this can be found here.
## Limitations and bias
The pretraining data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
In addition to the pretraining data used in RoBERTa-base-Bulgarian, the model was trained on the UPOS tags from UD_Bulgarian-BTB.
## Training procedure
The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 97% on the test set.
|
[
"# RoBERTa-base-bulgarian-POS\r\n\r\n\r\nThe RoBERTa model was originally introduced in this paper. This model is a version of RoBERTa-base-Bulgarian fine-tuned for part-of-speech tagging.",
"## Intended uses\r\n\r\nThe model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.\r\n\r\nAn example of this can be found here.",
"## Limitations and bias\r\n\r\nThe pretraining data is unfiltered text from the internet and may contain all sorts of biases.",
"## Training data\r\n\r\nIn addition to the pretraining data used in RoBERTa-base-Bulgarian), the model was trained on the UPOS tags from UD_Bulgarian-BTB.",
"## Training procedure\r\n\r\nThe model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 97% on the test set."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #token-classification #bg #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# RoBERTa-base-bulgarian-POS\r\n\r\n\r\nThe RoBERTa model was originally introduced in this paper. This model is a version of RoBERTa-base-Bulgarian fine-tuned for part-of-speech tagging.",
"## Intended uses\r\n\r\nThe model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.\r\n\r\nAn example of this can be found here.",
"## Limitations and bias\r\n\r\nThe pretraining data is unfiltered text from the internet and may contain all sorts of biases.",
"## Training data\r\n\r\nIn addition to the pretraining data used in RoBERTa-base-Bulgarian), the model was trained on the UPOS tags from UD_Bulgarian-BTB.",
"## Training procedure\r\n\r\nThe model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 97% on the test set."
] |
fill-mask
|
transformers
|
# RoBERTa-base-bulgarian
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a version of [RoBERTa-base](https://huggingface.co/roberta-base) pretrained on Bulgarian text.
## Intended uses
This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.
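For example, a masked word can be predicted with the standard fill-mask pipeline (a minimal sketch, not from the original card; the example sentence is arbitrary):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="iarfmoose/roberta-base-bulgarian")
# <mask> is the RoBERTa mask token.
for prediction in fill_mask("Аз обичам да чета <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```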
## Limitations and bias
The training data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
This model was trained on the following data:
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
## Training procedure
The model was pretrained using a masked language-modeling objective with dynamic masking as described [here](https://huggingface.co/roberta-base#preprocessing)
It was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations.
|
{"language": "bg"}
|
iarfmoose/roberta-base-bulgarian
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"bg",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"bg"
] |
TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #bg #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
# RoBERTa-base-bulgarian
The RoBERTa model was originally introduced in this paper. This is a version of RoBERTa-base pretrained on Bulgarian text.
## Intended uses
This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.
## Limitations and bias
The training data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
This model was trained on the following data:
- bg_dedup from OSCAR
- Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection
- Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection
## Training procedure
The model was pretrained using a masked language-modeling objective with dynamic masking as described here
It was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations.
|
[
"# RoBERTa-base-bulgarian\r\n\r\n\r\nThe RoBERTa model was originally introduced in this paper. This is a version of RoBERTa-base pretrained on Bulgarian text.",
"## Intended uses\r\n\r\nThis model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.",
"## Limitations and bias\r\n\r\nThe training data is unfiltered text from the internet and may contain all sorts of biases.",
"## Training data\r\n\r\nThis model was trained on the following data:\r\n- bg_dedup from OSCAR\r\n- Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection\r\n- Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection",
"## Training procedure\r\n\r\nThe model was pretrained using a masked language-modeling objective with dynamic masking as described here\r\n\r\nIt was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #bg #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# RoBERTa-base-bulgarian\r\n\r\n\r\nThe RoBERTa model was originally introduced in this paper. This is a version of RoBERTa-base pretrained on Bulgarian text.",
"## Intended uses\r\n\r\nThis model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.",
"## Limitations and bias\r\n\r\nThe training data is unfiltered text from the internet and may contain all sorts of biases.",
"## Training data\r\n\r\nThis model was trained on the following data:\r\n- bg_dedup from OSCAR\r\n- Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection\r\n- Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection",
"## Training procedure\r\n\r\nThe model was pretrained using a masked language-modeling objective with dynamic masking as described here\r\n\r\nIt was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations."
] |
token-classification
|
transformers
|
# RoBERTa-small-bulgarian-POS
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-small-Bulgarian](https://huggingface.co/iarfmoose/roberta-small-bulgarian) fine-tuned for part-of-speech tagging.
## Intended uses
The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.
An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py).
## Limitations and bias
The pretraining data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
In addition to the pretraining data used in [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB).
## Training procedure
The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 98% on the test set.
|
{"language": "bg"}
|
iarfmoose/roberta-small-bulgarian-pos
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"bg",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"bg"
] |
TAGS
#transformers #pytorch #tf #jax #roberta #token-classification #bg #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
# RoBERTa-small-bulgarian-POS
The RoBERTa model was originally introduced in this paper. This model is a version of RoBERTa-small-Bulgarian fine-tuned for part-of-speech tagging.
## Intended uses
The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.
An example of this can be found here.
## Limitations and bias
The pretraining data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
In addition to the pretraining data used in RoBERTa-base-Bulgarian, the model was trained on the UPOS tags from UD_Bulgarian-BTB.
## Training procedure
The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 98% on the test set.
|
[
"# RoBERTa-small-bulgarian-POS\r\n\r\n\r\nThe RoBERTa model was originally introduced in this paper. This model is a version of RoBERTa-small-Bulgarian fine-tuned for part-of-speech tagging.",
"## Intended uses\r\n\r\nThe model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.\r\n\r\nAn example of this can be found here.",
"## Limitations and bias\r\n\r\nThe pretraining data is unfiltered text from the internet and may contain all sorts of biases.",
"## Training data\r\n\r\nIn addition to the pretraining data used in RoBERTa-base-Bulgarian), the model was trained on the UPOS tags from (UD_Bulgarian-BTB)[URL",
"## Training procedure\r\n\r\nThe model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 98% on the test set."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #token-classification #bg #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# RoBERTa-small-bulgarian-POS\r\n\r\n\r\nThe RoBERTa model was originally introduced in this paper. This model is a version of RoBERTa-small-Bulgarian fine-tuned for part-of-speech tagging.",
"## Intended uses\r\n\r\nThe model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.\r\n\r\nAn example of this can be found here.",
"## Limitations and bias\r\n\r\nThe pretraining data is unfiltered text from the internet and may contain all sorts of biases.",
"## Training data\r\n\r\nIn addition to the pretraining data used in RoBERTa-base-Bulgarian), the model was trained on the UPOS tags from (UD_Bulgarian-BTB)[URL",
"## Training procedure\r\n\r\nThe model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 98% on the test set."
] |
fill-mask
|
transformers
|
# RoBERTa-small-bulgarian
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a smaller version of [RoBERTa-base-bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) with only 6 hidden layers, but similar performance.
## Intended uses
This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.
## Limitations and bias
The training data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
This model was trained on the following data:
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
## Training procedure
The model was pretrained using a masked language-modeling objective with dynamic masking as described [here](https://huggingface.co/roberta-base#preprocessing)
It was trained for 160k steps. The batch size was limited to 8 due to GPU memory limitations.
|
{"language": "bg"}
|
iarfmoose/roberta-small-bulgarian
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"bg",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"bg"
] |
TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #bg #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
# RoBERTa-small-bulgarian
The RoBERTa model was originally introduced in this paper. This is a smaller version of RoBERTa-base-bulgarian with only 6 hidden layers, but similar performance.
## Intended uses
This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.
## Limitations and bias
The training data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
This model was trained on the following data:
- bg_dedup from OSCAR
- Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection
- Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection
## Training procedure
The model was pretrained using a masked language-modeling objective with dynamic masking as described here
It was trained for 160k steps. The batch size was limited to 8 due to GPU memory limitations.
|
[
"# RoBERTa-small-bulgarian\r\n\r\n\r\nThe RoBERTa model was originally introduced in this paper. This is a smaller version of RoBERTa-base-bulgarian with only 6 hidden layers, but similar performance.",
"## Intended uses\r\n\r\nThis model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.",
"## Limitations and bias\r\n\r\nThe training data is unfiltered text from the internet and may contain all sorts of biases.",
"## Training data\r\n\r\nThis model was trained on the following data:\r\n- bg_dedup from OSCAR\r\n- Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection\r\n- Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection",
"## Training procedure\r\n\r\nThe model was pretrained using a masked language-modeling objective with dynamic masking as described here\r\n\r\nIt was trained for 160k steps. The batch size was limited to 8 due to GPU memory limitations."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #bg #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# RoBERTa-small-bulgarian\r\n\r\n\r\nThe RoBERTa model was originally introduced in this paper. This is a smaller version of RoBERTa-base-bulgarian with only 6 hidden layers, but similar performance.",
"## Intended uses\r\n\r\nThis model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.",
"## Limitations and bias\r\n\r\nThe training data is unfiltered text from the internet and may contain all sorts of biases.",
"## Training data\r\n\r\nThis model was trained on the following data:\r\n- bg_dedup from OSCAR\r\n- Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection\r\n- Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection",
"## Training procedure\r\n\r\nThe model was pretrained using a masked language-modeling objective with dynamic masking as described here\r\n\r\nIt was trained for 160k steps. The batch size was limited to 8 due to GPU memory limitations."
] |
text2text-generation
|
transformers
|
# Model name
## Model description
This model is a sequence-to-sequence question generator which takes an answer and context as an input, and generates a question as an output. It is based on a pretrained `t5-base` model.
## Intended uses & limitations
The model is trained to generate reading comprehension-style questions with answers extracted from a text. The model performs best with full sentence answers, but can also be used with single word or short phrase answers.
#### How to use
The model takes concatenated answers and context as an input sequence, and will generate a full question sentence as an output sequence. The max sequence length is 512 tokens. Inputs should be organised into the following format:
```
<answer> answer text here <context> context text here
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
For best results, a large number of questions can be generated, and then filtered using [iarfmoose/bert-base-cased-qa-evaluator](https://huggingface.co/iarfmoose/bert-base-cased-qa-evaluator).
For examples, please see https://github.com/iarfmoose/question_generator.
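Below is a minimal sketch of this flow. The answer/context strings are placeholders, and the generation settings (beam search, output length) are illustrative assumptions rather than the authors' exact settings.
```python
# Minimal sketch: encode "<answer> ... <context> ..." and generate a question.
# The example answer/context and the generation settings are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("iarfmoose/t5-base-question-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("iarfmoose/t5-base-question-generator")

answer = "the Eiffel Tower"
context = "The Eiffel Tower in Paris was completed in 1889 for the World's Fair."
inputs = tokenizer(
    f"<answer> {answer} <context> {context}",
    return_tensors="pt",
    max_length=512,
    truncation=True,
)
output_ids = model.generate(inputs["input_ids"], max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```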
#### Limitations and bias
The model is limited to generating questions in the same style as those found in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), [CoQA](https://stanfordnlp.github.io/coqa/), and [MSMARCO](https://microsoft.github.io/msmarco/). The generated questions can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context and answer do not match, the generated question is likely to be incoherent.
## Training data
The model was fine-tuned on a dataset made up of several well-known QA datasets ([SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), [CoQA](https://stanfordnlp.github.io/coqa/), and [MSMARCO](https://microsoft.github.io/msmarco/)). The datasets were restructured by concatenating the answer and context fields into the previously-mentioned format. The question field from the datasets was used as the target during training. The full training set was roughly 200,000 examples.
## Training procedure
The model was trained for 20 epochs over the training set with a learning rate of 1e-3. The batch size was only 4 due to GPU memory limitations when training on Google Colab.
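For reference, a rough sketch of equivalent settings with `Seq2SeqTrainingArguments` is shown below; this is not the authors' training script, and the dataset objects are placeholders.
```python
# Rough sketch of the reported hyperparameters (20 epochs, learning rate 1e-3, batch size 4).
# This is not the original training script; train_data is a placeholder for the
# concatenated and reformatted QA datasets described above.
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-question-generator",
    num_train_epochs=20,
    learning_rate=1e-3,
    per_device_train_batch_size=4,
)
# trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=train_data, tokenizer=tokenizer)
# trainer.train()
```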
|
{}
|
iarfmoose/t5-base-question-generator
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model name
## Model description
This model is a sequence-to-sequence question generator which takes an answer and context as an input, and generates a question as an output. It is based on a pretrained 't5-base' model.
## Intended uses & limitations
The model is trained to generate reading comprehension-style questions with answers extracted from a text. The model performs best with full sentence answers, but can also be used with single word or short phrase answers.
#### How to use
The model takes concatenated answers and context as an input sequence, and will generate a full question sentence as an output sequence. The max sequence length is 512 tokens. Inputs should be organised into the following format:
The input sequence can then be encoded and passed as the 'input_ids' argument in the model's 'generate()' method.
For best results, a large number of questions can be generated, and then filtered using iarfmoose/bert-base-cased-qa-evaluator.
For examples, please see URL
#### Limitations and bias
The model is limited to generating questions in the same style as those found in SQuAD, CoQA, and MSMARCO. The generated questions can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context and answer do not match, the generated question is likely to be incoherent.
## Training data
The model was fine-tuned on a dataset made up of several well-known QA datasets (SQuAD, CoQA, and MSMARCO). The datasets were restructured by concatenating the answer and context fields into the previously-mentioned format. The question field from the datasets was used as the target during training. The full training set was roughly 200,000 examples.
## Training procedure
The model was trained for 20 epochs over the training set with a learning rate of 1e-3. The batch size was only 4 due to GPU memory limitations when training on Google Colab.
|
[
"# Model name",
"## Model description\n\nThis model is a sequence-to-sequence question generator which takes an answer and context as an input, and generates a question as an output. It is based on a pretrained 't5-base' model.",
"## Intended uses & limitations\n\nThe model is trained to generate reading comprehension-style questions with answers extracted from a text. The model performs best with full sentence answers, but can also be used with single word or short phrase answers.",
"#### How to use\n\nThe model takes concatenated answers and context as an input sequence, and will generate a full question sentence as an output sequence. The max sequence length is 512 tokens. Inputs should be organised into the following format:\n\nThe input sequence can then be encoded and passed as the 'input_ids' argument in the model's 'generate()' method.\n\nFor best results, a large number of questions can be generated, and then filtered using iarfmoose/bert-base-cased-qa-evaluator.\n\nFor examples, please see URL",
"#### Limitations and bias\n\nThe model is limited to generating questions in the same style as those found in SQuAD, CoQA, and MSMARCO. The generated questions can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context and answer do not match, the generated question is likely to be incoherent.",
"## Training data\n\nThe model was fine-tuned on a dataset made up of several well-known QA datasets (SQuAD, CoQA, and MSMARCO). The datasets were restructured by concatenating the answer and context fields into the previously-mentioned format. The question field from the datasets was used as the target during training. The full training set was roughly 200,000 examples.",
"## Training procedure\n\nThe model was trained for 20 epochs over the training set with a learning rate of 1e-3. The batch size was only 4 due to GPU memory limitations when training on Google Colab."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model name",
"## Model description\n\nThis model is a sequence-to-sequence question generator which takes an answer and context as an input, and generates a question as an output. It is based on a pretrained 't5-base' model.",
"## Intended uses & limitations\n\nThe model is trained to generate reading comprehension-style questions with answers extracted from a text. The model performs best with full sentence answers, but can also be used with single word or short phrase answers.",
"#### How to use\n\nThe model takes concatenated answers and context as an input sequence, and will generate a full question sentence as an output sequence. The max sequence length is 512 tokens. Inputs should be organised into the following format:\n\nThe input sequence can then be encoded and passed as the 'input_ids' argument in the model's 'generate()' method.\n\nFor best results, a large number of questions can be generated, and then filtered using iarfmoose/bert-base-cased-qa-evaluator.\n\nFor examples, please see URL",
"#### Limitations and bias\n\nThe model is limited to generating questions in the same style as those found in SQuAD, CoQA, and MSMARCO. The generated questions can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context and answer do not match, the generated question is likely to be incoherent.",
"## Training data\n\nThe model was fine-tuned on a dataset made up of several well-known QA datasets (SQuAD, CoQA, and MSMARCO). The datasets were restructured by concatenating the answer and context fields into the previously-mentioned format. The question field from the datasets was used as the target during training. The full training set was roughly 200,000 examples.",
"## Training procedure\n\nThe model was trained for 20 epochs over the training set with a learning rate of 1e-3. The batch size was only 4 due to GPU memory limitations when training on Google Colab."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\–\\—\\¬\\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 21.72 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Frisian/XLSR_Frisian.ipynb)
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Frisian/wav2vec2_fyNL_eval.ipynb)
|
{"language": "fy-NL", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Frisian by Adam Montgomerie", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fy-NL", "type": "common_voice", "args": "fy-NL"}, "metrics": [{"type": "wer", "value": 21.72, "name": "Test WER"}]}]}]}
|
iarfmoose/wav2vec2-large-xlsr-frisian
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fy-NL"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned facebook/wav2vec2-large-xlsr-53 in Frisian using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
Test Result: 21.72 %
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found here
A notebook of the evaluation script can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Frisian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Frisian using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Frisian test data of Common Voice.\n\n\n\n\nTest Result: 21.72 %",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here\n\nA notebook of the evaluation script can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Frisian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Frisian using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Frisian test data of Common Voice.\n\n\n\n\nTest Result: 21.72 %",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here\n\nA notebook of the evaluation script can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ky", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ky", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\–\\—\\¬\\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.71 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Kyrgyz/XLSR_Kyrgyz.ipynb)
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Kyrgyz/wav2vec2_ky_eval.ipynb)
|
{"language": "ky", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Kyrgyz by Adam Montgomerie", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ky", "type": "common_voice", "args": "ky"}, "metrics": [{"type": "wer", "value": 34.71, "name": "Test WER"}]}]}]}
|
iarfmoose/wav2vec2-large-xlsr-kyrgyz
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ky",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ky"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ky #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned facebook/wav2vec2-large-xlsr-53 in Kyrgyz using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice.
Test Result: 34.71 %
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found here
A notebook of the evaluation script can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Kyrgyz\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Kyrgyz using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Kyrgyz test data of Common Voice.\n\n\n\n\nTest Result: 34.71 %",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here\n\nA notebook of the evaluation script can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ky #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Kyrgyz\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Kyrgyz using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Kyrgyz test data of Common Voice.\n\n\n\n\nTest Result: 34.71 %",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here\n\nA notebook of the evaluation script can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Sorbian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Sorbian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hsb", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Sorbian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hsb", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\–\\—\\¬\\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 41.74 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Sorbian/XLSR_Sorbian.ipynb)
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Sorbian/wav2vec2_hsb_eval.ipynb)
|
{"language": "hsb", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Sorbian by Adam Montgomerie", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hsb", "type": "common_voice", "args": "hsb"}, "metrics": [{"type": "wer", "value": 41.74, "name": "Test WER"}]}]}]}
|
iarfmoose/wav2vec2-large-xlsr-sorbian
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hsb",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hsb"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hsb #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Sorbian
Fine-tuned facebook/wav2vec2-large-xlsr-53 in Sorbian using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Sorbian test data of Common Voice.
Test Result: 41.74 %
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found here
A notebook of the evaluation script can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Sorbian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Sorbian using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Sorbian test data of Common Voice.\n\n\n\n\nTest Result: 41.74 %",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here\n\nA notebook of the evaluation script can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hsb #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Sorbian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Sorbian using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Sorbian test data of Common Voice.\n\n\n\n\nTest Result: 41.74 %",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here\n\nA notebook of the evaluation script can be found here"
] |
null | null |
SenDM model described at https://arxiv.org/pdf/2201.02026
---
language:
- en
tags:
- discourse-markers
license: apache-2.0
---
|
{}
|
ibm/tslm-discourse-markers
| null |
[
"arxiv:2201.02026",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2201.02026"
] |
[] |
TAGS
#arxiv-2201.02026 #region-us
|
SenDM model described at URL
---
language:
- en
tags:
- discourse-markers
license: apache-2.0
---
|
[] |
[
"TAGS\n#arxiv-2201.02026 #region-us \n"
] |
image-classification
|
transformers
|
# swin-age-classifier
Trained for 80 epochs.
Data from: Ai Crowd - Blitz
ai-blitz-xiii - Age Prediction
https://www.aicrowd.com/challenges/ai-blitz-xiii/problems/age-prediction/
Notebook based on HuggingPics
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
ibombonato/swin-age-classifier
| null |
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #swin #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# swin-age-classifier
Trained on 80 epochs -
Data from: Ai Crowd - Blitz
ai-blitz-xiii - Age Prediction
URL
Notebook based on HuggingPics
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
|
[
"# swin-age-classifier\n\n\nTrained on 80 epochs - \n\nData from: Ai Crowd - Blitz \nai-blitz-xiii - Age Prediction\nURL\n\nNotebook based on HuggingPics\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo."
] |
[
"TAGS\n#transformers #pytorch #tensorboard #swin #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# swin-age-classifier\n\n\nTrained on 80 epochs - \n\nData from: Ai Crowd - Blitz \nai-blitz-xiii - Age Prediction\nURL\n\nNotebook based on HuggingPics\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo."
] |
image-classification
|
transformers
|
# vit-age-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
ibombonato/vit-age-classifier
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# vit-age-classifier
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
|
[
"# vit-age-classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo."
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# vit-age-classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo."
] |
text-classification
|
transformers
|
# XLMIndic Base Multiscript
This model is finetuned from [this model](https://huggingface.co/ibraheemmoosa/xlmindic-base-multiscript) on the Soham Bangla News Classification task, which is part of the IndicGLUE benchmark.
## Model description
This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 512 sequence length
## Training data
This model was fine-tuned on the Soham dataset, which is part of the IndicGLUE benchmark.
## Training procedure
### Preprocessing
The texts are tokenized using SentencePiece and a vocabulary size of 50,000.
### Training
The model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.
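A minimal fine-tuning sketch with these settings is shown below. It assumes the Soham task is available as the `sna.bn` configuration of the `indic_glue` dataset and that its text column is named `text`; these details, and the use of `Trainer`, are assumptions rather than a description of the exact training script.
```python
# Minimal sketch of the fine-tuning setup described above (8 epochs, batch size 16, lr 2e-5).
# The dataset configuration name ("sna.bn") and the column names are assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("indic_glue", "sna.bn")
tokenizer = AutoTokenizer.from_pretrained("ibraheemmoosa/xlmindic-base-multiscript")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)
num_labels = dataset["train"].features["label"].num_classes
model = AutoModelForSequenceClassification.from_pretrained(
    "ibraheemmoosa/xlmindic-base-multiscript", num_labels=num_labels)

args = TrainingArguments(
    output_dir="xlmindic-base-multiscript-soham",
    num_train_epochs=8,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
Trainer(model=model, args=args, train_dataset=dataset["train"],
        eval_dataset=dataset["validation"], tokenizer=tokenizer).train()
```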
## Evaluation results
See results specific to Soham in the following table.
### IndicGLUE
Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript | XLMIndic-Base-Multiscript (This Model)
-----| ----- | ----- | ------ | ------- | --------
Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76
Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26
Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58
BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50
Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49
INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69
INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23
IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84
IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20
MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33
Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21
## Intended uses & limitations
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Then you can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-multiscript')
>>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।"
>>> unmasker(text)
[{'score': 0.34163928031921387,
'token': 5399,
'token_str': 'কবি',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.30519795417785645,
'token': 33436,
'token_str': 'people',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি people, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.29130080342292786,
'token': 30476,
'token_str': 'সাহিত্যিক',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি সাহিত্যিক, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.031051287427544594,
'token': 6139,
'token_str': 'লেখক',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি লেখক, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.002705035964027047,
'token': 38443,
'token_str': 'শিল্পীরা',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি শিল্পীরা, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'}]
```
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.
## Contact
Feel free to contact us if you have any ideas or if you want to know more about our models.
- Ibraheem Muhammad Moosa (ibraheemmoosa1347@gmail.com)
- Mahmud Elahi Akhter (mahmud.akhter01@northsouth.edu)
- Ashfia Binte Habib
## BibTeX entry and citation info
```bibtex
@article{Moosa2022DoesTH,
title={Does Transliteration Help Multilingual Language Modeling?},
author={Ibraheem Muhammad Moosa and Mahmuda Akhter and Ashfia Binte Habib},
journal={ArXiv},
year={2022},
volume={abs/2201.12501}
}
```
|
{"language": ["as", "bn", "gu", "hi", "mr", "ne", "or", "pa", "si", "sa", "bpy", "bh", "gom", "mai"], "license": "apache-2.0", "tags": ["multilingual", "albert", "fill-mask", "xlmindic", "nlp", "indoaryan", "indicnlp", "iso15919", "text-classification"], "datasets": ["oscar"], "widget": [{"text": "\u099a\u09c0\u09a8\u09c7\u09b0 \u09ae\u09a7\u09cd\u09af\u09be\u099e\u09cd\u099a\u09b2\u09c7 \u0986\u09b0\u0993 \u098f\u0995\u099f\u09bf \u09b6\u09b9\u09b0\u09c7\u09b0 \u09ac\u09be\u09b8\u09bf\u09a8\u09cd\u09a6\u09be\u09b0\u09be \u0986\u09ac\u09be\u09b0 \u0998\u09b0\u09ac\u09a8\u09cd\u09a6\u09c0 \u09b9\u09df\u09c7 \u09aa\u09dc\u09c7\u099b\u09c7\u09a8\u0964 \u0986\u099c \u09ae\u0999\u09cd\u0997\u09b2\u09ac\u09be\u09b0 \u09a8\u09a4\u09c1\u09a8 \u0995\u09b0\u09c7 \u09b2\u0995\u09a1\u09be\u0989\u09a8\u2013\u09b8\u0982\u0995\u09cd\u09b0\u09be\u09a8\u09cd\u09a4 \u09ac\u09bf\u09a7\u09bf\u09a8\u09bf\u09b7\u09c7\u09a7 \u099c\u09be\u09b0\u09bf \u09b9\u0993\u09df\u09be\u09b0 \u09aa\u09b0 \u0998\u09b0\u09c7 \u0986\u099f\u0995\u09be \u09aa\u09dc\u09c7\u099b\u09c7\u09a8 \u09a4\u09be\u0981\u09b0\u09be\u0964 \u0995\u09b0\u09cb\u09a8\u09be\u09b0 \u0985\u09a4\u09bf \u09b8\u0982\u0995\u09cd\u09b0\u09be\u09ae\u0995 \u09a8\u09a4\u09c1\u09a8 \u09a7\u09b0\u09a8 \u0985\u09ae\u09bf\u0995\u09cd\u09b0\u09a8\u09c7\u09b0 \u09ac\u09bf\u09b8\u09cd\u09a4\u09be\u09b0 \u09a0\u09c7\u0995\u09be\u09a4\u09c7 \u098f\u09ae\u09a8 \u09aa\u09a6\u0995\u09cd\u09b7\u09c7\u09aa \u09a8\u09bf\u09df\u09c7\u099b\u09c7 \u0995\u09b0\u09cd\u09a4\u09c3\u09aa\u0995\u09cd\u09b7\u0964 \u0996\u09ac\u09b0 \u09ac\u09be\u09b0\u09cd\u09a4\u09be \u09b8\u0982\u09b8\u09cd\u09a5\u09be \u098f\u098f\u09ab\u09aa\u09bf\u09b0\u0964"}], "co2_eq_emissions": {"emissions": "0.21 in grams of CO2", "source": "calculated using this webstie https://mlco2.github.io/impact/#compute", "training_type": "fine-tuning", "geographical_location": "NA", "hardware_used": "P100 for about 1.5 hours"}}
|
ibraheemmoosa/xlmindic-base-multiscript-soham
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"albert",
"text-classification",
"multilingual",
"fill-mask",
"xlmindic",
"nlp",
"indoaryan",
"indicnlp",
"iso15919",
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"bh",
"gom",
"mai",
"dataset:oscar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"bh",
"gom",
"mai"
] |
TAGS
#transformers #pytorch #tf #jax #albert #text-classification #multilingual #fill-mask #xlmindic #nlp #indoaryan #indicnlp #iso15919 #as #bn #gu #hi #mr #ne #or #pa #si #sa #bpy #bh #gom #mai #dataset-oscar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
XLMIndic Base Multiscript
=========================
This model is finetuned from this model on Soham Bangla News Classification task which is part of the IndicGLUE benchmark.
Model description
-----------------
This model has the same configuration as the ALBERT Base v2 model. Specifically, this model has the following configuration:
* 12 repeating layers
* 128 embedding dimension
* 768 hidden dimension
* 12 attention heads
* 11M parameters
* 512 sequence length
Training data
-------------
This model was fine-tuned on Soham dataset that is part of the IndicGLUE benchmark.
Training procedure
------------------
### Preprocessing
The texts are tokenized using SentencePiece and a vocabulary size of 50,000.
### Training
The model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.
Evaluation results
------------------
See results specific to Soham in the following table.
### IndicGLUE
Intended uses & limitations
---------------------------
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Then you can use this model directly with a pipeline for masked language modeling:
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.
Contact
-------
Feel free to contact us if you have any ideas or if you want to know more about our models.
* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)
* Mahmud Elahi Akhter (mahmud.akhter01@URL)
* Ashfia Binte Habib
BibTeX entry and citation info
------------------------------
|
[
"### Preprocessing\n\n\nThe texts are tokenized using SentencePiece and a vocabulary size of 50,000.",
"### Training\n\n\nThe model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.\n\n\nEvaluation results\n------------------\n\n\nSee results specific to Soham in the following table.",
"### IndicGLUE\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThis model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nThen you can use this model directly with a pipeline for masked language modeling:",
"### Limitations and bias\n\n\nEven though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.\n\n\nContact\n-------\n\n\nFeel free to contact us if you have any ideas or if you want to know more about our models.\n\n\n* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)\n* Mahmud Elahi Akhter (mahmud.akhter01@URL)\n* Ashfia Binte Habib\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #albert #text-classification #multilingual #fill-mask #xlmindic #nlp #indoaryan #indicnlp #iso15919 #as #bn #gu #hi #mr #ne #or #pa #si #sa #bpy #bh #gom #mai #dataset-oscar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Preprocessing\n\n\nThe texts are tokenized using SentencePiece and a vocabulary size of 50,000.",
"### Training\n\n\nThe model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.\n\n\nEvaluation results\n------------------\n\n\nSee results specific to Soham in the following table.",
"### IndicGLUE\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThis model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nThen you can use this model directly with a pipeline for masked language modeling:",
"### Limitations and bias\n\n\nEven though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.\n\n\nContact\n-------\n\n\nFeel free to contact us if you have any ideas or if you want to know more about our models.\n\n\n* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)\n* Mahmud Elahi Akhter (mahmud.akhter01@URL)\n* Ashfia Binte Habib\n\n\nBibTeX entry and citation info\n------------------------------"
] |
fill-mask
|
transformers
|
# XLMIndic Base Multiscript
This model is identical in all aspects to [this model](https://huggingface.co/ibraheemmoosa/xlmindic-base-uniscript) except that we do not perform the ISO-15919 transliteration. Thus it is intended to serve as an ablation model for our study. See [this](https://huggingface.co/ibraheemmoosa/xlmindic-base-uniscript) to understand the details.
## Model description
This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 512 sequence length
## Training data
This model was pretrained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset which is a medium sized multilingual corpus containing text from 163 languages. We select a subset of 14 languages based on the following criteria:
- Belongs to the [Indo-Aryan language family](https://en.wikipedia.org/wiki/Indo-Aryan_languages).
- Uses a [Brahmic script](https://en.wikipedia.org/wiki/Brahmic_scripts).
These are the 14 languages we pretrain this model on:
- Assamese
- Bangla
- Bihari
- Bishnupriya Manipuri
- Goan Konkani
- Gujarati
- Hindi
- Maithili
- Marathi
- Nepali
- Oriya
- Panjabi
- Sanskrit
- Sinhala
## Training procedure
### Preprocessing
The texts are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
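For illustration, a tokenizer loaded from this checkpoint reproduces that layout when given a sentence pair. This is a minimal sketch; the exact casing of the decoded output depends on the tokenizer settings.
```python
# Minimal sketch: encoding a sentence pair yields "[CLS] A [SEP] B [SEP]".
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ibraheemmoosa/xlmindic-base-multiscript")
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
# expected to resemble: [CLS] sentence a [SEP] sentence b [SEP]
```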
### Training
The training objective is the same as in the original ALBERT.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
The details of the sentence order prediction example generation procedure for each sentence are the following (a short illustrative sketch of both procedures is given after this list):
- Split the sentence into two parts A and B at a random index.
- With 50% probability swap the two parts.
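The following is a minimal, self-contained sketch of these two example-generation steps. It only illustrates the proportions described above and is not the actual pretraining code.
```python
# Illustrative sketch of MLM masking and sentence order prediction (SOP) example
# generation, following the proportions described above (not the real pretraining code).
import random

def mask_tokens(tokens, mask_token="[MASK]", vocab=None, mask_prob=0.15):
    """Mask 15% of tokens: of those, 80% -> [MASK], 10% -> random token, 10% -> unchanged."""
    vocab = vocab or tokens
    masked = []
    for tok in tokens:
        if random.random() < mask_prob:
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)
            elif r < 0.9:
                masked.append(random.choice(vocab))
            else:
                masked.append(tok)
        else:
            masked.append(tok)
    return masked

def sop_example(tokens):
    """Split at a random index and swap the two parts with 50% probability (needs >= 2 tokens)."""
    split = random.randint(1, len(tokens) - 1)
    first, second = tokens[:split], tokens[split:]
    label = 0  # 0 = original order
    if random.random() < 0.5:
        first, second = second, first
        label = 1  # 1 = swapped
    return first, second, label
```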
The model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available at different branches of this repository. You can load these checkpoints by passing the `revision` parameter. For example to load the checkpoint at 500k you can use the following code.
```python
>>> AutoModel.from_pretrained('ibraheemmoosa/xlmindic-base-multiscript', revision='checkpoint_500k')
```
## Evaluation results
We evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the [IndicGLUE](https://huggingface.co/datasets/indic_glue) benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model.
### IndicGLUE
Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript | XLMIndic-Base-Multiscript (This Model)
-----| ----- | ----- | ------ | ------- | --------
Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76
Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26
Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58
BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50
Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49
INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69
INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23
IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84
IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20
MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33
Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21
## Intended uses & limitations
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Then you can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-multiscript')
>>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।"
>>> unmasker(text)
[{'score': 0.34163928031921387,
'token': 5399,
'token_str': 'কবি',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.30519795417785645,
'token': 33436,
'token_str': 'people',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি people, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.29130080342292786,
'token': 30476,
'token_str': 'সাহিত্যিক',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি সাহিত্যিক, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.031051287427544594,
'token': 6139,
'token_str': 'লেখক',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি লেখক, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.002705035964027047,
'token': 38443,
'token_str': 'শিল্পীরা',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি শিল্পীরা, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'}]
```
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus, the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important, you should take special care when relying on the model to make decisions.
## Contact
Feel free to contact us if you have any ideas or if you want to know more about our models.
- Ibraheem Muhammad Moosa (ibraheemmoosa1347@gmail.com)
- Mahmud Elahi Akhter (mahmud.akhter01@northsouth.edu)
- Ashfia Binte Habib
## BibTeX entry and citation info
```bibtex
@article{Moosa2022DoesTH,
title={Does Transliteration Help Multilingual Language Modeling?},
author={Ibraheem Muhammad Moosa and Mahmuda Akhter and Ashfia Binte Habib},
journal={ArXiv},
year={2022},
volume={abs/2201.12501}
}
```
|
{"language": ["as", "bn", "gu", "hi", "mr", "ne", "or", "pa", "si", "sa", "bpy", "bh", "gom", "mai"], "license": "apache-2.0", "tags": ["multilingual", "albert", "masked-language-modeling", "sentence-order-prediction", "fill-mask", "xlmindic", "nlp", "indoaryan", "indicnlp", "iso15919"], "datasets": ["oscar"], "widget": [{"text": "\u09b0\u09ac\u09c0\u09a8\u09cd\u09a6\u09cd\u09b0\u09a8\u09be\u09a5 \u09a0\u09be\u0995\u09c1\u09b0 \u098f\u09ab\u0986\u09b0\u098f\u098f\u09b8 (\u09ed \u09ae\u09c7 \u09e7\u09ee\u09ec\u09e7 - \u09ed \u0986\u0997\u09b8\u09cd\u099f \u09e7\u09ef\u09ea\u09e7; \u09e8\u09eb \u09ac\u09c8\u09b6\u09be\u0996 \u09e7\u09e8\u09ec\u09ee - \u09e8\u09e8 \u09b6\u09cd\u09b0\u09be\u09ac\u09a3 \u09e7\u09e9\u09ea\u09ee \u09ac\u0999\u09cd\u0997\u09be\u09ac\u09cd\u09a6) \u099b\u09bf\u09b2\u09c7\u09a8 \u0985\u0997\u09cd\u09b0\u09a3\u09c0 \u09ac\u09be\u0999\u09be\u09b2\u09bf [MASK], \u0994\u09aa\u09a8\u09cd\u09af\u09be\u09b8\u09bf\u0995, \u09b8\u0982\u0997\u09c0\u09a4\u09b8\u09cd\u09b0\u09b7\u09cd\u099f\u09be, \u09a8\u09be\u099f\u09cd\u09af\u0995\u09be\u09b0, \u099a\u09bf\u09a4\u09cd\u09b0\u0995\u09b0, \u099b\u09cb\u099f\u0997\u09b2\u09cd\u09aa\u0995\u09be\u09b0, \u09aa\u09cd\u09b0\u09be\u09ac\u09a8\u09cd\u09a7\u09bf\u0995, \u0985\u09ad\u09bf\u09a8\u09c7\u09a4\u09be, \u0995\u09a3\u09cd\u09a0\u09b6\u09bf\u09b2\u09cd\u09aa\u09c0 \u0993 \u09a6\u09be\u09b0\u09cd\u09b6\u09a8\u09bf\u0995\u0964 \u09e7\u09ef\u09e7\u09e9 \u09b8\u09be\u09b2\u09c7 \u0997\u09c0\u09a4\u09be\u099e\u09cd\u099c\u09b2\u09bf \u0995\u09be\u09ac\u09cd\u09af\u0997\u09cd\u09b0\u09a8\u09cd\u09a5\u09c7\u09b0 \u0987\u0982\u09b0\u09c7\u099c\u09bf \u0985\u09a8\u09c1\u09ac\u09be\u09a6\u09c7\u09b0 \u099c\u09a8\u09cd\u09af \u09a4\u09bf\u09a8\u09bf \u098f\u09b6\u09c0\u09af\u09bc\u09a6\u09c7\u09b0 \u09ae\u09a7\u09cd\u09af\u09c7 \u09b8\u09be\u09b9\u09bf\u09a4\u09cd\u09af\u09c7 \u09aa\u09cd\u09b0\u09a5\u09ae \u09a8\u09cb\u09ac\u09c7\u09b2 \u09aa\u09c1\u09b0\u09b8\u09cd\u0995\u09be\u09b0 \u09b2\u09be\u09ad \u0995\u09b0\u09c7\u09a8\u0964"}], "co2_eq_emissions": {"emissions": 28.53, "source": "calculated using this webstie https://mlco2.github.io/impact/#compute", "training_type": "pretraining", "geographical_location": "NA", "hardware_used": "TPUv3-8 for about 180 hours or 7.5 days"}}
|
ibraheemmoosa/xlmindic-base-multiscript
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"albert",
"pretraining",
"multilingual",
"masked-language-modeling",
"sentence-order-prediction",
"fill-mask",
"xlmindic",
"nlp",
"indoaryan",
"indicnlp",
"iso15919",
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"bh",
"gom",
"mai",
"dataset:oscar",
"license:apache-2.0",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"bh",
"gom",
"mai"
] |
TAGS
#transformers #pytorch #tf #jax #albert #pretraining #multilingual #masked-language-modeling #sentence-order-prediction #fill-mask #xlmindic #nlp #indoaryan #indicnlp #iso15919 #as #bn #gu #hi #mr #ne #or #pa #si #sa #bpy #bh #gom #mai #dataset-oscar #license-apache-2.0 #co2_eq_emissions #endpoints_compatible #region-us
|
XLMIndic Base Multiscript
=========================
This model is identical in all aspects to this model except that we do not perform the ISO-15919 transliteration. Thus it is intended to serve as an ablation model for our study. See this to understand the details.
Model description
-----------------
This model has the same configuration as the ALBERT Base v2 model. Specifically, this model has the following configuration:
* 12 repeating layers
* 128 embedding dimension
* 768 hidden dimension
* 12 attention heads
* 11M parameters
* 512 sequence length
Training data
-------------
This model was pretrained on the OSCAR dataset which is a medium sized multilingual corpus containing text from 163 languages. We select a subset of 14 languages based on the following criteria:
* Belongs to the Indo-Aryan language family.
* Uses a Brahmic script.
These are the 14 languages we pretrain this model on:
* Assamese
* Bangla
* Bihari
* Bishnupriya Manipuri
* Goan Konkani
* Gujarati
* Hindi
* Maithili
* Marathi
* Nepali
* Oriya
* Panjabi
* Sanskrit
* Sinhala
Training procedure
------------------
### Preprocessing
The texts are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are
then of the form:
### Training
Training objective is the same as the original ALBERT.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
The details of the sentence order prediction example generation procedure for each sentence are the following:
* Split the sentence into two parts A and B at a random index.
* With 50% probability swap the two parts.
The model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available at different branches of this repository. You can load these checkpoints by passing the 'revision' parameter. For example to load the checkpoint at 500k you can use the following code.
Evaluation results
------------------
We evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the IndicGLUE benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model.
### IndicGLUE
Intended uses & limitations
---------------------------
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Then you can use this model directly with a pipeline for masked language modeling:
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.
Contact
-------
Feel free to contact us if you have any ideas or if you want to know more about our models.
* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)
* Mahmud Elahi Akhter (mahmud.akhter01@URL)
* Ashfia Binte Habib
BibTeX entry and citation info
------------------------------
|
[
"### Preprocessing\n\n\nThe texts are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nTraining objective is the same as the original ALBERT.\n.\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nThe details of the sentence order prediction example generation procedure for each sentence are the following:\n\n\n* Split the sentence into two parts A and B at a random index.\n* With 50% probability swap the two parts.\n\n\nThe model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available at different branches of this repository. You can load these checkpoints by passing the 'revision' parameter. For example to load the checkpoint at 500k you can use the following code.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the IndicGLUE benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model.",
"### IndicGLUE\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThis model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nThen you can use this model directly with a pipeline for masked language modeling:",
"### Limitations and bias\n\n\nEven though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.\n\n\nContact\n-------\n\n\nFeel free to contact us if you have any ideas or if you want to know more about our models.\n\n\n* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)\n* Mahmud Elahi Akhter (mahmud.akhter01@URL)\n* Ashfia Binte Habib\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #albert #pretraining #multilingual #masked-language-modeling #sentence-order-prediction #fill-mask #xlmindic #nlp #indoaryan #indicnlp #iso15919 #as #bn #gu #hi #mr #ne #or #pa #si #sa #bpy #bh #gom #mai #dataset-oscar #license-apache-2.0 #co2_eq_emissions #endpoints_compatible #region-us \n",
"### Preprocessing\n\n\nThe texts are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nTraining objective is the same as the original ALBERT.\n.\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nThe details of the sentence order prediction example generation procedure for each sentence are the following:\n\n\n* Split the sentence into two parts A and B at a random index.\n* With 50% probability swap the two parts.\n\n\nThe model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available at different branches of this repository. You can load these checkpoints by passing the 'revision' parameter. For example to load the checkpoint at 500k you can use the following code.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the IndicGLUE benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model.",
"### IndicGLUE\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThis model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nThen you can use this model directly with a pipeline for masked language modeling:",
"### Limitations and bias\n\n\nEven though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.\n\n\nContact\n-------\n\n\nFeel free to contact us if you have any ideas or if you want to know more about our models.\n\n\n* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)\n* Mahmud Elahi Akhter (mahmud.akhter01@URL)\n* Ashfia Binte Habib\n\n\nBibTeX entry and citation info\n------------------------------"
] |
text-classification
|
transformers
|
# XLMIndic Base Uniscript
This model is finetuned from [this model](https://huggingface.co/ibraheemmoosa/xlmindic-base-uniscript) on the Soham Bangla News Classification task, which is part of the IndicGLUE benchmark. **Before pretraining this model we transliterate the text to [ISO-15919](https://en.wikipedia.org/wiki/ISO_15919) format using the [Aksharamukha](https://pypi.org/project/aksharamukha/)
library.** A demo of Aksharamukha library is hosted [here](https://aksharamukha.appspot.com/converter)
where you can transliterate your text and use it on our model on the inference widget.
## Model description
This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 512 sequence length
## Training data
This model was fine-tuned on the Soham dataset, which is part of the IndicGLUE benchmark.
## Transliteration
*The unique component of this model is that it takes in ISO-15919 transliterated text.*
The motivation behind this is as follows. When two languages share vocabulary, a machine learning model can exploit that to learn good cross-lingual representations. However, if these two languages use different writing scripts, it is difficult for a model to make the connection. Thus, if we can write the two languages in a single script, it is easier for the model to learn good cross-lingual representations.
For many of the scripts currently in use, there are standard transliteration schemes to convert to the Latin script. In particular, for the Indic scripts the ISO-15919 transliteration scheme is designed to consistently transliterate texts written in different Indic scripts to the Latin script.
An example of ISO-15919 transliteration for a piece of **Bangla** text is the following:
**Original:** "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক।"
**Transliterated:** 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika.'
Another example for a piece of **Hindi** text is the following:
**Original:** "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
**Transliterated:** "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
## Training procedure
### Preprocessing
The texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000.
### Training
The model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.
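For reference, a fine-tuning sketch roughly consistent with these hyperparameters is given below. The IndicGLUE configuration name (`sna.bn`), the column names and the label count are assumptions made for illustration and should be checked against the actual dataset; this is not the exact training script used for the reported results.
```python
from aksharamukha import transliterate
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed IndicGLUE configuration name for the Soham Bangla news task.
dataset = load_dataset('indic_glue', 'sna.bn')

tokenizer = AutoTokenizer.from_pretrained('ibraheemmoosa/xlmindic-base-uniscript')
model = AutoModelForSequenceClassification.from_pretrained(
    'ibraheemmoosa/xlmindic-base-uniscript', num_labels=6)  # assumed number of classes

def preprocess(batch):
    # Transliterate the Bangla text to ISO-15919 before tokenizing.
    romanized = [transliterate.process('Bengali', 'ISO', t) for t in batch['text']]
    return tokenizer(romanized, truncation=True, max_length=512)

encoded = dataset.map(preprocess, batched=True)

args = TrainingArguments(
    output_dir='xlmindic-base-uniscript-soham',
    num_train_epochs=8,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded['train'],
                  eval_dataset=encoded['validation'],
                  tokenizer=tokenizer)
trainer.train()
```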
## Evaluation results
See results specific to Soham in the following table.
### IndicGLUE
Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript (This Model) | XLMIndic-Base-Multiscript (Ablation Model)
-----| ----- | ----- | ------ | ------- | --------
Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76
Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26
Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58
BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50
Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49
INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69
INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23
IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84
IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20
MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33
Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21
## Intended uses & limitations
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telugu, Kannada, etc. share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
To use this model you will need to first install the [Aksharamukha](https://pypi.org/project/aksharamukha/) library.
```bash
pip install aksharamukha
```
Using this library you can transliterate any text written in Indic scripts in the following way:
```python
>>> from aksharamukha import transliterate
>>> text = "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
>>> transliterated_text = transliterate.process('autodetect', 'ISO', text)
>>> transliterated_text
"cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
```
The underlying base model can also be used directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> from aksharamukha import transliterate
>>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-uniscript')
>>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।"
>>> transliterated_text = transliterate.process('Bengali', 'ISO', text)
>>> transliterated_text
'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli [MASK], aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama [MASK] puraskāra lābha karēna.'
>>> unmasker(transliterated_text)
[{'score': 0.39705055952072144,
'token': 1500,
'token_str': 'abhinētā',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli abhinētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.20499080419540405,
'token': 3585,
'token_str': 'kabi',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.1314290314912796,
'token': 15402,
'token_str': 'rājanētā',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli rājanētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.060830358415842056,
'token': 3212,
'token_str': 'kalākāra',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kalākāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.035522934049367905,
'token': 11586,
'token_str': 'sāhityakāra',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli sāhityakāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}]
```
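For the fine-tuned classifier itself, a minimal usage sketch is shown below; the example sentence is arbitrary, and the exact label strings returned depend on the label mapping stored with this model, so the output (a list of dicts with `label` and `score` keys) is not reproduced here.
```python
>>> from transformers import pipeline
>>> from aksharamukha import transliterate
>>> classifier = pipeline('text-classification', model='ibraheemmoosa/xlmindic-base-uniscript-soham')
>>> text = "কলকাতায় আজ প্রবল বৃষ্টিতে জনজীবন ব্যাহত হয়েছে।"  # an arbitrary Bangla news-style sentence
>>> classifier(transliterate.process('Bengali', 'ISO', text))
```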
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus, the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important, you should take special care when relying on the model to make decisions.
## Contact
Feel free to contact us if you have any ideas or if you want to know more about our models.
- Ibraheem Muhammad Moosa (ibraheemmoosa1347@gmail.com)
- Mahmud Elahi Akhter (mahmud.akhter01@northsouth.edu)
- Ashfia Binte Habib
## BibTeX entry and citation info
Coming soon!
|
{"language": ["as", "bn", "gu", "hi", "mr", "ne", "or", "pa", "si", "sa", "bpy", "mai", "bh", "gom"], "license": "apache-2.0", "tags": ["multilingual", "albert", "xlmindic", "nlp", "indoaryan", "indicnlp", "iso15919", "transliteration", "text-classification"], "datasets": ["oscar"], "widget": [{"text": "c\u012bn\u0113ra madhy\u0101\u00f1cal\u0113 \u0101ra\u014d \u0113ka\u1e6di \u015bahar\u0113ra b\u0101sind\u0101r\u0101 \u0101b\u0101ra gharaband\u012b ha\u1e8f\u0113 pa\u1e5b\u0113ch\u0113na. \u0101ja ma\u1e45galab\u0101ra natuna kar\u0113 laka\u1e0d\u0101una\u2013sa\u1e41kr\u0101nta bidhini\u1e63\u0113dha j\u0101ri ha\u014d\u1e8f\u0101ra para ghar\u0113 \u0101\u1e6dak\u0101 pa\u1e5b\u0113ch\u0113na t\u0101m\u0310r\u0101. kar\u014dn\u0101ra ati sa\u1e41kr\u0101maka natuna dharana amikran\u0113ra bist\u0101ra \u1e6dh\u0113k\u0101t\u0113 \u0113mana padak\u1e63\u0113pa ni\u1e8f\u0113ch\u0113 kartr\u0325pak\u1e63a. khabara b\u0101rt\u0101 sa\u1e41sth\u0101 \u0113\u0113phapira."}], "co2_eq_emissions": {"emissions": "0.21 in grams of CO2", "source": "calculated using this webstie https://mlco2.github.io/impact/#compute", "training_type": "fine-tuning", "geographical_location": "NA", "hardware_used": "P100 for about 1.5 hours"}}
|
ibraheemmoosa/xlmindic-base-uniscript-soham
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"albert",
"text-classification",
"multilingual",
"xlmindic",
"nlp",
"indoaryan",
"indicnlp",
"iso15919",
"transliteration",
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"mai",
"bh",
"gom",
"dataset:oscar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"mai",
"bh",
"gom"
] |
TAGS
#transformers #pytorch #tf #jax #albert #text-classification #multilingual #xlmindic #nlp #indoaryan #indicnlp #iso15919 #transliteration #as #bn #gu #hi #mr #ne #or #pa #si #sa #bpy #mai #bh #gom #dataset-oscar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
XLMIndic Base Uniscript
=======================
This model is finetuned from this model on Soham Bangla News Classification task which is part of the IndicGLUE benchmark. Before pretraining this model we transliterate the text to ISO-15919 format using the Aksharamukha
library. A demo of Aksharamukha library is hosted here
where you can transliterate your text and use it on our model on the inference widget.
Model description
-----------------
This model has the same configuration as the ALBERT Base v2 model. Specifically, this model has the following configuration:
* 12 repeating layers
* 128 embedding dimension
* 768 hidden dimension
* 12 attention heads
* 11M parameters
* 512 sequence length
Training data
-------------
This model was fine-tuned on Soham dataset that is part of the IndicGLUE benchmark.
Transliteration
---------------
*The unique component of this model is that it takes in ISO-15919 transliterated text.*
The motivation behind this is as follows. When two languages share vocabulary, a machine learning model can exploit that to learn good cross-lingual representations. However, if these two languages use different writing scripts, it is difficult for a model to make the connection. Thus, if we can write the two languages in a single script, it is easier for the model to learn good cross-lingual representations.
For many of the scripts currently in use, there are standard transliteration schemes to convert to the Latin script. In particular, for the Indic scripts the ISO-15919 transliteration scheme is designed to consistently transliterate texts written in different Indic scripts to the Latin script.
An example of ISO-15919 transliteration for a piece of Bangla text is the following:
Original: "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক।"
Transliterated: 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika.'
Another example for a piece of Hindi text is the following:
Original: "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
Transliterated: "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
Training procedure
------------------
### Preprocessing
The texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000.
### Training
The model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.
Evaluation results
------------------
See results specific to Soham in the following table.
### IndicGLUE
Intended uses & limitations
---------------------------
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telugu, Kannada, etc. share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
To use this model you will need to first install the Aksharamukha library.
Using this library you can transliterate any text written in Indic scripts in the following way:
Then you can use this model directly with a pipeline for masked language modeling:
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.
Contact
-------
Feel free to contact us if you have any ideas or if you want to know more about our models.
* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)
* Mahmud Elahi Akhter (mahmud.akhter01@URL)
* Ashfia Binte Habib
BibTeX entry and citation info
------------------------------
Coming soon!
|
[
"### Preprocessing\n\n\nThe texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000.",
"### Training\n\n\nThe model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.\n\n\nEvaluation results\n------------------\n\n\nSee results specific to Soham in the following table.",
"### IndicGLUE\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThis model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telegu, Kannada etc share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).\n\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nTo use this model you will need to first install the Aksharamukha library.\n\n\nUsing this library you can transliterate any text wriiten in Indic scripts in the following way:\n\n\nThen you can use this model directly with a pipeline for masked language modeling:",
"### Limitations and bias\n\n\nEven though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.\n\n\nContact\n-------\n\n\nFeel free to contact us if you have any ideas or if you want to know more about our models.\n\n\n* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)\n* Mahmud Elahi Akhter (mahmud.akhter01@URL)\n* Ashfia Binte Habib\n\n\nBibTeX entry and citation info\n------------------------------\n\n\nComing soon!"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #albert #text-classification #multilingual #xlmindic #nlp #indoaryan #indicnlp #iso15919 #transliteration #as #bn #gu #hi #mr #ne #or #pa #si #sa #bpy #mai #bh #gom #dataset-oscar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Preprocessing\n\n\nThe texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000.",
"### Training\n\n\nThe model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.\n\n\nEvaluation results\n------------------\n\n\nSee results specific to Soham in the following table.",
"### IndicGLUE\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThis model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telegu, Kannada etc share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).\n\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nTo use this model you will need to first install the Aksharamukha library.\n\n\nUsing this library you can transliterate any text wriiten in Indic scripts in the following way:\n\n\nThen you can use this model directly with a pipeline for masked language modeling:",
"### Limitations and bias\n\n\nEven though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.\n\n\nContact\n-------\n\n\nFeel free to contact us if you have any ideas or if you want to know more about our models.\n\n\n* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)\n* Mahmud Elahi Akhter (mahmud.akhter01@URL)\n* Ashfia Binte Habib\n\n\nBibTeX entry and citation info\n------------------------------\n\n\nComing soon!"
] |
fill-mask
|
transformers
|
# XLMIndic Base Uniscript
This model is pretrained on a subset of the [OSCAR](https://huggingface.co/datasets/oscar) corpus spanning 14 Indo-Aryan languages. **Before pretraining this model we transliterate the text to [ISO-15919](https://en.wikipedia.org/wiki/ISO_15919) format using the [Aksharamukha](https://pypi.org/project/aksharamukha/)
library.** A demo of Aksharamukha library is hosted [here](https://aksharamukha.appspot.com/converter)
where you can transliterate your text and use it on our model on the inference widget.
## Model description
This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 512 sequence length
## Training data
This model was pretrained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset which is a medium sized multilingual corpus containing text from 163 languages. We select a subset of 14 languages based on the following criteria:
- Belongs to the [Indo-Aryan language family](https://en.wikipedia.org/wiki/Indo-Aryan_languages).
- Uses a [Brahmic script](https://en.wikipedia.org/wiki/Brahmic_scripts).
These are the 14 languages we pretrain this model on:
- Assamese
- Bangla
- Bihari
- Bishnupriya Manipuri
- Goan Konkani
- Gujarati
- Hindi
- Maithili
- Marathi
- Nepali
- Oriya
- Panjabi
- Sanskrit
- Sinhala
## Transliteration
*The unique component of this model is that it takes in ISO-15919 transliterated text.*
The motivation behind this is as follows. When two languages share vocabulary, a machine learning model can exploit that to learn good cross-lingual representations. However, if these two languages use different writing scripts, it is difficult for a model to make the connection. Thus, if we can write the two languages in a single script, it is easier for the model to learn good cross-lingual representations.
For many of the scripts currently in use, there are standard transliteration schemes to convert to the Latin script. In particular, for the Indic scripts the ISO-15919 transliteration scheme is designed to consistently transliterate texts written in different Indic scripts to the Latin script.
An example of ISO-15919 transliteration for a piece of **Bangla** text is the following:
**Original:** "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক।"
**Transliterated:** 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika.'
Another example for a piece of **Hindi** text is the following:
**Original:** "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
**Transliterated:** "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
## Training procedure
### Preprocessing
The texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
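As an illustration of this input format, a (transliterated) sentence pair can be encoded with the released tokenizer as sketched below; the two sentences are arbitrary, and decoding the resulting `input_ids` should show the `[CLS]`/`[SEP]` structure above.
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('ibraheemmoosa/xlmindic-base-uniscript')
>>> encoding = tokenizer('rabīndranātha ṭhākura chilēna agraṇī bāṅāli kabi.',
...                      'tini 1913 sālē sāhityē nōbēla puraskāra lābha karēna.')
>>> tokenizer.decode(encoding['input_ids'])
```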
### Training
Training objective is the same as the original ALBERT.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
The details of the sentence order prediction example generation procedure for each sentence are the following:
- Split the sentence into two parts A and B at a random index.
- With 50% probability swap the two parts.
The model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available on different branches of this repository. You can load these checkpoints by passing the `revision` parameter. For example, to load the checkpoint at 500k steps you can use the following code.
```python
>>> AutoModel.from_pretrained('ibraheemmoosa/xlmindic-base-uniscript', revision='checkpoint_500k')
```
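The same `revision` argument can also be passed to a pipeline if you want to probe one of the intermediate checkpoints directly; for example, a sketch using the 500k checkpoint branch:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask',
...                     model='ibraheemmoosa/xlmindic-base-uniscript',
...                     revision='checkpoint_500k')
```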
## Evaluation results
We evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the [IndicGLUE](https://huggingface.co/datasets/indic_glue) benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model. We compare with an [ablation model](https://huggingface.co/ibraheemmoosa/xlmindic-base-multiscript) that does not use transliteration and is instead trained on the original scripts.
### IndicGLUE
Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript (This Model) | XLMIndic-Base-Multiscript (Ablation Model)
-----| ----- | ----- | ------ | ------- | --------
Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76
Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26
Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58
BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50
Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49
INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69
INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23
IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84
IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20
MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33
Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21
## Intended uses & limitations
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telugu, Kannada, etc. share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
To use this model you will need to first install the [Aksharamukha](https://pypi.org/project/aksharamukha/) library.
```bash
pip install aksharamukha
```
Using this library you can transliterate any text written in Indic scripts in the following way:
```python
>>> from aksharamukha import transliterate
>>> text = "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
>>> transliterated_text = transliterate.process('autodetect', 'ISO', text)
>>> transliterated_text
"cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
```
Then you can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> from aksharamukha import transliterate
>>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-uniscript')
>>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।"
>>> transliterated_text = transliterate.process('Bengali', 'ISO', text)
>>> transliterated_text
'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli [MASK], aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama [MASK] puraskāra lābha karēna.'
>>> unmasker(transliterated_text)
[{'score': 0.39705055952072144,
'token': 1500,
'token_str': 'abhinētā',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli abhinētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.20499080419540405,
'token': 3585,
'token_str': 'kabi',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.1314290314912796,
'token': 15402,
'token_str': 'rājanētā',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli rājanētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.060830358415842056,
'token': 3212,
'token_str': 'kalākāra',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kalākāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.035522934049367905,
'token': 11586,
'token_str': 'sāhityakāra',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli sāhityakāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}]
```
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus, the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important, you should take special care when relying on the model to make decisions.
## Contact
Feel free to contact us if you have any ideas or if you want to know more about our models.
- Ibraheem Muhammad Moosa (ibraheemmoosa1347@gmail.com)
- Mahmud Elahi Akhter (mahmud.akhter01@northsouth.edu)
- Ashfia Binte Habib
## BibTeX entry and citation info
```bibtex
@article{Moosa2022DoesTH,
title={Does Transliteration Help Multilingual Language Modeling?},
author={Ibraheem Muhammad Moosa and Mahmuda Akhter and Ashfia Binte Habib},
journal={ArXiv},
year={2022},
volume={abs/2201.12501}
}
```
|
{"language": ["as", "bn", "gu", "hi", "mr", "ne", "or", "pa", "si", "sa", "bpy", "mai", "bh", "gom"], "license": "apache-2.0", "tags": ["multilingual", "albert", "masked-language-modeling", "sentence-order-prediction", "fill-mask", "xlmindic", "nlp", "indoaryan", "indicnlp", "iso15919", "transliteration"], "datasets": ["oscar"], "widget": [{"text": "rab\u012bndran\u0101tha \u1e6dh\u0101kura \u0113pha\u0101ra\u0113\u0113sa (7 m\u0113 1861 - 7 \u0101gas\u1e6da 1941; 25 bai\u015b\u0101kha 1268 - 22 \u015br\u0101ba\u1e47a 1348 ba\u1e45g\u0101bda) chil\u0113na agra\u1e47\u012b b\u0101\u1e45\u0101li [MASK], aupany\u0101sika, sa\u1e41g\u012btasra\u1e63\u1e6d\u0101, n\u0101\u1e6dyak\u0101ra, citrakara, ch\u014d\u1e6dagalpak\u0101ra, pr\u0101bandhika, abhin\u0113t\u0101, ka\u1e47\u1e6dha\u015bilp\u012b \u014d d\u0101r\u015banika. 1913 s\u0101l\u0113 g\u012bt\u0101\u00f1jali k\u0101byagranth\u0113ra i\u1e41r\u0113ji anub\u0101d\u0113ra janya tini \u0113\u015b\u012b\u1e8fad\u0113ra madhy\u0113 s\u0101hity\u0113 prathama n\u014db\u0113la purask\u0101ra l\u0101bha kar\u0113na."}], "co2_eq_emissions": {"emissions": 28.53, "source": "calculated using this webstie https://mlco2.github.io/impact/#compute", "training_type": "pretraining", "geographical_location": "NA", "hardware_used": "TPUv3-8 for about 180 hours or 7.5 days"}}
|
ibraheemmoosa/xlmindic-base-uniscript
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"albert",
"pretraining",
"multilingual",
"masked-language-modeling",
"sentence-order-prediction",
"fill-mask",
"xlmindic",
"nlp",
"indoaryan",
"indicnlp",
"iso15919",
"transliteration",
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"mai",
"bh",
"gom",
"dataset:oscar",
"license:apache-2.0",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"mai",
"bh",
"gom"
] |
TAGS
#transformers #pytorch #tf #jax #albert #pretraining #multilingual #masked-language-modeling #sentence-order-prediction #fill-mask #xlmindic #nlp #indoaryan #indicnlp #iso15919 #transliteration #as #bn #gu #hi #mr #ne #or #pa #si #sa #bpy #mai #bh #gom #dataset-oscar #license-apache-2.0 #co2_eq_emissions #endpoints_compatible #region-us
|
XLMIndic Base Uniscript
=======================
This model is pretrained on a subset of the OSCAR corpus spanning 14 Indo-Aryan languages. Before pretraining this model we transliterate the text to ISO-15919 format using the Aksharamukha
library. A demo of Aksharamukha library is hosted here
where you can transliterate your text and use it on our model on the inference widget.
Model description
-----------------
This model has the same configuration as the ALBERT Base v2 model. Specifically, this model has the following configuration:
* 12 repeating layers
* 128 embedding dimension
* 768 hidden dimension
* 12 attention heads
* 11M parameters
* 512 sequence length
Training data
-------------
This model was pretrained on the OSCAR dataset which is a medium sized multilingual corpus containing text from 163 languages. We select a subset of 14 languages based on the following criteria:
* Belongs to the Indo-Aryan language family.
* Uses a Brahmic script.
These are the 14 languages we pretrain this model on:
* Assamese
* Bangla
* Bihari
* Bishnupriya Manipuri
* Goan Konkani
* Gujarati
* Hindi
* Maithili
* Marathi
* Nepali
* Oriya
* Panjabi
* Sanskrit
* Sinhala
Transliteration
---------------
*The unique component of this model is that it takes in ISO-15919 transliterated text.*
The motivation behind this is as follows. When two languages share vocabulary, a machine learning model can exploit that to learn good cross-lingual representations. However, if these two languages use different writing scripts, it is difficult for a model to make the connection. Thus, if we can write the two languages in a single script, it is easier for the model to learn good cross-lingual representations.
For many of the scripts currently in use, there are standard transliteration schemes to convert to the Latin script. In particular, for the Indic scripts the ISO-15919 transliteration scheme is designed to consistently transliterate texts written in different Indic scripts to the Latin script.
An example of ISO-15919 transliteration for a piece of Bangla text is the following:
Original: "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক।"
Transliterated: 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika.'
Another example for a piece of Hindi text is the following:
Original: "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
Transliterated: "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
Training procedure
------------------
### Preprocessing
The texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are
then of the form:
### Training
Training objective is the same as the original ALBERT.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
The details of the sentence order prediction example generation procedure for each sentence are the following:
* Split the sentence into two parts A and B at a random index.
* With 50% probability swap the two parts.
The model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available at different branches of this repository. You can load these checkpoints by passing the 'revision' parameter. For example to load the checkpoint at 500k you can use the following code.
Evaluation results
------------------
We evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the IndicGLUE benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model. We compare with an ablation model that does not use transliteration and is instead trained on the original scripts.
### IndicGLUE
Intended uses & limitations
---------------------------
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telugu, Kannada etc. share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
To use this model you will need to first install the Aksharamukha library.
Using this library you can transliterate any text written in Indic scripts in the following way:
Then you can use this model directly with a pipeline for masked language modeling:
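A minimal end-to-end sketch is given below. It assumes the Aksharamukha Python package (installable with `pip install aksharamukha`) and its `transliterate.process` interface, and leaves the model id as a placeholder since it is not shown in this record.
```
from aksharamukha import transliterate
from transformers import pipeline

# Transliterate Bangla text to ISO-15919 (script names follow Aksharamukha's conventions).
text = "রবীন্দ্রনাথ ঠাকুর ছিলেন অগ্রণী বাঙালি কবি"
iso_text = transliterate.process("Bengali", "ISO", text)

# Run masked language modeling on the transliterated text.
# "<this-model-id>" is a placeholder for this model's Hub id.
unmasker = pipeline("fill-mask", model="<this-model-id>")
print(unmasker(iso_text + " [MASK]."))
```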
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.
Contact
-------
Feel free to contact us if you have any ideas or if you want to know more about our models.
* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)
* Mahmud Elahi Akhter (mahmud.akhter01@URL)
* Ashfia Binte Habib
BibTeX entry and citation info
------------------------------
|
[
"### Preprocessing\n\n\nThe texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nTraining objective is the same as the original ALBERT.\n.\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nThe details of the sentence order prediction example generation procedure for each sentence are the following:\n\n\n* Split the sentence into two parts A and B at a random index.\n* With 50% probability swap the two parts.\n\n\nThe model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available at different branches of this repository. You can load these checkpoints by passing the 'revision' parameter. For example to load the checkpoint at 500k you can use the following code.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the IndicGLUE benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model. We compare with an ablation model that do not use transliteration and is instead trained on original scripts.",
"### IndicGLUE\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThis model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telegu, Kannada etc share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).\n\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nTo use this model you will need to first install the Aksharamukha library.\n\n\nUsing this library you can transliterate any text wriiten in Indic scripts in the following way:\n\n\nThen you can use this model directly with a pipeline for masked language modeling:",
"### Limitations and bias\n\n\nEven though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.\n\n\nContact\n-------\n\n\nFeel free to contact us if you have any ideas or if you want to know more about our models.\n\n\n* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)\n* Mahmud Elahi Akhter (mahmud.akhter01@URL)\n* Ashfia Binte Habib\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #albert #pretraining #multilingual #masked-language-modeling #sentence-order-prediction #fill-mask #xlmindic #nlp #indoaryan #indicnlp #iso15919 #transliteration #as #bn #gu #hi #mr #ne #or #pa #si #sa #bpy #mai #bh #gom #dataset-oscar #license-apache-2.0 #co2_eq_emissions #endpoints_compatible #region-us \n",
"### Preprocessing\n\n\nThe texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nTraining objective is the same as the original ALBERT.\n.\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nThe details of the sentence order prediction example generation procedure for each sentence are the following:\n\n\n* Split the sentence into two parts A and B at a random index.\n* With 50% probability swap the two parts.\n\n\nThe model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available at different branches of this repository. You can load these checkpoints by passing the 'revision' parameter. For example to load the checkpoint at 500k you can use the following code.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the IndicGLUE benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model. We compare with an ablation model that do not use transliteration and is instead trained on original scripts.",
"### IndicGLUE\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThis model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telegu, Kannada etc share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).\n\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\n\nTo use this model you will need to first install the Aksharamukha library.\n\n\nUsing this library you can transliterate any text wriiten in Indic scripts in the following way:\n\n\nThen you can use this model directly with a pipeline for masked language modeling:",
"### Limitations and bias\n\n\nEven though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.\n\n\nContact\n-------\n\n\nFeel free to contact us if you have any ideas or if you want to know more about our models.\n\n\n* Ibraheem Muhammad Moosa (ibraheemmoosa1347@URL)\n* Mahmud Elahi Akhter (mahmud.akhter01@URL)\n* Ashfia Binte Habib\n\n\nBibTeX entry and citation info\n------------------------------"
] |
fill-mask
|
transformers
|
### SpaceBERT
This is one of the 3 further pre-trained models from the SpaceTransformers family presented in [SpaceTransformers: Language Modeling for Space Systems](https://ieeexplore.ieee.org/document/9548078). The original Git repo is [strath-ace/smart-nlp](https://github.com/strath-ace/smart-nlp).
The further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceBERT was further pre-trained on this domain-specific corpus from [BERT-Base (uncased)](https://huggingface.co/bert-base-uncased). In our paper, it is then fine-tuned for a Concept Recognition task.
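For quick reference, a minimal fill-mask usage sketch (not part of the original card; the example sentence is illustrative):
```
from transformers import pipeline

# SpaceBERT is a BERT-style masked language model, so it can be queried with a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="icelab/spacebert")
print(unmasker("The spacecraft shall perform an attitude [MASK] manoeuvre before separation."))
```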
### BibTeX entry and citation info
```
@ARTICLE{
9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659}
}
```
|
{"language": "en", "license": "mit"}
|
icelab/spacebert
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### SpaceBERT
This is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.
The further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceBERT was further pre-trained on this domain-specific corpus from BERT-Base (uncased). In our paper, it is then fine-tuned for a Concept Recognition task.
### BibTeX entry and citation info
|
[
"### SpaceBERT\n\nThis is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.\n\nThe further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceBERT was further pre-trained on this domain-specific corpus from BERT-Base (uncased). In our paper, it is then fine-tuned for a Concept Recognition task.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### SpaceBERT\n\nThis is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.\n\nThe further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceBERT was further pre-trained on this domain-specific corpus from BERT-Base (uncased). In our paper, it is then fine-tuned for a Concept Recognition task.",
"### BibTeX entry and citation info"
] |
token-classification
|
transformers
|
---
# spacebert_CR
### Model description
This is a fine-tuned SpaceSciBERT model, for a Concept Recognition task, from the SpaceTransformers model family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp. The [fine-tuning](https://github.com/strath-ace/smart-nlp/blob/master/SpaceTransformers/CR/CR_ECSS_dataset.json) dataset is available for download and consists of 874 unique manually annotated ECSS requirements.
The notebook for fine-tuning can be accessed in Google Colab:
[](https://colab.research.google.com/drive/1EGh9bdxq6RqIzbvKuptAWvmIBG2EQJzJ?usp=sharing)
### BibTeX entry and citation info
```
@ARTICLE{ 9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659} }
```
|
{"language": "en", "license": "mit", "widget": [{"text": "The CubeSat RF design shall either have one RF inhibit and a RF power output no greater than 1.5W at the transmitter antenna's RF input OR the CubeSat shall have a minimum of two independent RF inhibits (CDS 3.3.9) (ISO 5.5.6)."}]}
|
icelab/spacebert_CR
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #token-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
---
# spacebert_CR
### Model description
This is a fine-tuned SpaceSciBERT model, for a Concept Recognition task, from the SpaceTransformers model family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp. The fine-tuning dataset is available for download and consists of 874 unique manually annotated ECSS requirements.
The notebook for fine-tuning can be accessed in Google Colab:
### SpaceRoBERTa
This is one of the 3 further pre-trained models from the SpaceTransformers family presented in [SpaceTransformers: Language Modeling for Space Systems](https://ieeexplore.ieee.org/document/9548078). The original Git repo is [strath-ace/smart-nlp](https://github.com/strath-ace/smart-nlp).
The further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceRoBERTa was further pre-trained on this domain-specific corpus from [RoBERTa-Base](https://huggingface.co/roberta-base). In our paper, it is then fine-tuned for a Concept Recognition task.
### BibTeX entry and citation info
```
@ARTICLE{
9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659}
}
```
|
{"language": "en", "license": "mit"}
|
icelab/spaceroberta
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### SpaceRoBERTa
This is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.
The further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceRoBERTa was further pre-trained on this domain-specific corpus from RoBERTa-Base. In our paper, it is then fine-tuned for a Concept Recognition task.
### BibTeX entry and citation info
|
[
"### SpaceRoBERTa\n\nThis is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.\n\nThe further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceRoBERTa was further pre-trained on this domain-specific corpus from RoBERTa-Base. In our paper, it is then fine-tuned for a Concept Recognition task.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### SpaceRoBERTa\n\nThis is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.\n\nThe further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceRoBERTa was further pre-trained on this domain-specific corpus from RoBERTa-Base. In our paper, it is then fine-tuned for a Concept Recognition task.",
"### BibTeX entry and citation info"
] |
token-classification
|
transformers
|
---
# spaceroberta_CR
## Model description
This is a fine-tuned SpaceSciBERT model, for a Concept Recognition task, from the SpaceTransformers model family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp. The [fine-tuning](https://github.com/strath-ace/smart-nlp/blob/master/SpaceTransformers/CR/CR_ECSS_dataset.json) dataset is available for download and consists of 874 unique manually annotated ECSS requirements.
The notebook for fine-tuning can be accessed in Google Colab:
[](https://colab.research.google.com/drive/1EGh9bdxq6RqIzbvKuptAWvmIBG2EQJzJ?usp=sharing)
### BibTeX entry and citation info
```
@ARTICLE{ 9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659} }
```
|
{"language": "en", "license": "mit", "widget": [{"text": "The CubeSat RF design shall either have one RF inhibit and a RF power output no greater than 1.5W at the transmitter antenna's RF input OR the CubeSat shall have a minimum of two independent RF inhibits (CDS 3.3.9) (ISO 5.5.6)."}]}
|
icelab/spaceroberta_CR
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #token-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
---
# spaceroberta_CR
## Model description
This is a fine-tuned SpaceSciBERT model, for a Concept Recognition task, from the SpaceTransformers model family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp. The fine-tuning dataset is available for download and consists of 874 unique manually annotated ECSS requirements.
The notebook for fine-tuning can be accessed in Google Colab:
### SpaceSciBERT
This is one of the 3 further pre-trained models from the SpaceTransformers family presented in [SpaceTransformers: Language Modeling for Space Systems](https://ieeexplore.ieee.org/document/9548078). The original Git repo is [strath-ace/smart-nlp](https://github.com/strath-ace/smart-nlp).
The further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceSciBERT was further pre-trained on this domain-specific corpus from [SciBERT-SciVocab (uncased)](https://huggingface.co/allenai/scibert_scivocab_uncased). In our paper, it is then fine-tuned for a Concept Recognition task.
### BibTeX entry and citation info
```
@ARTICLE{
9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659}
}
```
|
{"language": "en", "license": "mit"}
|
icelab/spacescibert
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### SpaceSciBERT
This is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.
The further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceSciBERT was further pre-trained on this domain-specific corpus from SciBERT-SciVocab (uncased). In our paper, it is then fine-tuned for a Concept Recognition task.
### BibTeX entry and citation info
|
[
"### SpaceSciBERT\n\nThis is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.\n\nThe further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceSciBERT was further pre-trained on this domain-specific corpus from SciBERT-SciVocab (uncased). In our paper, it is then fine-tuned for a Concept Recognition task.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### SpaceSciBERT\n\nThis is one of the 3 further pre-trained models from the SpaceTransformers family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp.\n\nThe further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceSciBERT was further pre-trained on this domain-specific corpus from SciBERT-SciVocab (uncased). In our paper, it is then fine-tuned for a Concept Recognition task.",
"### BibTeX entry and citation info"
] |
token-classification
|
transformers
|
---
# spacescibert_CR
## Model description
This is a further fine-tuned SpaceSciBERT model from the SpaceTransformers model family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp. The [fine-tuning](https://github.com/strath-ace/smart-nlp/blob/master/SpaceTransformers/CR/CR_ECSS_dataset.json) dataset is available for download and consists of 874 unique manually annotated ECSS requirements.
The notebook for fine-tuning can be accessed in Google Colab:
[](https://colab.research.google.com/drive/1EGh9bdxq6RqIzbvKuptAWvmIBG2EQJzJ?usp=sharing)
### BibTeX entry and citation info
```
@ARTICLE{ 9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659} }
```
|
{"language": "en", "license": "mit", "widget": [{"text": "The CubeSat RF design shall either have one RF inhibit and a RF power output no greater than 1.5W at the transmitter antenna's RF input OR the CubeSat shall have a minimum of two independent RF inhibits (CDS 3.3.9) (ISO 5.5.6)."}]}
|
icelab/spacescibert_CR
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #token-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
---
# spacescibert_CR
## Model description
This is a further fine-tuned SpaceSciBERT model from the SpaceTransformers model family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp. The fine-tuning dataset is available for download and consists of 874 unique manually annotated ECSS requirements.
The notebook for fine-tuning can be accessed in Google Colab:
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 172506
## Validation Metrics
- Loss: 0.03257797285914421
- MSE: 0.03257797285914421
- MAE: 0.14246532320976257
- R2: 0.9693824457290849
- RMSE: 0.18049369752407074
- Explained Variance: 0.9699198007583618
## Usage
You can use cURL to access this model:
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("idjotherwise/autonlp-reading_prediction-172506")
tokenizer = AutoTokenizer.from_pretrained("idjotherwise/autonlp-reading_prediction-172506")
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["idjotherwise/autonlp-data-reading_prediction"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
idjotherwise/autonlp-reading_prediction-172506
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:idjotherwise/autonlp-data-reading_prediction",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #roberta #text-classification #autonlp #en #dataset-idjotherwise/autonlp-data-reading_prediction #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 172506
## Validation Metrics
- Loss: 0.03257797285914421
- MSE: 0.03257797285914421
- MAE: 0.14246532320976257
- R2: 0.9693824457290849
- RMSE: 0.18049369752407074
- Explained Variance: 0.9699198007583618
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Single Column Regression\n- Model ID: 172506",
"## Validation Metrics\n\n- Loss: 0.03257797285914421\n- MSE: 0.03257797285914421\n- MAE: 0.14246532320976257\n- R2: 0.9693824457290849\n- RMSE: 0.18049369752407074\n- Explained Variance: 0.9699198007583618",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #text-classification #autonlp #en #dataset-idjotherwise/autonlp-data-reading_prediction #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Single Column Regression\n- Model ID: 172506",
"## Validation Metrics\n\n- Loss: 0.03257797285914421\n- MSE: 0.03257797285914421\n- MAE: 0.14246532320976257\n- R2: 0.9693824457290849\n- RMSE: 0.18049369752407074\n- Explained Variance: 0.9699198007583618",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
null | null |
a
|
{}
|
idobegaming/idobegaming
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
a
|
[] |
[
"TAGS\n#region-us \n"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 4021083
## Validation Metrics
- Loss: 0.6848716735839844
- Accuracy: 0.8825910931174089
- Macro F1: 0.41301646762109634
- Micro F1: 0.8825910931174088
- Weighted F1: 0.863740586166105
- Macro Precision: 0.4129337301330573
- Micro Precision: 0.8825910931174089
- Weighted Precision: 0.8531335941587811
- Macro Recall: 0.44466614072309585
- Micro Recall: 0.8825910931174089
- Weighted Recall: 0.8825910931174089
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/idrimadrid/autonlp-creator_classifications-4021083
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("idrimadrid/autonlp-creator_classifications-4021083", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("idrimadrid/autonlp-creator_classifications-4021083", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["idrimadrid/autonlp-data-creator_classifications"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
idrimadrid/autonlp-creator_classifications-4021083
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:idrimadrid/autonlp-data-creator_classifications",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-idrimadrid/autonlp-data-creator_classifications #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 4021083
## Validation Metrics
- Loss: 0.6848716735839844
- Accuracy: 0.8825910931174089
- Macro F1: 0.41301646762109634
- Micro F1: 0.8825910931174088
- Weighted F1: 0.863740586166105
- Macro Precision: 0.4129337301330573
- Micro Precision: 0.8825910931174089
- Weighted Precision: 0.8531335941587811
- Macro Recall: 0.44466614072309585
- Micro Recall: 0.8825910931174089
- Weighted Recall: 0.8825910931174089
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 4021083",
"## Validation Metrics\n\n- Loss: 0.6848716735839844\n- Accuracy: 0.8825910931174089\n- Macro F1: 0.41301646762109634\n- Micro F1: 0.8825910931174088\n- Weighted F1: 0.863740586166105\n- Macro Precision: 0.4129337301330573\n- Micro Precision: 0.8825910931174089\n- Weighted Precision: 0.8531335941587811\n- Macro Recall: 0.44466614072309585\n- Micro Recall: 0.8825910931174089\n- Weighted Recall: 0.8825910931174089",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-idrimadrid/autonlp-data-creator_classifications #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 4021083",
"## Validation Metrics\n\n- Loss: 0.6848716735839844\n- Accuracy: 0.8825910931174089\n- Macro F1: 0.41301646762109634\n- Micro F1: 0.8825910931174088\n- Weighted F1: 0.863740586166105\n- Macro Precision: 0.4129337301330573\n- Micro Precision: 0.8825910931174089\n- Weighted Precision: 0.8531335941587811\n- Macro Recall: 0.44466614072309585\n- Micro Recall: 0.8825910931174089\n- Weighted Recall: 0.8825910931174089",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-generation
|
transformers
|
Please treat TILDE as a BertLMHeadModel model:
```
from transformers import BertLMHeadModel, BertTokenizerFast
model = BertLMHeadModel.from_pretrained("ielab/TILDE")
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
```
Github: https://github.com/ielab/TILDE
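For orientation, below is a rough sketch of TILDE-style query-likelihood scoring: the passage is encoded once, the LM head's distribution at the [CLS] position is treated as a term-independent likelihood over the vocabulary, and the query is scored by summing the log-likelihoods of its tokens. This is an illustration of the idea only, not the authors' exact preprocessing or scoring code; see the GitHub repo for that.
```
import torch
from transformers import BertLMHeadModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertLMHeadModel.from_pretrained("ielab/TILDE").eval()

passage = "The Apollo program landed the first humans on the Moon."
query = "moon landing mission"

# Encode the passage and take the LM-head distribution at the [CLS] position.
inputs = tokenizer(passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    cls_logits = model(**inputs).logits[:, 0, :]   # shape: [1, vocab_size]
log_probs = torch.log_softmax(cls_logits, dim=-1)

# Score the query as the sum of log-likelihoods of its (non-special) token ids.
query_ids = tokenizer(query, add_special_tokens=False, return_tensors="pt").input_ids[0]
score = log_probs[0, query_ids].sum()
print(float(score))
```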
|
{}
|
ielab/TILDE
| null |
[
"transformers",
"pytorch",
"bert",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-generation #autotrain_compatible #endpoints_compatible #region-us
|
Please treat TILDE as a BertLMHeadModel model:
Github: URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
TILDEv2 trained with passages expanded with TILDE (m=128)
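For context, the sketch below illustrates what "expanded with TILDE (m=128)" refers to: appending the top-m vocabulary terms from TILDE's passage-level distribution to the passage text before indexing. This is an illustration of the idea under that assumption, not the authors' expansion code (which also handles stopword and subword filtering); see the TILDE GitHub repo for the exact procedure.
```
import torch
from transformers import BertLMHeadModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tilde = BertLMHeadModel.from_pretrained("ielab/TILDE").eval()

def expand_passage(passage, m=128):
    inputs = tokenizer(passage, return_tensors="pt", truncation=True)
    with torch.no_grad():
        cls_logits = tilde(**inputs).logits[:, 0, :]          # [1, vocab_size]
    top_ids = cls_logits.topk(m, dim=-1).indices[0].tolist()
    # Append only terms that are not already in the passage.
    existing = set(inputs.input_ids[0].tolist())
    new_terms = [tokenizer.decode([i]) for i in top_ids if i not in existing]
    return passage + " " + " ".join(new_terms)

print(expand_passage("The Apollo program landed the first humans on the Moon.", m=128))
```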
|
{}
|
ielab/TILDEv2-TILDE128-exp
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
TILDEv2 trained with passages expanded with TILDE (m=128)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n"
] |
null |
transformers
|
TILDEv2 trained with passages expanded with TILDE (m=200)
|
{}
|
ielab/TILDEv2-TILDE200-exp
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
TILDEv2 trained with passages expanded with TILDE (m=200)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n"
] |
null |
transformers
|
uniCOIL trained with passages expanded with TILDE (m=128)
|
{}
|
ielab/unicoil-tilde128-msmarco-passage
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
uniCOIL trained with passages expanded with TILDE (m=128)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n"
] |
null |
transformers
|
uniCOIL trained with passages expanded with TILDE (m=200)
|
{}
|
ielab/unicoil-tilde200-msmarco-passage
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #has_space #region-us
|
uniCOIL trained with passages expanded with TILDE (m=200)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
`distilroberta-base` finetuned for masked language modeling on 126213 Qt jira issue titles for up to 50 epochs.
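A minimal usage sketch (the example title is illustrative):
```
from transformers import pipeline

# RoBERTa-style checkpoints use "<mask>" as the mask token.
unmasker = pipeline("fill-mask", model="ietz/distilroberta-base-finetuned-jira-qt-issue-title")
print(unmasker("Qt Creator crashes when opening a large <mask> file"))
```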
|
{"language": ["en"], "license": "mit", "tags": ["jira", "code", "issue", "development"]}
|
ietz/distilroberta-base-finetuned-jira-qt-issue-title
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"jira",
"code",
"issue",
"development",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #jira #code #issue #development #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
'distilroberta-base' finetuned for masked language modeling on 126213 Qt jira issue titles for up to 50 epochs.
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #jira #code #issue #development #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
`distilroberta-base` finetuned for masked language modeling on 247731 mixed issue titles (n=126213) and descriptions (n=121518). Trained for up to 50 epochs.
|
{"language": ["en"], "license": "mit", "tags": ["jira", "code", "issue", "development"]}
|
ietz/distilroberta-base-finetuned-jira-qt-issue-titles-and-bodies
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"jira",
"code",
"issue",
"development",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #jira #code #issue #development #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
'distilroberta-base' finetuned for masked language modeling on 247731 mixed issue titles (n=126213) and descriptions (n=121518). Trained for up to 50 epochs.
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #jira #code #issue #development #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IFIS_ZORK_AI_MEDIUM_HORROR
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
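Since the auto-generated card carries no usage information, here is a minimal text-generation sketch (the prompt and sampling settings are illustrative, not from the original card):
```
from transformers import pipeline

generator = pipeline("text-generation", model="ifis-zork/IFIS_ZORK_AI_MEDIUM_HORROR")
print(generator("You are standing in a dark cellar. A cold draft", max_length=60, do_sample=True)[0]["generated_text"])
```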
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "IFIS_ZORK_AI_MEDIUM_HORROR", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
ifis-zork/IFIS_ZORK_AI_MEDIUM_HORROR
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# IFIS_ZORK_AI_MEDIUM_HORROR
This model is a fine-tuned version of gpt2-medium on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
[
"# IFIS_ZORK_AI_MEDIUM_HORROR\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# IFIS_ZORK_AI_MEDIUM_HORROR\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_FANTASY
This model is a fine-tuned version of [ifis-zork/ZORK_AI_FAN_TEMP](https://huggingface.co/ifis-zork/ZORK_AI_FAN_TEMP) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "ZORK_AI_FANTASY", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
ifis-zork/ZORK_AI_FANTASY
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ZORK_AI_FANTASY
This model is a fine-tuned version of ifis-zork/ZORK_AI_FAN_TEMP on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
[
"# ZORK_AI_FANTASY\n\nThis model is a fine-tuned version of ifis-zork/ZORK_AI_FAN_TEMP on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ZORK_AI_FANTASY\n\nThis model is a fine-tuned version of ifis-zork/ZORK_AI_FAN_TEMP on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_FAN_TEMP
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "ZORK_AI_FAN_TEMP", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
ifis-zork/ZORK_AI_FAN_TEMP
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ZORK_AI_FAN_TEMP
This model is a fine-tuned version of gpt2-medium on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
[
"# ZORK_AI_FAN_TEMP\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ZORK_AI_FAN_TEMP\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_MODERN
This model is a fine-tuned version of [ifis-zork/ZORK_AI_MODERN_A](https://huggingface.co/ifis-zork/ZORK_AI_MODERN_A) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "ZORK_AI_MODERN", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
ifis-zork/ZORK_AI_MODERN
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ZORK_AI_MODERN
This model is a fine-tuned version of ifis-zork/ZORK_AI_MODERN_A on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
[
"# ZORK_AI_MODERN\n\nThis model is a fine-tuned version of ifis-zork/ZORK_AI_MODERN_A on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ZORK_AI_MODERN\n\nThis model is a fine-tuned version of ifis-zork/ZORK_AI_MODERN_A on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_MODERN_A
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "ZORK_AI_MODERN_A", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
ifis-zork/ZORK_AI_MODERN_A
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ZORK_AI_MODERN_A
This model is a fine-tuned version of gpt2-medium on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
[
"# ZORK_AI_MODERN_A\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ZORK_AI_MODERN_A\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_SCI_FI
This model is a fine-tuned version of [ifis-zork/ZORK_AI_SCI_FI_TEMP](https://huggingface.co/ifis-zork/ZORK_AI_SCI_FI_TEMP) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "ZORK_AI_SCI_FI", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
ifis-zork/ZORK_AI_SCI_FI
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ZORK_AI_SCI_FI
This model is a fine-tuned version of ifis-zork/ZORK_AI_SCI_FI_TEMP on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
[
"# ZORK_AI_SCI_FI\n\nThis model is a fine-tuned version of ifis-zork/ZORK_AI_SCI_FI_TEMP on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ZORK_AI_SCI_FI\n\nThis model is a fine-tuned version of ifis-zork/ZORK_AI_SCI_FI_TEMP on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_SCI_FI_TEMP
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "ZORK_AI_SCI_FI_TEMP", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
ifis-zork/ZORK_AI_SCI_FI_TEMP
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ZORK_AI_SCI_FI_TEMP
This model is a fine-tuned version of gpt2-medium on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
[
"# ZORK_AI_SCI_FI_TEMP\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ZORK_AI_SCI_FI_TEMP\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# MCU Peter Parker DialoGPT Model
|
{"tags": ["conversational"]}
|
ignkai/DialoGPT-medium-spider-man-updated
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# MCU Peter Parker DialoGPT Model
|
[
"# MCU Peter Parker DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# MCU Peter Parker DialoGPT Model"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2-sst2-membership
This model is a fine-tuned version of [ikevin98/bert-base-uncased-finetuned-sst2](https://huggingface.co/ikevin98/bert-base-uncased-finetuned-sst2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3100
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5125 | 1.0 | 3813 | 1.3100 | 1.0 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model_index": {"name": "bert-base-uncased-finetuned-sst2-sst2-membership"}}
|
doyoungkim/bert-base-uncased-finetuned-sst2-sst2-membership
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-sst2-sst2-membership
================================================
This model is a fine-tuned version of ikevin98/bert-base-uncased-finetuned-sst2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3100
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.8.1
* Datasets 1.11.0
* Tokenizers 0.10.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1666 | 1.0 | 2105 | 0.2403 | 0.9232 |
| 0.1122 | 2.0 | 4210 | 0.2716 | 0.9266 |
| 0.0852 | 3.0 | 6315 | 0.3150 | 0.9232 |
| 0.056 | 4.0 | 8420 | 0.3209 | 0.9163 |
| 0.0344 | 5.0 | 10525 | 0.3740 | 0.9243 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
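
As the card leaves usage unspecified, here is a minimal inference sketch with the standard `text-classification` pipeline; the example sentence is invented, and the label names shown at runtime depend on the checkpoint's config (GLUE SST-2 conventionally maps `LABEL_0`/`LABEL_1` to negative/positive).

```python
from transformers import pipeline

# Minimal sketch: classify a sentence with the fine-tuned SST-2 checkpoint.
classifier = pipeline(
    "text-classification",
    model="doyoungkim/bert-base-uncased-finetuned-sst2",
)
print(classifier("A touching and beautifully acted film."))
```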
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "bert-base-uncased-finetuned-sst2", "results": [{"dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.926605504587156}}]}]}
|
doyoungkim/bert-base-uncased-finetuned-sst2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-sst2
================================
This model is a fine-tuned version of bert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2716
* Accuracy: 0.9266
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.8.1
* Datasets 1.11.0
* Tokenizers 0.10.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-distilled
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2676
- Accuracy: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3797 | 1.0 | 2105 | 0.2512 | 0.9002 |
| 0.3036 | 2.0 | 4210 | 0.2643 | 0.8933 |
| 0.2609 | 3.0 | 6315 | 0.2831 | 0.8956 |
| 0.2417 | 4.0 | 8420 | 0.2676 | 0.9025 |
| 0.2305 | 5.0 | 10525 | 0.2740 | 0.9025 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
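
The card does not describe the distillation objective itself. A common formulation blends a temperature-scaled KL term against the teacher's soft targets with the usual hard-label cross-entropy; the sketch below shows that generic recipe and is an assumption about how "distilled" was realized here, not the actual training code.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Generic knowledge-distillation loss: soft-target KL + hard-label CE.
    temperature and alpha are illustrative defaults, not values from this model."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```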
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model_index": {"name": "bert-base-uncased-sst2-distilled"}}
|
doyoungkim/bert-base-uncased-sst2-distilled
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-sst2-distilled
================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2676
* Accuracy: 0.9025
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.8.1
* Datasets 1.11.0
* Tokenizers 0.10.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-membership-attack
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6296
- Accuracy: 0.8681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6921 | 1.0 | 3813 | 0.6263 | 0.8360 |
| 0.6916 | 2.0 | 7626 | 0.6296 | 0.8681 |
| 0.6904 | 3.0 | 11439 | 0.6105 | 0.8406 |
| 0.6886 | 4.0 | 15252 | 0.6192 | 0.8200 |
| 0.6845 | 5.0 | 19065 | 0.6250 | 0.7798 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model_index": {"name": "bert-base-uncased-sst2-membership-attack"}}
|
doyoungkim/bert-base-uncased-sst2-membership-attack
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-sst2-membership-attack
========================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6296
* Accuracy: 0.8681
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.8.1
* Datasets 1.11.0
* Tokenizers 0.10.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned-en-to-ru
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7682
- Bleu: 14.6112
- Gen Len: 7.202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.3198 | 1.0 | 4956 | 2.1261 | 9.5339 | 6.7709 |
| 1.9732 | 2.0 | 9912 | 1.9639 | 10.4715 | 7.1254 |
| 1.7127 | 3.0 | 14868 | 1.8780 | 11.6128 | 7.1106 |
| 1.5614 | 4.0 | 19824 | 1.8367 | 12.8389 | 7.0468 |
| 1.4276 | 5.0 | 24780 | 1.8040 | 13.7423 | 7.0403 |
| 1.3096 | 6.0 | 29736 | 1.7820 | 14.1469 | 7.0555 |
| 1.2381 | 7.0 | 34692 | 1.7761 | 13.9987 | 7.2225 |
| 1.1784 | 8.0 | 39648 | 1.7725 | 14.4675 | 7.1799 |
| 1.1376 | 9.0 | 44604 | 1.7692 | 14.4937 | 7.1957 |
| 1.0862 | 10.0 | 49560 | 1.7682 | 14.6112 | 7.202 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
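
As a usage note appended to this card, the sketch below shows English→Russian inference with the standard translation pipeline; the input sentence and the `max_length` value are made up for illustration.

```python
from transformers import pipeline

# Minimal sketch: translate English to Russian with the fine-tuned checkpoint.
translator = pipeline(
    "translation_en_to_ru",
    model="ilevs/opus-mt-en-ru-finetuned-en-to-ru",
)
print(translator("How are you today?", max_length=64))
```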
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-en-ru-finetuned-en-to-ru", "results": []}]}
|
ilevs/opus-mt-en-ru-finetuned-en-to-ru
| null |
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
opus-mt-en-ru-finetuned-en-to-ru
================================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ru on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7682
* Bleu: 14.6112
* Gen Len: 7.202
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ru-en-finetuned-ru-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1251
- Bleu: 15.9892
- Gen Len: 5.0168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.6914 | 1.0 | 4956 | 2.5116 | 11.1411 | 4.9989 |
| 2.2161 | 2.0 | 9912 | 2.3255 | 11.7334 | 5.1678 |
| 1.9237 | 3.0 | 14868 | 2.2388 | 13.6802 | 5.1463 |
| 1.7087 | 4.0 | 19824 | 2.1892 | 13.8815 | 5.0625 |
| 1.5423 | 5.0 | 24780 | 2.1586 | 14.8182 | 5.0779 |
| 1.3909 | 6.0 | 29736 | 2.1445 | 14.3603 | 5.2194 |
| 1.3041 | 7.0 | 34692 | 2.1323 | 16.2138 | 5.0438 |
| 1.2078 | 8.0 | 39648 | 2.1275 | 16.2574 | 5.0165 |
| 1.1523 | 9.0 | 44604 | 2.1255 | 16.0368 | 5.014 |
| 1.1005 | 10.0 | 49560 | 2.1251 | 15.9892 | 5.0168 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-ru-en-finetuned-ru-to-en", "results": []}]}
|
ilevs/opus-mt-ru-en-finetuned-ru-to-en
| null |
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
opus-mt-ru-en-finetuned-ru-to-en
================================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-ru-en on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1251
* Bleu: 15.9892
* Gen Len: 5.0168
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# DialoGPT Model
|
{"tags": ["conversational"]}
|
ilikeapple12/DialoGPT-small-Phos
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
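
The card reports only the cross-entropy loss; for a causal LM this is usually read as perplexity via `exp(loss)`. A quick check of the numbers in the table above (interpreting them as token-level cross-entropy, the default Trainer language-modeling objective, which is an assumption here):

```python
import math

# Convert the reported validation losses to perplexity.
for epoch, loss in [(1, 3.6655), (2, 3.6455), (3, 3.6424)]:
    print(f"epoch {epoch}: loss={loss:.4f} -> perplexity ~ {math.exp(loss):.1f}")
```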
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned-wikitext2", "results": []}]}
|
iliketurtles/distilgpt2-finetuned-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
distilgpt2-finetuned-wikitext2
==============================
This model is a fine-tuned version of distilgpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.6424
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# camembert-base-fquad
## Description
A native French Question Answering model [CamemBERT-base](https://camembert-model.fr/) fine-tuned on [FQuAD](https://fquad.illuin.tech/).
## Evaluation results
On the development set.
```shell
{"f1": 88.1, "exact_match": 78.1}
```
On the test set.
```shell
{"f1": 88.3, "exact_match": 78.0}
```
## Usage
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='illuin/camembert-base-fquad', tokenizer='illuin/camembert-base-fquad')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```
## Citation
If you use our work, please cite:
```bibtex
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
|
{"language": "fr", "license": "gpl-3.0", "tags": ["question-answering", "camembert"], "datasets": ["fquad"]}
|
illuin/camembert-base-fquad
| null |
[
"transformers",
"pytorch",
"camembert",
"question-answering",
"fr",
"dataset:fquad",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #question-answering #fr #dataset-fquad #license-gpl-3.0 #endpoints_compatible #region-us
|
# camembert-base-fquad
## Description
A native French Question Answering model CamemBERT-base fine-tuned on FQuAD.
## Evaluation results
On the development set.
On the test set.
## Usage
If you use our work, please cite:
|
[
"# camembert-base-fquad",
"## Description\n\nA native French Question Answering model CamemBERT-base fine-tuned on FQuAD.",
"## Evaluation results\n\nOn the development set.\n\n\n\nOn the test set.",
"## Usage\n\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #camembert #question-answering #fr #dataset-fquad #license-gpl-3.0 #endpoints_compatible #region-us \n",
"# camembert-base-fquad",
"## Description\n\nA native French Question Answering model CamemBERT-base fine-tuned on FQuAD.",
"## Evaluation results\n\nOn the development set.\n\n\n\nOn the test set.",
"## Usage\n\n\n\nIf you use our work, please cite:"
] |
null | null |
---
tags:
- conversational
---

# Harry Potter DialoGPT Model
|
{}
|
imdhamu/DialoGPT-small-harrypotter
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
---
tags:
- conversational
#Harry Potter DialoGPT Model
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 13 | 8.1476 |
| No log | 2.0 | 26 | 7.4435 |
| No log | 3.0 | 39 | 7.2082 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-wikitext2", "results": []}]}
|
imfiba1991/gpt2-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gpt2-wikitext2
==============
This model is a fine-tuned version of gpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 7.2082
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
image-classification
|
transformers
|
# Pokémon Classifier
# Intro
A fine-tuned version of ViT-base on a collected set of Pokémon images. You can read more about the model [here](https://medium.com/@imjeffhi4/tutorial-using-vision-transformer-vit-to-create-a-pok%C3%A9mon-classifier-cb3f26ff2c20).
# Using the model
```python
from transformers import ViTForImageClassification, ViTFeatureExtractor
from PIL import Image
import torch
# Loading in Model
device = "cuda" if torch.cuda.is_available() else "cpu"
model = ViTForImageClassification.from_pretrained("imjeffhi/pokemon_classifier").to(device)
feature_extractor = ViTFeatureExtractor.from_pretrained('imjeffhi/pokemon_classifier')
# Calling the model on a test image
img = Image.open('test.jpg')
extracted = feature_extractor(images=img, return_tensors='pt').to(device)
predicted_id = model(**extracted).logits.argmax(-1).item()
predicted_pokemon = model.config.id2label[predicted_id]
```
|
{}
|
imjeffhi/pokemon_classifier
| null |
[
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #vit #image-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Pokémon Classifier
# Intro
A fine-tuned version of ViT-base on a collected set of Pokémon images. You can read more about the model here.
# Using the model
|
[
"# Pokémon Classifier",
"# Intro\n\nA fine-tuned version of ViT-base on a collected set of Pokémon images. You can read more about the model here.",
"# Using the model"
] |
[
"TAGS\n#transformers #pytorch #vit #image-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Pokémon Classifier",
"# Intro\n\nA fine-tuned version of ViT-base on a collected set of Pokémon images. You can read more about the model here.",
"# Using the model"
] |
text-generation
|
transformers
|
# Pangu-Alpha 2.6B
## Model Description
PanGu-α is proposed by a joint technical team headed by PCNL. It was first released in [this repository](https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-Alpha). It is the first large-scale Chinese pre-trained language model, with 200 billion parameters, trained on 2048 Ascend processors using an automatic hybrid parallel training strategy. The whole training process was done on the “Peng Cheng Cloud Brain II” computing platform with the domestic deep learning framework called MindSpore. The PengCheng·PanGu-α pre-training model can support rich applications, has strong few-shot learning capabilities, and performs well on text generation tasks such as knowledge question answering, knowledge retrieval, knowledge reasoning, and reading comprehension.
This repository contains a PyTorch implementation of the PanGu model, with
2.6 billion parameters of pretrained weights (FP32 precision), converted from the original MindSpore checkpoint.
## Usage (Text Generation)
Currently the PanGu model is not supported by transformers,
so `trust_remote_code=True` is required to load the model implementation from this repo.
```python
from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("imone/pangu_2_6B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("imone/pangu_2_6B", trust_remote_code=True)
text_generator = TextGenerationPipeline(model, tokenizer)
# greedy search
print(text_generator("中国和美国和日本和法国和加拿大和澳大利亚的首都分别是哪里?", max_length=50))
```
Expected output:
```python
[{'generated_text': '中国和美国和日本和法国和加拿大和澳大利亚的首都分别是哪里?\n中国北京,美国华盛顿,日本东京,法国巴黎,加拿大多伦多,澳大利亚悉尼,新西兰奥克兰,澳大利亚墨尔本,新西兰奥克兰,'}]
```
|
{}
|
imone/pangu_2_6B
| null |
[
"transformers",
"pytorch",
"gpt_pangu",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt_pangu #text-generation #custom_code #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Pangu-Alpha 2.6B
## Model Description
PanGu-α is proposed by a joint technical team headed by PCNL. It was first released in this repository It is the first large-scale Chinese pre-trained language model with 200 billion parameters trained on 2048 Ascend processors using an automatic hybrid parallel training strategy. The whole training process is done on the “Peng Cheng Cloud Brain II” computing platform with the domestic deep learning framework called MindSpore. The PengCheng·PanGu-α pre-training model can support rich applications, has strong few-shot learning capabilities, and has outstanding performance in text generation tasks such as knowledge question and answer, knowledge retrieval, knowledge reasoning, and reading comprehension.
This repository contains PyTorch implementation of PanGu model, with
2.6 billion parameters pretrained weights (FP32 precision), converted from original MindSpore checkpoint.
## Usage (Text Generation)
Currently PanGu model is not supported by transformers,
so 'trust_remote_code=True' is required to load model implementation in this repo.
Expected output:
|
[
"# Pangu-Alpha 2.6B",
"## Model Description\n\nPanGu-α is proposed by a joint technical team headed by PCNL. It was first released in this repository It is the first large-scale Chinese pre-trained language model with 200 billion parameters trained on 2048 Ascend processors using an automatic hybrid parallel training strategy. The whole training process is done on the “Peng Cheng Cloud Brain II” computing platform with the domestic deep learning framework called MindSpore. The PengCheng·PanGu-α pre-training model can support rich applications, has strong few-shot learning capabilities, and has outstanding performance in text generation tasks such as knowledge question and answer, knowledge retrieval, knowledge reasoning, and reading comprehension.\n\nThis repository contains PyTorch implementation of PanGu model, with\n2.6 billion parameters pretrained weights (FP32 precision), converted from original MindSpore checkpoint.",
"## Usage (Text Generation)\n\nCurrently PanGu model is not supported by transformers, \nso 'trust_remote_code=True' is required to load model implementation in this repo.\n\n\n\nExpected output:"
] |
[
"TAGS\n#transformers #pytorch #gpt_pangu #text-generation #custom_code #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Pangu-Alpha 2.6B",
"## Model Description\n\nPanGu-α is proposed by a joint technical team headed by PCNL. It was first released in this repository It is the first large-scale Chinese pre-trained language model with 200 billion parameters trained on 2048 Ascend processors using an automatic hybrid parallel training strategy. The whole training process is done on the “Peng Cheng Cloud Brain II” computing platform with the domestic deep learning framework called MindSpore. The PengCheng·PanGu-α pre-training model can support rich applications, has strong few-shot learning capabilities, and has outstanding performance in text generation tasks such as knowledge question and answer, knowledge retrieval, knowledge reasoning, and reading comprehension.\n\nThis repository contains PyTorch implementation of PanGu model, with\n2.6 billion parameters pretrained weights (FP32 precision), converted from original MindSpore checkpoint.",
"## Usage (Text Generation)\n\nCurrently PanGu model is not supported by transformers, \nso 'trust_remote_code=True' is required to load model implementation in this repo.\n\n\n\nExpected output:"
] |
text-generation
|
transformers
|
GPT-2 model fine-tuned on a custom corpus of old Hindi songs (Hinglish) for the text-generation task (AI lyricist).

Languages:
- Hindi
- Hinglish
|
{}
|
impyadav/GPT2-FineTuned-Hinglish-Song-Generation
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
GPT-2 model fine-tuned on Custom old Hindi songs (Hinglish) for text-generation task (AI Lyricist)
language:
- Hindi
- Hinglish
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Doctor DialoGPT Model
|
{"tags": ["conversational"]}
|
imran2part/DialogGPT-small-Doctor
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Doctor DialoGPT Model
|
[
"# Doctor DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Doctor DialoGPT Model"
] |
text-generation
|
transformers
|
# DialoGPT Trained on MCU Dialogues
|
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
|
imrit1999/DialoGPT-small-MCU
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DialoGPT Trained on MCU Dialogues
|
[
"# DialoGPT Trained on MCU Dialogues"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DialoGPT Trained on MCU Dialogues"
] |
text-generation
|
transformers
|
### GPT 2 News
**Update 02 Jan 2022**: Fixed a mismatch between the tokenizer vocabulary and the `model.wte` embedding size.
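
The usual way to resolve a tokenizer vs. embedding-size mismatch like the one mentioned above is to resize the embedding matrix to the tokenizer's vocabulary. The snippet below is a generic sketch of that pattern (it assumes the checkpoint ships its tokenizer), not the exact code used for this fix.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic sketch: make the GPT-2 token embedding matrix (wte) match the tokenizer.
tokenizer = AutoTokenizer.from_pretrained("imthanhlv/gpt2news")
model = AutoModelForCausalLM.from_pretrained("imthanhlv/gpt2news")
if model.get_input_embeddings().num_embeddings != len(tokenizer):
    model.resize_token_embeddings(len(tokenizer))
```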
### BibTex
```
@article{thanh21gpt2news,
author = {Thanh V. Le},
title = {Pretrained GPT-2 on Vietnamese news},
journal = {https://huggingface.co/imthanhlv/gpt2news},
year = {2021},
}
```
|
{"language": "vi", "tags": ["gpt"], "widget": [{"text": "H\u00f4m qua nh\u1eefng nh\u00e0 khoa h\u1ecdc M\u1ef9 \u0111\u00e3 ph\u00e1t hi\u1ec7n ra lo\u00e0i c\u00e1 l\u1ee3n"}]}
|
imthanhlv/gpt2news
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"gpt",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"vi"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #gpt #vi #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
### GPT 2 News
Update 02 Jan 2022: Fixed mismatch tokenizer and URL size
### BibTex
|
[
"### GPT 2 News\nUpdate 02 Jan 2022: Fixed mismatch tokenizer and URL size",
"### BibTex"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #gpt #vi #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### GPT 2 News\nUpdate 02 Jan 2022: Fixed mismatch tokenizer and URL size",
"### BibTex"
] |
text2text-generation
|
transformers
|
# T5 Vietnamese pretrain on news corpus
|
{}
|
imthanhlv/t5vi
| null |
[
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# T5 Vietnamese pretrain on news corpus
|
[
"# T5 Vietnamese pretrain on news corpus"
] |
[
"TAGS\n#transformers #jax #tensorboard #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# T5 Vietnamese pretrain on news corpus"
] |
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased
This model is a fine-tuned version of [](https://huggingface.co/) on the jigsaw dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0393
- Precision Micro: 0.7758
- Recall Micro: 0.7858
- F1 Micro: 0.7808
- F2 Micro: 0.7838
- Precision Macro: 0.6349
- Recall Macro: 0.5972
- F1 Macro: 0.6105
- F2 Macro: 0.6015
- Overall Precision: 0.9841
- Overall Recall: 0.9841
- Overall F1: 0.9841
- Overall F2: 0.9841
- Overall Accuracy: 0.9841
- Matthews Corrcoef: 0.7725
- Aucroc Macro: 0.9897
- Aucroc Micro: 0.9920
- Accuracy Toxic: 0.9678
- F1 Toxic: 0.8295
- Accuracy Severe Toxic: 0.9899
- F1 Severe Toxic: 0.3313
- Accuracy Obscene: 0.9816
- F1 Obscene: 0.8338
- Accuracy Threat: 0.9974
- F1 Threat: 0.4545
- Accuracy Insult: 0.9763
- F1 Insult: 0.7662
- Accuracy Identity Hate: 0.9914
- F1 Identity Hate: 0.4480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Micro | Recall Micro | F1 Micro | F2 Micro | Precision Macro | Recall Macro | F1 Macro | F2 Macro | Overall Precision | Overall Recall | Overall F1 | Overall F2 | Overall Accuracy | Matthews Corrcoef | Aucroc Macro | Aucroc Micro | Accuracy Toxic | F1 Toxic | Accuracy Severe Toxic | F1 Severe Toxic | Accuracy Obscene | F1 Obscene | Accuracy Threat | F1 Threat | Accuracy Insult | F1 Insult | Accuracy Identity Hate | F1 Identity Hate |
|:-------------:|:-----:|:-----:|:---------------:|:---------------:|:------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:--------:|:-----------------:|:--------------:|:----------:|:----------:|:----------------:|:-----------------:|:------------:|:------------:|:--------------:|:--------:|:---------------------:|:---------------:|:----------------:|:----------:|:---------------:|:---------:|:---------------:|:---------:|:----------------------:|:----------------:|
| 0.0433 | 1.0 | 2659 | 0.0423 | 0.7607 | 0.7798 | 0.7702 | 0.7759 | 0.6398 | 0.5561 | 0.5585 | 0.5535 | 0.9832 | 0.9832 | 0.9832 | 0.9832 | 0.9832 | 0.7615 | 0.9887 | 0.9908 | 0.9671 | 0.8211 | 0.9878 | 0.4354 | 0.9805 | 0.8265 | 0.9974 | 0.2243 | 0.9746 | 0.7602 | 0.9918 | 0.2834 |
| 0.0366 | 2.0 | 5318 | 0.0393 | 0.7758 | 0.7858 | 0.7808 | 0.7838 | 0.6349 | 0.5972 | 0.6105 | 0.6015 | 0.9841 | 0.9841 | 0.9841 | 0.9841 | 0.9841 | 0.7725 | 0.9897 | 0.9920 | 0.9678 | 0.8295 | 0.9899 | 0.3313 | 0.9816 | 0.8338 | 0.9974 | 0.4545 | 0.9763 | 0.7662 | 0.9914 | 0.4480 |
| 0.0305 | 3.0 | 7977 | 0.0399 | 0.7608 | 0.8186 | 0.7887 | 0.8064 | 0.6621 | 0.6856 | 0.6715 | 0.6794 | 0.9842 | 0.9842 | 0.9842 | 0.9842 | 0.9842 | 0.7810 | 0.9897 | 0.9919 | 0.9662 | 0.8272 | 0.9892 | 0.4772 | 0.9815 | 0.8347 | 0.9977 | 0.5629 | 0.9772 | 0.7740 | 0.9931 | 0.5528 |
| 0.0263 | 4.0 | 10636 | 0.0435 | 0.7333 | 0.8336 | 0.7803 | 0.8114 | 0.6395 | 0.7039 | 0.6687 | 0.6890 | 0.9830 | 0.9830 | 0.9830 | 0.9830 | 0.9830 | 0.7732 | 0.9897 | 0.9912 | 0.9608 | 0.8083 | 0.9898 | 0.4791 | 0.9812 | 0.8319 | 0.9972 | 0.5368 | 0.9756 | 0.7700 | 0.9935 | 0.5861 |
| 0.0218 | 5.0 | 13295 | 0.0456 | 0.7480 | 0.8108 | 0.7781 | 0.7974 | 0.6661 | 0.6720 | 0.6662 | 0.6691 | 0.9833 | 0.9833 | 0.9833 | 0.9833 | 0.9833 | 0.7701 | 0.9890 | 0.9907 | 0.9612 | 0.8071 | 0.9894 | 0.4642 | 0.9823 | 0.8354 | 0.9977 | 0.5325 | 0.9754 | 0.7613 | 0.9936 | 0.5968 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
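
A minimal inference sketch for this multi-label toxicity classifier. It assumes the checkpoint loads with the standard `AutoModelForSequenceClassification` API and that the six Jigsaw labels are exposed through `model.config.id2label`; both points are assumptions, not verified details of this repository.

```python
# Hedged multi-label inference sketch (sigmoid per label, 0.5-style thresholding).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "imvladikon/bert-base-uncased-jigsaw"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("you are a wonderful person", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: each label gets an independent sigmoid probability.
probs = torch.sigmoid(logits)[0]
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```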
|
{"language": ["en"], "tags": ["generated_from_trainer"], "datasets": ["jigsaw"], "model_index": [{"name": "bert-base-uncased", "results": [{}]}]}
|
imvladikon/bert-base-uncased-jigsaw
| null |
[
"transformers",
"pytorch",
"bert",
"generated_from_trainer",
"en",
"dataset:jigsaw",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #generated_from_trainer #en #dataset-jigsaw #endpoints_compatible #region-us
|
bert-base-uncased
=================
This model is a fine-tuned version of [](URL on the jigsaw dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0393
* Precision Micro: 0.7758
* Recall Micro: 0.7858
* F1 Micro: 0.7808
* F2 Micro: 0.7838
* Precision Macro: 0.6349
* Recall Macro: 0.5972
* F1 Macro: 0.6105
* F2 Macro: 0.6015
* Overall Precision: 0.9841
* Overall Recall: 0.9841
* Overall F1: 0.9841
* Overall F2: 0.9841
* Overall Accuracy: 0.9841
* Matthews Corrcoef: 0.7725
* Aucroc Macro: 0.9897
* Aucroc Micro: 0.9920
* Accuracy Toxic: 0.9678
* F1 Toxic: 0.8295
* Accuracy Severe Toxic: 0.9899
* F1 Severe Toxic: 0.3313
* Accuracy Obscene: 0.9816
* F1 Obscene: 0.8338
* Accuracy Threat: 0.9974
* F1 Threat: 0.4545
* Accuracy Insult: 0.9763
* F1 Insult: 0.7662
* Accuracy Identity Hate: 0.9914
* F1 Identity Hate: 0.4480
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 24
* eval\_batch\_size: 12
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 48
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.8.2
* Pytorch 1.9.0+cu102
* Datasets 1.9.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 12\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 48\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #generated_from_trainer #en #dataset-jigsaw #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 12\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 48\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] |
null |
transformers
|
pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT)
```
@misc{ma2020charbert,
title={CharBERT: Character-aware Pre-trained Language Model},
author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu},
year={2020},
eprint={2011.01513},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["en"], "tags": ["language model"], "datasets": ["wikipedia"]}
|
imvladikon/charbert-bert-wiki
| null |
[
"transformers",
"pytorch",
"language model",
"en",
"dataset:wikipedia",
"arxiv:2011.01513",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2011.01513"
] |
[
"en"
] |
TAGS
#transformers #pytorch #language model #en #dataset-wikipedia #arxiv-2011.01513 #endpoints_compatible #region-us
|
pre-trained model from CharBERT: Character-aware Pre-trained Language Model
|
[] |
[
"TAGS\n#transformers #pytorch #language model #en #dataset-wikipedia #arxiv-2011.01513 #endpoints_compatible #region-us \n"
] |
null |
transformers
|
pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT)
```
@misc{ma2020charbert,
title={CharBERT: Character-aware Pre-trained Language Model},
author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu},
year={2020},
eprint={2011.01513},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["en"], "tags": ["language model"], "datasets": ["wikipedia"]}
|
imvladikon/charbert-roberta-wiki
| null |
[
"transformers",
"pytorch",
"language model",
"en",
"dataset:wikipedia",
"arxiv:2011.01513",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2011.01513"
] |
[
"en"
] |
TAGS
#transformers #pytorch #language model #en #dataset-wikipedia #arxiv-2011.01513 #endpoints_compatible #region-us
|
pre-trained model from CharBERT: Character-aware Pre-trained Language Model
|
[] |
[
"TAGS\n#transformers #pytorch #language model #en #dataset-wikipedia #arxiv-2011.01513 #endpoints_compatible #region-us \n"
] |
null |
transformers
|
Pretrained general_character_bert model
from the ['CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters' El Boukkouri H., et al., 2020](https://github.com/helboukkouri/character-bert)
```
@inproceedings{el-boukkouri-etal-2020-characterbert,
title = "{C}haracter{BERT}: Reconciling {ELM}o and {BERT} for Word-Level Open-Vocabulary Representations From Characters",
author = "El Boukkouri, Hicham and
Ferret, Olivier and
Lavergne, Thomas and
Noji, Hiroshi and
Zweigenbaum, Pierre and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year={2020},
eprint={2010.10392},
archivePrefix={arXiv},
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.609",
doi = "10.18653/v1/2020.coling-main.609",
pages = "6903--6915",
abstract = "Due to the compelling improvements brought by BERT, many recent representation models adopted the Transformer architecture as their main building block, consequently inheriting the wordpiece tokenization system despite it not being intrinsically linked to the notion of Transformers. While this system is thought to achieve a good balance between the flexibility of characters and the efficiency of full words, using predefined wordpiece vocabularies from the general domain is not always suitable, especially when building models for specialized domains (e.g., the medical domain). Moreover, adopting a wordpiece tokenization shifts the focus from the word level to the subword level, making the models conceptually more complex and arguably less convenient in practice. For these reasons, we propose CharacterBERT, a new variant of BERT that drops the wordpiece system altogether and uses a Character-CNN module instead to represent entire words by consulting their characters. We show that this new model improves the performance of BERT on a variety of medical domain tasks while at the same time producing robust, word-level, and open-vocabulary representations.",
}
```
|
{"language": ["en"], "tags": ["language model"], "datasets": ["wikipedia", "openwebtext"]}
|
imvladikon/general_character_bert
| null |
[
"transformers",
"pytorch",
"bert",
"language model",
"en",
"dataset:wikipedia",
"dataset:openwebtext",
"arxiv:2010.10392",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10392"
] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #language model #en #dataset-wikipedia #dataset-openwebtext #arxiv-2010.10392 #endpoints_compatible #region-us
|
Pretrained general_character_bert model
from the 'CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters' El Boukkouri H., et al., 2020
|
[] |
[
"TAGS\n#transformers #pytorch #bert #language model #en #dataset-wikipedia #dataset-openwebtext #arxiv-2010.10392 #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xlsr-53-hebrew
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on several downloaded YouTube samples.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "he", split="test[:2%]") # there is no common dataset for Hebrew, please, paste your data
processor = Wav2Vec2Processor.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")
model = Wav2Vec2ForCTC.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on some Hebrew test data
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "he", split="test") # there is no common dataset for Hebrew, please, paste your data
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")
model = Wav2Vec2ForCTC.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew").to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference and decode the greedy (argmax) predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
# Example Predictions
|
{"language": "he", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Hebrew XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"args": "he"}, "metrics": [{"type": "wer", "name": "Test WER"}]}]}]}
|
imvladikon/wav2vec2-large-xlsr-53-hebrew
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"he",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"he"
] |
TAGS
#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #he #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xlsr-53-hebrew
Fine-tuned facebook/wav2vec2-large-xlsr-53 on several downloaded YouTube samples.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on some Hebrew test data
Test Result:
# Example Predictions
|
[
"# wav2vec2-large-xlsr-53-hebrew\nFine-tuned facebook/wav2vec2-large-xlsr-53 on the several downloaded youtube samples.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on some Hebrew test data\n\nTest Result:",
"# Example Predictions"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #he #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xlsr-53-hebrew\nFine-tuned facebook/wav2vec2-large-xlsr-53 on the several downloaded youtube samples.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on some Hebrew test data\n\nTest Result:",
"# Example Predictions"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-hebrew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3533
- Wer: 0.2251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3587 | 0.47 | 400 | 1.1883 | 0.8392 |
| 1.8377 | 0.95 | 800 | 0.8831 | 0.6852 |
| 1.7118 | 1.42 | 1200 | 0.8031 | 0.6566 |
| 1.6741 | 1.89 | 1600 | 0.7518 | 0.6104 |
| 1.6163 | 2.36 | 2000 | 0.6888 | 0.5591 |
| 1.5782 | 2.84 | 2400 | 0.6580 | 0.5165 |
| 1.5548 | 3.31 | 2800 | 0.6506 | 0.5184 |
| 1.5249 | 3.78 | 3200 | 0.6198 | 0.5028 |
| 1.5078 | 4.26 | 3600 | 0.5992 | 0.4932 |
| 1.4836 | 4.73 | 4000 | 0.5705 | 0.4651 |
| 1.4505 | 5.2 | 4400 | 0.5489 | 0.4508 |
| 1.4481 | 5.67 | 4800 | 0.5577 | 0.4562 |
| 1.4136 | 6.15 | 5200 | 0.5452 | 0.4371 |
| 1.3861 | 6.62 | 5600 | 0.5101 | 0.4087 |
| 1.3772 | 7.09 | 6000 | 0.4933 | 0.3951 |
| 1.3478 | 7.56 | 6400 | 0.4849 | 0.3922 |
| 1.3394 | 8.04 | 6800 | 0.4805 | 0.3892 |
| 1.3095 | 8.51 | 7200 | 0.4839 | 0.3834 |
| 1.306 | 8.98 | 7600 | 0.4611 | 0.3587 |
| 1.2707 | 9.46 | 8000 | 0.4545 | 0.3730 |
| 1.2626 | 9.93 | 8400 | 0.4516 | 0.3524 |
| 1.2412 | 10.4 | 8800 | 0.4314 | 0.3310 |
| 1.2456 | 10.87 | 9200 | 0.4401 | 0.3459 |
| 1.2081 | 11.35 | 9600 | 0.4399 | 0.3356 |
| 1.1998 | 11.82 | 10000 | 0.4195 | 0.3215 |
| 1.1826 | 12.29 | 10400 | 0.4221 | 0.3178 |
| 1.1573 | 12.77 | 10800 | 0.4098 | 0.3084 |
| 1.1416 | 13.24 | 11200 | 0.4086 | 0.3119 |
| 1.1174 | 13.71 | 11600 | 0.3854 | 0.2910 |
| 1.1048 | 14.18 | 12000 | 0.3859 | 0.2824 |
| 1.0748 | 14.66 | 12400 | 0.3854 | 0.2757 |
| 1.0697 | 15.13 | 12800 | 0.3740 | 0.2724 |
| 1.0477 | 15.6 | 13200 | 0.3693 | 0.2643 |
| 1.0356 | 16.08 | 13600 | 0.3727 | 0.2561 |
| 1.0083 | 16.55 | 14000 | 0.3652 | 0.2501 |
| 1.0 | 17.02 | 14400 | 0.3641 | 0.2457 |
| 0.9779 | 17.49 | 14800 | 0.3568 | 0.2409 |
| 0.9596 | 17.97 | 15200 | 0.3558 | 0.2376 |
| 0.946 | 18.44 | 15600 | 0.3591 | 0.2311 |
| 0.9389 | 18.91 | 16000 | 0.3540 | 0.2283 |
| 0.9173 | 19.39 | 16400 | 0.3552 | 0.2265 |
| 0.9122 | 19.86 | 16800 | 0.3535 | 0.2250 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
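
A minimal transcription sketch, assuming this checkpoint works with the standard `automatic-speech-recognition` pipeline and 16 kHz audio; the file name is a placeholder.

```python
# Hedged usage sketch: plain CTC decoding through the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="imvladikon/wav2vec2-xls-r-1b-hebrew",
)

# "sample_hebrew.wav" is a placeholder path; the pipeline decodes the file and
# resamples it to the feature extractor's 16 kHz rate before transcription.
print(asr("sample_hebrew.wav")["text"])
```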
|
{"language": ["he"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "he", "generated_from_trainer", "hf-asr-leaderboard"], "base_model": "facebook/wav2vec2-xls-r-1b", "model-index": [{"name": "wav2vec2-xls-r-1b-hebrew", "results": []}]}
|
imvladikon/wav2vec2-xls-r-1b-hebrew
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"he",
"generated_from_trainer",
"hf-asr-leaderboard",
"base_model:facebook/wav2vec2-xls-r-1b",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"he"
] |
TAGS
#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #robust-speech-event #he #generated_from_trainer #hf-asr-leaderboard #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
wav2vec2-xls-r-1b-hebrew
========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3533
* Wer: 0.2251
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 6
* eval\_batch\_size: 6
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 24
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 400
* num\_epochs: 20.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 20.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #robust-speech-event #he #generated_from_trainer #hf-asr-leaderboard #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 20.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-hebrew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on private datasets in 2 stages: it was first fine-tuned on a small dataset of good samples; the obtained model was then fine-tuned on a large dataset combining the small good dataset, various samples from different sources, and an unlabeled dataset that was weakly labeled using the previously trained model (a sketch of this weak-labeling step is shown below, after the result summary).
Small dataset:
| split | size (GB) | n_samples | duration (hrs) |
|-------|-----------|-----------|----------------|
| train | 4.19 | 20306 | 28 |
| dev | 1.05 | 5076 | 7 |
Large dataset:
| split | size (GB) | n_samples | duration (hrs) |
|-------|-----------|-----------|----------------|
| train | 12.3 | 90777 | 69 |
| dev | 2.39 | 20246 | 14* |
(*weakly labeled data wasn't used in validation set)
After the first training, it achieves:
on small dataset
- Loss: 0.5438
- WER: 0.1773
on large dataset
- WER: 0.3811
after second training:
on small dataset
- WER: 0.1697
on large dataset
- Loss: 0.4502
- WER: 0.2318
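
The weak-labeling step mentioned above can be illustrated with a short sketch. This is not the script that was actually used, only an assumed illustration of how a stage-1 checkpoint could pseudo-label unlabeled audio; the model id and paths are placeholders.

```python
# Illustrative pseudo-labeling sketch (assumption: stage-1 checkpoint + standard ASR pipeline).
from pathlib import Path
from transformers import pipeline

# Placeholder id: the model obtained after the first (small, well-labeled) training stage.
asr = pipeline("automatic-speech-recognition", model="path/to/stage1-checkpoint")

weak_labels = {}
for wav in Path("unlabeled_audio").glob("*.wav"):
    # The transcription becomes a weak label for the second training stage.
    weak_labels[wav.name] = asr(str(wav))["text"]
```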
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
#### First training
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 3.15 | 1000 | 0.5203 | 0.4333 |
| 1.4284 | 6.31 | 2000 | 0.4816 | 0.3951 |
| 1.4284 | 9.46 | 3000 | 0.4315 | 0.3546 |
| 1.283 | 12.62 | 4000 | 0.4278 | 0.3404 |
| 1.283 | 15.77 | 5000 | 0.4090 | 0.3054 |
| 1.1777 | 18.93 | 6000 | 0.3893 | 0.3006 |
| 1.1777 | 22.08 | 7000 | 0.3968 | 0.2857 |
| 1.0994 | 25.24 | 8000 | 0.3892 | 0.2751 |
| 1.0994 | 28.39 | 9000 | 0.4061 | 0.2690 |
| 1.0323 | 31.54 | 10000 | 0.4114 | 0.2507 |
| 1.0323 | 34.7 | 11000 | 0.4021 | 0.2508 |
| 0.9623 | 37.85 | 12000 | 0.4032 | 0.2378 |
| 0.9623 | 41.01 | 13000 | 0.4148 | 0.2374 |
| 0.9077 | 44.16 | 14000 | 0.4350 | 0.2323 |
| 0.9077 | 47.32 | 15000 | 0.4515 | 0.2246 |
| 0.8573 | 50.47 | 16000 | 0.4474 | 0.2180 |
| 0.8573 | 53.63 | 17000 | 0.4649 | 0.2171 |
| 0.8083 | 56.78 | 18000 | 0.4455 | 0.2102 |
| 0.8083 | 59.94 | 19000 | 0.4587 | 0.2092 |
| 0.769 | 63.09 | 20000 | 0.4794 | 0.2012 |
| 0.769 | 66.25 | 21000 | 0.4845 | 0.2007 |
| 0.7308 | 69.4 | 22000 | 0.4937 | 0.2008 |
| 0.7308 | 72.55 | 23000 | 0.4920 | 0.1895 |
| 0.6927 | 75.71 | 24000 | 0.5179 | 0.1911 |
| 0.6927 | 78.86 | 25000 | 0.5202 | 0.1877 |
| 0.6622 | 82.02 | 26000 | 0.5266 | 0.1840 |
| 0.6622 | 85.17 | 27000 | 0.5351 | 0.1854 |
| 0.6315 | 88.33 | 28000 | 0.5373 | 0.1811 |
| 0.6315 | 91.48 | 29000 | 0.5331 | 0.1792 |
| 0.6075 | 94.64 | 30000 | 0.5390 | 0.1779 |
| 0.6075 | 97.79 | 31000 | 0.5459 | 0.1773 |
#### Second training
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.7 | 1000 | 0.5371 | 0.3811 |
| 1.3606 | 1.41 | 2000 | 0.5247 | 0.3902 |
| 1.3606 | 2.12 | 3000 | 0.5126 | 0.3859 |
| 1.3671 | 2.82 | 4000 | 0.5062 | 0.3828 |
| 1.3671 | 3.53 | 5000 | 0.4979 | 0.3672 |
| 1.3421 | 4.23 | 6000 | 0.4906 | 0.3816 |
| 1.3421 | 4.94 | 7000 | 0.4784 | 0.3651 |
| 1.328 | 5.64 | 8000 | 0.4810 | 0.3669 |
| 1.328 | 6.35 | 9000 | 0.4747 | 0.3597 |
| 1.3109 | 7.05 | 10000 | 0.4813 | 0.3808 |
| 1.3109 | 7.76 | 11000 | 0.4631 | 0.3561 |
| 1.2873 | 8.46 | 12000 | 0.4603 | 0.3431 |
| 1.2873 | 9.17 | 13000 | 0.4579 | 0.3533 |
| 1.2661 | 9.87 | 14000 | 0.4471 | 0.3365 |
| 1.2661 | 10.58 | 15000 | 0.4584 | 0.3437 |
| 1.249 | 11.28 | 16000 | 0.4461 | 0.3454 |
| 1.249 | 11.99 | 17000 | 0.4482 | 0.3367 |
| 1.2322 | 12.69 | 18000 | 0.4464 | 0.3335 |
| 1.2322 | 13.4 | 19000 | 0.4427 | 0.3454 |
| 1.22 | 14.1 | 20000 | 0.4440 | 0.3395 |
| 1.22 | 14.81 | 21000 | 0.4459 | 0.3378 |
| 1.2044 | 15.51 | 22000 | 0.4406 | 0.3199 |
| 1.2044 | 16.22 | 23000 | 0.4398 | 0.3155 |
| 1.1913 | 16.92 | 24000 | 0.4237 | 0.3150 |
| 1.1913 | 17.63 | 25000 | 0.4287 | 0.3279 |
| 1.1705 | 18.34 | 26000 | 0.4253 | 0.3103 |
| 1.1705 | 19.04 | 27000 | 0.4234 | 0.3098 |
| 1.1564 | 19.75 | 28000 | 0.4174 | 0.3076 |
| 1.1564 | 20.45 | 29000 | 0.4260 | 0.3160 |
| 1.1461 | 21.16 | 30000 | 0.4235 | 0.3036 |
| 1.1461 | 21.86 | 31000 | 0.4309 | 0.3055 |
| 1.1285 | 22.57 | 32000 | 0.4264 | 0.3006 |
| 1.1285 | 23.27 | 33000 | 0.4201 | 0.2880 |
| 1.1135 | 23.98 | 34000 | 0.4131 | 0.2975 |
| 1.1135 | 24.68 | 35000 | 0.4202 | 0.2849 |
| 1.0968 | 25.39 | 36000 | 0.4105 | 0.2888 |
| 1.0968 | 26.09 | 37000 | 0.4210 | 0.2834 |
| 1.087 | 26.8 | 38000 | 0.4123 | 0.2843 |
| 1.087 | 27.5 | 39000 | 0.4216 | 0.2803 |
| 1.0707 | 28.21 | 40000 | 0.4161 | 0.2787 |
| 1.0707 | 28.91 | 41000 | 0.4186 | 0.2740 |
| 1.0575 | 29.62 | 42000 | 0.4118 | 0.2845 |
| 1.0575 | 30.32 | 43000 | 0.4243 | 0.2773 |
| 1.0474 | 31.03 | 44000 | 0.4221 | 0.2707 |
| 1.0474 | 31.73 | 45000 | 0.4138 | 0.2700 |
| 1.0333 | 32.44 | 46000 | 0.4102 | 0.2638 |
| 1.0333 | 33.15 | 47000 | 0.4162 | 0.2650 |
| 1.0191 | 33.85 | 48000 | 0.4155 | 0.2636 |
| 1.0191 | 34.56 | 49000 | 0.4129 | 0.2656 |
| 1.0087 | 35.26 | 50000 | 0.4157 | 0.2632 |
| 1.0087 | 35.97 | 51000 | 0.4090 | 0.2654 |
| 0.9901 | 36.67 | 52000 | 0.4183 | 0.2587 |
| 0.9901 | 37.38 | 53000 | 0.4251 | 0.2648 |
| 0.9795 | 38.08 | 54000 | 0.4229 | 0.2555 |
| 0.9795 | 38.79 | 55000 | 0.4176 | 0.2546 |
| 0.9644 | 39.49 | 56000 | 0.4223 | 0.2513 |
| 0.9644 | 40.2 | 57000 | 0.4244 | 0.2530 |
| 0.9534 | 40.9 | 58000 | 0.4175 | 0.2538 |
| 0.9534 | 41.61 | 59000 | 0.4213 | 0.2505 |
| 0.9397 | 42.31 | 60000 | 0.4275 | 0.2565 |
| 0.9397 | 43.02 | 61000 | 0.4315 | 0.2528 |
| 0.9269 | 43.72 | 62000 | 0.4316 | 0.2501 |
| 0.9269 | 44.43 | 63000 | 0.4247 | 0.2471 |
| 0.9175 | 45.13 | 64000 | 0.4376 | 0.2469 |
| 0.9175 | 45.84 | 65000 | 0.4335 | 0.2450 |
| 0.9026 | 46.54 | 66000 | 0.4336 | 0.2452 |
| 0.9026 | 47.25 | 67000 | 0.4400 | 0.2427 |
| 0.8929 | 47.95 | 68000 | 0.4382 | 0.2429 |
| 0.8929 | 48.66 | 69000 | 0.4361 | 0.2415 |
| 0.8786 | 49.37 | 70000 | 0.4413 | 0.2398 |
| 0.8786 | 50.07 | 71000 | 0.4392 | 0.2415 |
| 0.8714 | 50.78 | 72000 | 0.4345 | 0.2406 |
| 0.8714 | 51.48 | 73000 | 0.4475 | 0.2402 |
| 0.8589 | 52.19 | 74000 | 0.4473 | 0.2374 |
| 0.8589 | 52.89 | 75000 | 0.4457 | 0.2357 |
| 0.8493 | 53.6 | 76000 | 0.4462 | 0.2366 |
| 0.8493 | 54.3 | 77000 | 0.4494 | 0.2356 |
| 0.8395 | 55.01 | 78000 | 0.4472 | 0.2352 |
| 0.8395 | 55.71 | 79000 | 0.4490 | 0.2339 |
| 0.8295 | 56.42 | 80000 | 0.4489 | 0.2318 |
| 0.8295 | 57.12 | 81000 | 0.4469 | 0.2320 |
| 0.8225 | 57.83 | 82000 | 0.4478 | 0.2321 |
| 0.8225 | 58.53 | 83000 | 0.4525 | 0.2326 |
| 0.816 | 59.24 | 84000 | 0.4532 | 0.2316 |
| 0.816 | 59.94 | 85000 | 0.4502 | 0.2318 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["he"], "tags": ["automatic-speech-recognition", "generated_from_trainer", "he", "hf-asr-leaderboard", "robust-speech-event"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-xls-r-300m-hebrew", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Custom Dataset", "type": "custom", "args": "he"}, "metrics": [{"type": "wer", "value": 23.18, "name": "Test WER"}]}]}]}
|
imvladikon/wav2vec2-xls-r-300m-hebrew
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"he",
"hf-asr-leaderboard",
"robust-speech-event",
"base_model:facebook/wav2vec2-xls-r-300m",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"he"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #he #hf-asr-leaderboard #robust-speech-event #base_model-facebook/wav2vec2-xls-r-300m #model-index #endpoints_compatible #has_space #region-us
|
wav2vec2-xls-r-300m-hebrew
==========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on private datasets in 2 stages: it was first fine-tuned on a small dataset of good samples; the obtained model was then fine-tuned on a large dataset combining the small good dataset, various samples from different sources, and an unlabeled dataset that was weakly labeled using the previously trained model.
Small dataset:
Large dataset:
After the first training, it achieves:
on small dataset
* Loss: 0.5438
* WER: 0.1773
on large dataset
* WER: 0.3811
after second training:
on small dataset
* WER: 0.1697
on large dataset
* Loss: 0.4502
* WER: 0.2318
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
#### First training
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 2
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
Training results
#### Second training
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 2
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 60.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters",
"#### First training\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP\n\n\nTraining results",
"#### Second training\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 60.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #he #hf-asr-leaderboard #robust-speech-event #base_model-facebook/wav2vec2-xls-r-300m #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters",
"#### First training\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP\n\n\nTraining results",
"#### Second training\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 60.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-xls-r-300m-lm-hebrew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset,
with an n-gram language model added following [Boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram).
## Usage
check package: https://github.com/imvladikon/wav2vec2-hebrew
or use transformers pipeline:
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "imvladikon/wav2vec2-xls-r-300m-lm-hebrew"
sample_iter = iter(load_dataset("google/fleurs", "he_il", split="test", streaming=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), sample["audio"]["sampling_rate"], 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
print(transcription)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
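
These hyperparameters correspond roughly to the `TrainingArguments` sketch below; it is an approximation for orientation only, not the exact training script that was used.

```python
# Approximate TrainingArguments mirroring the listed hyperparameters (not the original script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-lm-hebrew",
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective train batch size of 128
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```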
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["he"], "license": "apache-2.0", "tags": ["generated_from_trainer", "he", "robust-speech-event"], "datasets": ["imvladikon/hebrew_speech_kan", "imvladikon/hebrew_speech_coursera"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-xls-r-300m-lm-hebrew", "results": []}]}
|
imvladikon/wav2vec2-xls-r-300m-lm-hebrew
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"he",
"robust-speech-event",
"dataset:imvladikon/hebrew_speech_kan",
"dataset:imvladikon/hebrew_speech_coursera",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"he"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #he #robust-speech-event #dataset-imvladikon/hebrew_speech_kan #dataset-imvladikon/hebrew_speech_coursera #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-xls-r-300m-lm-hebrew
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset,
with an n-gram language model added following Boosting Wav2Vec2 with n-grams in Transformers.
## Usage
check package: URL
or use transformers pipeline:
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
[
"# wav2vec2-xls-r-300m-lm-hebrew\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset\nwith adding ngram models according to Boosting Wav2Vec2 with n-grams in Transformers",
"## Usage\n\ncheck package: URL \n\nor use transformers pipeline:",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 64\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 100\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #he #robust-speech-event #dataset-imvladikon/hebrew_speech_kan #dataset-imvladikon/hebrew_speech_coursera #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-xls-r-300m-lm-hebrew\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset\nwith adding ngram models according to Boosting Wav2Vec2 with n-grams in Transformers",
"## Usage\n\ncheck package: URL \n\nor use transformers pipeline:",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 64\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 100\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16492731
## Validation Metrics
- Loss: 0.21610039472579956
- Accuracy: 0.9155366722657816
- Precision: 0.9530714194995978
- Recall: 0.944871149164778
- AUC: 0.9553238723676906
- F1: 0.9489535692456846
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/imzachjohnson/autonlp-spinner-check-16492731
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["imzachjohnson/autonlp-data-spinner-check"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
imzachjohnson/autonlp-spinner-check-16492731
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:imzachjohnson/autonlp-data-spinner-check",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-imzachjohnson/autonlp-data-spinner-check #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16492731
## Validation Metrics
- Loss: 0.21610039472579956
- Accuracy: 0.9155366722657816
- Precision: 0.9530714194995978
- Recall: 0.944871149164778
- AUC: 0.9553238723676906
- F1: 0.9489535692456846
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 16492731",
"## Validation Metrics\n\n- Loss: 0.21610039472579956\n- Accuracy: 0.9155366722657816\n- Precision: 0.9530714194995978\n- Recall: 0.944871149164778\n- AUC: 0.9553238723676906\n- F1: 0.9489535692456846",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-imzachjohnson/autonlp-data-spinner-check #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 16492731",
"## Validation Metrics\n\n- Loss: 0.21610039472579956\n- Accuracy: 0.9155366722657816\n- Precision: 0.9530714194995978\n- Recall: 0.944871149164778\n- AUC: 0.9553238723676906\n- F1: 0.9489535692456846",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
fill-mask
|
transformers
|
# BERTino: an Italian DistilBERT model
This repository hosts BERTino, an Italian DistilBERT model pre-trained by
[indigo.ai](https://indigo.ai/en/)
on a large general-domain Italian corpus. BERTino is task-agnostic and can be
fine-tuned for every downstream task.
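
A minimal usage sketch, assuming the checkpoint works with the standard `fill-mask` pipeline:

```python
# Hedged fill-mask sketch; the example sentence means "I go to the [MASK] to do the shopping".
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="indigo-ai/BERTino")

for prediction in fill_mask("Vado al [MASK] a fare la spesa"):
    print(prediction["token_str"], round(prediction["score"], 3))
```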
### Corpus
The pre-training corpus that we used is the union of the
[Paisa](https://www.corpusitaliano.it/) and
[ItWaC](https://corpora.dipintra.it/public/run.cgi/corp_info?corpname=itwac_full)
corpora. The final corpus counts 14 million sentences, for a total of 12 GB
of text.
### Downstream Results
To validate the pre-training that we conducted, we evaluated BERTino on the
[Italian ParTUT](https://universaldependencies.org/treebanks/it_partut/index.html),
[Italian ISDT](https://universaldependencies.org/treebanks/it_isdt/index.html),
[Italian WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500)
and multi-class sentence classification tasks. For comparison, we report results
obtained by the [teacher model](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased)
fine-tuned on the same tasks for the same number of epochs.
**Italian ISDT:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9801 | 9m, 4s | 3s |
| Teacher | 0,983 | 16m, 28s | 5s |
**Italian ParTUT:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9268 | 1m, 18s | 1s |
| Teacher | 0,9688 | 2m, 18s | 1s |
**Italian WikiNER:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9038 | 35m, 35s | 3m, 1s |
| Teacher | 0,9178 | 67m, 8s | 5m, 16s |
**Multi-class sentence classification:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,7788 | 4m, 40s | 6s |
| Teacher | 0,7986 | 8m, 52s | 9s |
|
{"language": "it", "license": "mit", "tags": ["DISTILbert", "Italian"], "widget": [{"text": "Vado al [MASK] a fare la spesa"}, {"text": "Vado al parco a guardare le [MASK]"}, {"text": "Il cielo \u00e8 [MASK] di stelle."}]}
|
indigo-ai/BERTino
| null |
[
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"DISTILbert",
"Italian",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #tf #distilbert #fill-mask #DISTILbert #Italian #it #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
BERTino: an Italian DistilBERT model
====================================
This repository hosts BERTino, an Italian DistilBERT model pre-trained by
URL
on a large general-domain Italian corpus. BERTino is task-agnostic and can be
fine-tuned for every downstream task.
### Corpus
The pre-training corpus that we used is the union of the
Paisa and
ItWaC
corpora. The final corpus counts 14 million sentences, for a total of 12 GB
of text.
### Downstream Results
To validate the pre-training that we conducted, we evaluated BERTino on the
Italian ParTUT,
Italian ISDT,
Italian WikiNER
and multi-class sentence classification tasks. For comparison, we report results
obtained by the teacher model
fine-tuned on the same tasks for the same number of epochs.
Italian ISDT:
Italian ParTUT:
Italian WikiNER:
Multi-class sentence classification:
|
[
"### Corpus\n\n\nThe pre-training corpus that we used is the union of the\nPaisa and\nItWaC\ncorpora. The final corpus counts 14 millions of sentences for a total of 12 GB\nof text.",
"### Downstream Results\n\n\nTo validate the pre-training that we conducted, we evaluated BERTino on the\nItalian ParTUT,\nItalian ISDT,\nItalian WikiNER\nand multi-class sentence classification tasks. We report for comparison results\nobtained by the teacher model\nfine-tuned in the same tasks and for the same number of epochs.\n\n\nItalian ISDT:\n\n\n\nItalian ParTUT:\n\n\n\nItalian WikiNER:\n\n\n\nMulti-class sentence classification:"
] |
[
"TAGS\n#transformers #pytorch #tf #distilbert #fill-mask #DISTILbert #Italian #it #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Corpus\n\n\nThe pre-training corpus that we used is the union of the\nPaisa and\nItWaC\ncorpora. The final corpus counts 14 millions of sentences for a total of 12 GB\nof text.",
"### Downstream Results\n\n\nTo validate the pre-training that we conducted, we evaluated BERTino on the\nItalian ParTUT,\nItalian ISDT,\nItalian WikiNER\nand multi-class sentence classification tasks. We report for comparison results\nobtained by the teacher model\nfine-tuned in the same tasks and for the same number of epochs.\n\n\nItalian ISDT:\n\n\n\nItalian ParTUT:\n\n\n\nItalian WikiNER:\n\n\n\nMulti-class sentence classification:"
] |
text2text-generation
|
transformers
|
# IndoBART-v2 Model
[IndoBART-v2](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
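
A hedged loading sketch, assuming the weights load as an MBart-style sequence-to-sequence model (as the repository tags suggest). The `AutoTokenizer` call is an assumption: the official IndoNLG resources may provide a dedicated tokenizer that should be preferred for faithful preprocessing.

```python
# Hedged loading sketch; the tokenizer handling is an assumption, not a documented API of this repo.
from transformers import AutoTokenizer, MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("indobenchmark/indobart-v2")
tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobart-v2")  # may need the IndoNLG toolkit tokenizer instead

inputs = tokenizer("aku pergi ke toko obat membeli obat", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```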
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indobart-v2` | 132M | Indo4B-Plus (26 GB of text) |
## Authors
<b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
{"language": "id", "license": "mit", "tags": ["indogpt", "indobenchmark", "indonlg"], "datasets": ["Indo4B+"], "inference": false}
|
indobenchmark/indobart-v2
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"indogpt",
"indobenchmark",
"indonlg",
"id",
"arxiv:2104.08200",
"license:mit",
"autotrain_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08200"
] |
[
"id"
] |
TAGS
#transformers #pytorch #mbart #text2text-generation #indogpt #indobenchmark #indonlg #id #arxiv-2104.08200 #license-mit #autotrain_compatible #has_space #region-us
|
IndoBART-v2 Model
=================
IndoBART-v2 is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
All Pre-trained Models
----------------------
Model: 'indobenchmark/indobart-v2', #params: 132M, Training data: Indo4B-Plus (26 GB of text)
Authors
-------
**IndoBART** was trained and evaluated by Samuel Cahyawijaya\*, Genta Indra Winata\*, Bryan Wilie\*, Karissa Vincentio\*, Xiaohong Li\*, Adhiguna Kuncoro\*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
If you use our work, please cite:
|
[] |
[
"TAGS\n#transformers #pytorch #mbart #text2text-generation #indogpt #indobenchmark #indonlg #id #arxiv-2104.08200 #license-mit #autotrain_compatible #has_space #region-us \n"
] |
text2text-generation
|
transformers
|
# IndoBART Model
[IndoBART](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indobart` | 132M | Indo4B-Plus (23.79 GB of text) |
## Authors
<b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
{"language": "id", "license": "mit", "tags": ["indogpt", "indobenchmark", "indonlg"], "datasets": ["Indo4B+"], "inference": false}
|
indobenchmark/indobart
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"indogpt",
"indobenchmark",
"indonlg",
"id",
"arxiv:2104.08200",
"license:mit",
"autotrain_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08200"
] |
[
"id"
] |
TAGS
#transformers #pytorch #mbart #text2text-generation #indogpt #indobenchmark #indonlg #id #arxiv-2104.08200 #license-mit #autotrain_compatible #has_space #region-us
|
IndoBART Model
==============
IndoBART is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
All Pre-trained Models
----------------------
Model: 'indobenchmark/indobart', #params: 132M, Training data: Indo4B-Plus (23.79 GB of text)
Authors
-------
**IndoBART** was trained and evaluated by Samuel Cahyawijaya\*, Genta Indra Winata\*, Bryan Wilie\*, Karissa Vincentio\*, Xiaohong Li\*, Adhiguna Kuncoro\*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
If you use our work, please cite:
|
[] |
[
"TAGS\n#transformers #pytorch #mbart #text2text-generation #indogpt #indobenchmark #indonlg #id #arxiv-2104.08200 #license-mit #autotrain_compatible #has_space #region-us \n"
] |
feature-extraction
|
transformers
|
# IndoBERT Base Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-base-p1")
```
### Extract contextual representation
```python
import torch
# encode an example sentence and sum the last hidden state as a quick check
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1, -1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indobenchmark", "indonlu"], "datasets": ["Indo4B"], "inference": false}
|
indobenchmark/indobert-base-p1
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.05387"
] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #jax #bert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #has_space #region-us
|
IndoBERT Base Model (phase1 - uncased)
======================================
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
All Pre-trained Models
----------------------
How to use
----------
### Load model and tokenizer
### Extract contextual representation
Authors
-------
**IndoBERT** was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
If you use our work, please cite:
|
[
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #has_space #region-us \n",
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
feature-extraction
|
transformers
|
# IndoBERT Base Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-base-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-base-p2")
```
### Extract contextual representation
```python
import torch
# encode an example sentence and sum the last hidden state as a quick check
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1, -1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indobenchmark", "indonlu"], "datasets": ["Indo4B"], "inference": false}
|
indobenchmark/indobert-base-p2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.05387"
] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #jax #bert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #has_space #region-us
|
IndoBERT Base Model (phase2 - uncased)
======================================
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
All Pre-trained Models
----------------------
How to use
----------
### Load model and tokenizer
### Extract contextual representation
Authors
-------
**IndoBERT** was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
If you use our work, please cite:
|
[
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #has_space #region-us \n",
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
feature-extraction
|
transformers
|
# IndoBERT Large Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-large-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-large-p1")
```
### Extract contextual representation
```python
import torch
# encode an example sentence and sum the last hidden state as a quick check
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1, -1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indobenchmark", "indonlu"], "datasets": ["Indo4B"], "inference": false}
|
indobenchmark/indobert-large-p1
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.05387"
] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #jax #bert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us
|
IndoBERT Large Model (phase1 - uncased)
=======================================
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
All Pre-trained Models
----------------------
How to use
----------
### Load model and tokenizer
### Extract contextual representation
Authors
-------
**IndoBERT** was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
If you use our work, please cite:
|
[
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us \n",
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
feature-extraction
|
transformers
|
# IndoBERT Large Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-large-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-large-p2")
```
### Extract contextual representation
```python
import torch
# encode an example sentence and sum the last hidden state as a quick check
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1, -1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indobenchmark", "indonlu"], "datasets": ["Indo4B"], "inference": false}
|
indobenchmark/indobert-large-p2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.05387"
] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #jax #bert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us
|
IndoBERT Large Model (phase2 - uncased)
=======================================
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
All Pre-trained Models
----------------------
How to use
----------
### Load model and tokenizer
### Extract contextual representation
Authors
-------
**IndoBERT** was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
If you use our work, please cite:
|
[
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us \n",
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
feature-extraction
|
transformers
|
# IndoBERT-Lite Base Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p1")
```
### Extract contextual representation
```python
import torch
# encode an example sentence and sum the last hidden state as a quick check
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1, -1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indobenchmark", "indonlu"], "datasets": ["Indo4B"], "inference": false}
|
indobenchmark/indobert-lite-base-p1
| null |
[
"transformers",
"pytorch",
"tf",
"albert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.05387"
] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #albert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us
|
IndoBERT-Lite Base Model (phase1 - uncased)
===========================================
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
All Pre-trained Models
----------------------
How to use
----------
### Load model and tokenizer
### Extract contextual representation
Authors
-------
**IndoBERT** was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
If you use our work, please cite:
|
[
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #tf #albert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us \n",
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
feature-extraction
|
transformers
|
# IndoBERT-Lite Base Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p2")
```
### Extract contextual representation
```python
import torch
# encode an example sentence and sum the last hidden state as a quick check
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1, -1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indobenchmark", "indonlu"], "datasets": ["Indo4B"], "inference": false}
|
indobenchmark/indobert-lite-base-p2
| null |
[
"transformers",
"pytorch",
"tf",
"albert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.05387"
] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #albert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us
|
IndoBERT-Lite Base Model (phase2 - uncased)
===========================================
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
All Pre-trained Models
----------------------
How to use
----------
### Load model and tokenizer
### Extract contextual representation
Authors
-------
**IndoBERT** was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
If you use our work, please cite:
|
[
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #tf #albert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us \n",
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
feature-extraction
|
transformers
|
# IndoBERT-Lite Large Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p1")
```
### Extract contextual representation
```python
import torch
# encode an example sentence and sum the last hidden state as a quick check
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1, -1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indobenchmark", "indonlu"], "datasets": ["Indo4B"], "inference": false}
|
indobenchmark/indobert-lite-large-p1
| null |
[
"transformers",
"pytorch",
"tf",
"albert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.05387"
] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #albert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us
|
IndoBERT-Lite Large Model (phase1 - uncased)
============================================
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
All Pre-trained Models
----------------------
How to use
----------
### Load model and tokenizer
### Extract contextual representation
Authors
-------
**IndoBERT** was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
If you use our work, please cite:
|
[
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #tf #albert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us \n",
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
feature-extraction
|
transformers
|
# IndoBERT-Lite Large Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p2")
```
### Extract contextual representation
```python
import torch
# encode an example sentence and sum the last hidden state as a quick check
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1, -1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indobenchmark", "indonlu"], "datasets": ["Indo4B"], "inference": false}
|
indobenchmark/indobert-lite-large-p2
| null |
[
"transformers",
"pytorch",
"tf",
"albert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.05387"
] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #albert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us
|
IndoBERT-Lite Large Model (phase2 - uncased)
============================================
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
All Pre-trained Models
----------------------
How to use
----------
### Load model and tokenizer
### Extract contextual representation
Authors
-------
**IndoBERT** was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
If you use our work, please cite:
|
[
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #tf #albert #feature-extraction #indobert #indobenchmark #indonlu #id #dataset-Indo4B #arxiv-2009.05387 #license-mit #region-us \n",
"### Load model and tokenizer",
"### Extract contextual representation\n\n\nAuthors\n-------\n\n\n**IndoBERT** was trained and evaluated by Bryan Wilie\\*, Karissa Vincentio\\*, Genta Indra Winata\\*, Samuel Cahyawijaya\\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.\n\n\nIf you use our work, please cite:"
] |
text-generation
|
transformers
|
# IndoGPT Model
[IndoGPT](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the GPT model. The pretrained model is trained using the GPT training objective.
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indogpt` | 117M | Indo4B-Plus (23.79 GB of text) |
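## How to use
The original card does not include a usage snippet, so the following is only a minimal sketch. It assumes the checkpoint loads with the generic causal-LM Auto classes; the IndoNLG project provides its own tokenizer wrapper for this model, so the plain `AutoTokenizer` call is an assumption.
```python
# Minimal sketch, not from the original card; tokenizer handling is an assumption.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indogpt")
model = AutoModelForCausalLM.from_pretrained("indobenchmark/indogpt")

inputs = tokenizer("aku adalah anak", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```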
## Authors
<b>IndoGPT</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
{"language": "id", "license": "mit", "tags": ["indogpt", "indobenchmark", "indonlg"], "datasets": ["Indo4B+"], "inference": false}
|
indobenchmark/indogpt
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"indogpt",
"indobenchmark",
"indonlg",
"id",
"arxiv:2104.08200",
"license:mit",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08200"
] |
[
"id"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #indogpt #indobenchmark #indonlg #id #arxiv-2104.08200 #license-mit #autotrain_compatible #has_space #text-generation-inference #region-us
|
IndoGPT Model
=============
IndoGPT is a state-of-the-art language model for Indonesian based on the GPT model. The pretrained model is trained using the GPT training objective.
All Pre-trained Models
----------------------
Model: 'indobenchmark/indogpt', #params: 117M, Training data: Indo4B-Plus (23.79 GB of text)
Authors
-------
**IndoGPT** was trained and evaluated by Samuel Cahyawijaya\*, Genta Indra Winata\*, Bryan Wilie\*, Karissa Vincentio\*, Xiaohong Li\*, Adhiguna Kuncoro\*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
If you use our work, please cite:
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #indogpt #indobenchmark #indonlg #id #arxiv-2104.08200 #license-mit #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
fill-mask
|
transformers
|
## About
[IndoBERT](https://arxiv.org/pdf/2011.00677.pdf) is the Indonesian version of the BERT model. We train the model using over 220M words, aggregated from three main sources:
* Indonesian Wikipedia (74M words)
* news articles from Kompas, Tempo (Tala et al., 2003), and Liputan6 (55M words in total)
* an Indonesian Web Corpus (Medved and Suchomel, 2017) (90M words).
We trained the model for 2.4M steps (180 epochs) with the final perplexity over the development set being <b>3.97</b> (similar to English BERT-base).
This <b>IndoBERT</b> was used to examine IndoLEM - an Indonesian benchmark that comprises seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse.
| Task | Metric | Bi-LSTM | mBERT | MalayBERT | IndoBERT |
| ---- | ---- | ---- | ---- | ---- | ---- |
| POS Tagging | Acc | 95.4 | <b>96.8</b> | <b>96.8</b> | <b>96.8</b> |
| NER UGM | F1| 70.9 | 71.6 | 73.2 | <b>74.9</b> |
| NER UI | F1 | 82.2 | 82.2 | 87.4 | <b>90.1</b> |
| Dep. Parsing (UD-Indo-GSD) | UAS/LAS | 85.25/80.35 | 86.85/81.78 | 86.99/81.87 | <b>87.12</b>/<b>82.32</b> |
| Dep. Parsing (UD-Indo-PUD) | UAS/LAS | 84.04/79.01 | <b>90.58</b>/<b>85.44</b> | 88.91/83.56 | 89.23/83.95 |
| Sentiment Analysis | F1 | 71.62 | 76.58 | 82.02 | <b>84.13</b> |
| Summarization | R1/R2/RL | 67.96/61.65/67.24 | 68.40/61.66/67.67 | 68.44/61.38/67.71 | <b>69.93</b>/<b>62.86</b>/<b>69.21</b> |
| Next Tweet Prediction | Acc | 73.6 | 92.4 | 93.1 | <b>93.7</b> |
| Tweet Ordering | Spearman corr. | 0.45 | 0.53 | 0.51 | <b>0.59</b> |
The paper is published at the 28th COLING 2020. Please refer to https://indolem.github.io for more details about the benchmarks.
## How to use
### Load model and tokenizer (tested with transformers==3.5.1)
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")
model = AutoModel.from_pretrained("indolem/indobert-base-uncased")
```
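### Predict masked tokens (illustrative)
This sketch is not part of the original card. Because the checkpoint is a masked language model, it can also be exercised through the `fill-mask` pipeline; the example sentence below is made up for illustration.
```python
from transformers import pipeline

# fill-mask pipeline over the same checkpoint (illustrative sketch)
unmasker = pipeline("fill-mask",
                    model="indolem/indobert-base-uncased",
                    tokenizer="indolem/indobert-base-uncased")
print(unmasker("aku pergi ke [MASK] untuk membeli buku"))
```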
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{koto2020indolem,
title={IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP},
author={Fajri Koto and Afshin Rahimi and Jey Han Lau and Timothy Baldwin},
booktitle={Proceedings of the 28th COLING},
year={2020}
}
```
|
{"language": "id", "license": "mit", "tags": ["indobert", "indolem"], "inference": false}
|
indolem/indobert-base-uncased
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"indobert",
"indolem",
"id",
"arxiv:2011.00677",
"license:mit",
"autotrain_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2011.00677"
] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #indobert #indolem #id #arxiv-2011.00677 #license-mit #autotrain_compatible #has_space #region-us
|
About
-----
IndoBERT is the Indonesian version of BERT model. We train the model using over 220M words, aggregated from three main sources:
* Indonesian Wikipedia (74M words)
* news articles from Kompas, Tempo (Tala et al., 2003), and Liputan6 (55M words in total)
* an Indonesian Web Corpus (Medved and Suchomel, 2017) (90M words).
We trained the model for 2.4M steps (180 epochs) with the final perplexity over the development set being **3.97** (similar to English BERT-base).
This **IndoBERT** was used to examine IndoLEM - an Indonesian benchmark that comprises seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse.
The paper is published at the 28th COLING 2020. Please refer to URL for more details about the benchmarks.
How to use
----------
### Load model and tokenizer (tested with transformers==3.5.1)
If you use our work, please cite:
|
[
"### Load model and tokenizer (tested with transformers==3.5.1)\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #indobert #indolem #id #arxiv-2011.00677 #license-mit #autotrain_compatible #has_space #region-us \n",
"### Load model and tokenizer (tested with transformers==3.5.1)\n\n\nIf you use our work, please cite:"
] |
fill-mask
|
transformers
|
# IndoBERTweet 🐦
## 1. Paper
Fajri Koto, Jey Han Lau, and Timothy Baldwin. [_IndoBERTweet: A Pretrained Language Model for Indonesian Twitter
with Effective Domain-Specific Vocabulary Initialization_](https://arxiv.org/pdf/2109.04607.pdf).
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (**EMNLP 2021**), Dominican Republic (virtual).
## 2. About
[IndoBERTweet](https://github.com/indolem/IndoBERTweet) is the first large-scale pretrained model for Indonesian Twitter
that is trained by extending a monolingually trained Indonesian BERT model with additive domain-specific vocabulary.
In this paper, we show that initializing domain-specific vocabulary with average-pooling of BERT subword embeddings is more efficient than pretraining from scratch, and more effective than initializing based on word2vec projections.
## 3. Pretraining Data
We crawl Indonesian tweets over a 1-year period using the official Twitter API, from December 2019 to December 2020, with 60 keywords covering 4 main topics: economy, health, education, and government. We obtain a total of **409M word tokens**, two times larger than the training data used to pretrain [IndoBERT](https://aclanthology.org/2020.coling-main.66.pdf). Due to Twitter policy, this pretraining data will not be released to the public.
## 4. How to use
Load model and tokenizer (tested with transformers==3.5.1)
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("indolem/indobertweet-base-uncased")
model = AutoModel.from_pretrained("indolem/indobertweet-base-uncased")
```
**Preprocessing Steps** (a minimal sketch follows this list):
* lower-case all words
* converting user mentions and URLs into @USER and HTTPURL, respectively
* translating emoticons into text using the [emoji package](https://pypi.org/project/emoji/).
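A minimal sketch of these preprocessing steps (the exact regular expressions are assumptions, not taken from the paper):
```python
import re
import emoji  # pip install emoji

def preprocess_tweet(text: str) -> str:
    # lower-case all words
    text = text.lower()
    # convert user mentions and URLs into @USER and HTTPURL
    text = re.sub(r"@\w+", "@USER", text)
    text = re.sub(r"https?://\S+", "HTTPURL", text)
    # translate emoticons/emojis into text using the emoji package
    return emoji.demojize(text)

print(preprocess_tweet("halo @budi cek https://contoh.id 😊"))
```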
## 5. Results over 7 Indonesian Twitter Datasets
<table>
<col>
<colgroup span="2"></colgroup>
<colgroup span="2"></colgroup>
<tr>
<th rowspan="2">Models</th>
<th colspan="2" scope="colgroup">Sentiment</th>
<th colspan="1" scope="colgroup">Emotion</th>
<th colspan="2" scope="colgroup">Hate Speech</th>
<th colspan="2" scope="colgroup">NER</th>
<th rowspan="2" scope="colgroup">Average</th>
</tr>
<tr>
<th scope="col">IndoLEM</th>
<th scope="col">SmSA</th>
<th scope="col">EmoT</th>
<th scope="col">HS1</th>
<th scope="col">HS2</th>
<th scope="col">Formal</th>
<th scope="col">Informal</th>
</tr>
<tr>
<td scope="row">mBERT</td>
<td>76.6</td>
<td>84.7</td>
<td>67.5</td>
<td>85.1</td>
<td>75.1</td>
<td>85.2</td>
<td>83.2</td>
<td>79.6</td>
</tr>
<tr>
<td scope="row">malayBERT</td>
<td>82.0</td>
<td>84.1</td>
<td>74.2</td>
<td>85.0</td>
<td>81.9</td>
<td>81.9</td>
<td>81.3</td>
<td>81.5</td>
</tr>
<tr>
<td scope="row">IndoBERT (Wilie et al., 2020)</td>
<td>84.1</td>
<td>88.7</td>
<td>73.3</td>
<td>86.8</td>
<td>80.4</td>
<td>86.3</td>
<td>84.3</td>
<td>83.4</td>
</tr>
<tr>
<td scope="row">IndoBERT (Koto, et al., 2020)</td>
<td>84.1</td>
<td>87.9</td>
<td>71.0</td>
<td>86.4</td>
<td>79.3</td>
<td>88.0</td>
<td><b>86.9</b></td>
<td>83.4</td>
</tr>
<tr>
<td scope="row">IndoBERTweet (1M steps from scratch)</td>
<td>86.2</td>
<td>90.4</td>
<td>76.0</td>
<td><b>88.8</b></td>
<td><b>87.5</b></td>
<td><b>88.1</b></td>
<td>85.4</td>
<td>86.1</td>
</tr>
<tr>
<td scope="row">IndoBERT + Voc adaptation + 200k steps</td>
<td><b>86.6</b></td>
<td><b>92.7</b></td>
<td><b>79.0</b></td>
<td>88.4</td>
<td>84.0</td>
<td>87.7</td>
<td><b>86.9</b></td>
<td><b>86.5</b></td>
</tr>
</table>
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{koto2021indobertweet,
title={IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization},
author={Fajri Koto and Jey Han Lau and Timothy Baldwin},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)},
year={2021}
}
```
|
{"language": ["id"], "license": "apache-2.0", "tags": ["Twitter"], "datasets": ["Twitter 2021"], "widget": [{"text": "guweehh udh ga' paham lg sm [MASK]"}]}
|
indolem/indobertweet-base-uncased
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"Twitter",
"id",
"arxiv:2109.04607",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.04607"
] |
[
"id"
] |
TAGS
#transformers #pytorch #bert #fill-mask #Twitter #id #arxiv-2109.04607 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
IndoBERTweet
============
1. Paper
--------
Fajri Koto, Jey Han Lau, and Timothy Baldwin. *IndoBERTweet: A Pretrained Language Model for Indonesian Twitter
with Effective Domain-Specific Vocabulary Initialization*.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), Dominican Republic (virtual).
2. About
--------
IndoBERTweet is the first large-scale pretrained model for Indonesian Twitter
that is trained by extending a monolingually trained Indonesian BERT model with additive domain-specific vocabulary.
In this paper, we show that initializing domain-specific vocabulary with average-pooling of BERT subword embeddings is more efficient than pretraining from scratch, and more effective than initializing based on word2vec projections.
3. Pretraining Data
-------------------
We crawl Indonesian tweets over a 1-year period using the official Twitter API, from December 2019 to December 2020, with 60 keywords covering 4 main topics: economy, health, education, and government. We obtain in total of 409M word tokens, two times larger than the training data used to pretrain IndoBERT. Due to Twitter policy, this pretraining data will not be released to public.
4. How to use
-------------
Load model and tokenizer (tested with transformers==3.5.1)
Preprocessing Steps:
* lower-case all words
* converting user mentions and URLs into @USER and HTTPURL, respectively
* translating emoticons into text using the emoji package.
5. Results over 7 Indonesian Twitter Datasets
---------------------------------------------
If you use our work, please cite:
|
[] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #Twitter #id #arxiv-2109.04607 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation
|
transformers
|
# GPT2-medium-indonesian
This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first
introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
The demo can be found [here](https://huggingface.co/spaces/indonesian-nlp/gpt2-app).
## How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='indonesian-nlp/gpt2-medium-indonesian')
>>> set_seed(42)
>>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5)
[{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\
“Kau tau, bagaimana dulu kita bertemu?” aku'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\
Tuhan akan memberi lebih dari apa yang kita'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('indonesian-nlp/gpt2-medium-indonesian')
model = GPT2Model.from_pretrained('indonesian-nlp/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('indonesian-nlp/gpt2-medium-indonesian')
model = TFGPT2Model.from_pretrained('indonesian-nlp/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
The training data used for this model are Indonesian websites of [OSCAR](https://oscar-corpus.com/),
[mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). The datasets
contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on
the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of biased content
that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model.
As the OpenAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we
> do not recommend that they be deployed into systems that interact with humans unless the deployers first carry
> out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender,
> race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with
> similar levels of caution around use cases that are sensitive to biases around human attributes.
We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/indonesian-nlp/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/indonesian-nlp/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications.
### Gender bias
We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.

The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).

### Ethnicity bias
We generated 1,200 texts to assess bias across ethnicity and gender vectors. We created prompts with the following scheme:
* Person - we assessed 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)
* Topic - we used 5 different topics:
* random act: *entered home*
* said: *said*
* works as: *works as*
* intent: *let [person] ...*
* define: *is*
Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...)
We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.

### Religion bias
With the same methodology as above, we generated 1,400 texts to assess bias across religion and gender vectors. We assessed 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism), with Neutral (no religion) as a baseline.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.

## Training data
The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4)
and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB
of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py)
and we also only included links that have been cited by the Indonesian Wikipedia.
## Training procedure
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `6d 3h 7m 26s`.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| dataset | train loss | eval loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| ID OSCAR+mc4+Wikipedia (29GB) | 2.79 | 2.696 | 14.826 |
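For reference, the reported perplexity is simply the exponential of the evaluation loss, which can be checked directly:

```python
import math

# exp(eval loss): exp(2.696) ~ 14.8, matching the perplexity in the table up to rounding.
print(math.exp(2.696))
```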
### Tracking
The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-medium-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya).
## Team members
- Akmal ([@Wikidepia](https://huggingface.co/Wikidepia))
- alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner))
- Cahya Wirawan ([@cahya](https://huggingface.co/cahya))
- Galuh Sahid ([@Galuh](https://huggingface.co/Galuh))
- Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia))
- Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli))
- Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))
## Future work
We would like to further pre-train the models with larger and cleaner datasets and fine-tune them to specific domains
if we can get the necessary hardware resources.
|
{"language": "id", "widget": [{"text": "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira."}]}
|
indonesian-nlp/gpt2-medium-indonesian
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #safetensors #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
GPT2-medium-indonesian
======================
This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first
introduced in this paper
and first released at this page.
This model was trained using HuggingFace's Flax framework and is part of the JAX/Flax Community Week
organized by HuggingFace. All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
The demo can be found here.
How to use
----------
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Limitations and bias
--------------------
The training data used for this model are Indonesian websites of OSCAR,
mc4 and Wikipedia. The datasets
contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on
the dataset (see the Training data section), the filtering is by no means a thorough mitigation of biased content
that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model.
As the OpenAI team themselves point out in their model card:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
>
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we
> do not recommend that they be deployed into systems that interact with humans unless the deployers first carry
> out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender,
> race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with
> similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
We have done a basic bias analysis that you can find in this notebook, performed on Indonesian GPT2 medium, based on the bias analysis for Polish GPT2 with modifications.
### Gender bias
We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.
!gender bias - male
The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).
!gender bias - female
### Ethnicity bias
We generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme:
* Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)
* Topic - we will use 5 different topics:
+ random act: *entered home*
+ said: *said*
+ works as: *works as*
+ intent: *let [person] ...*
+ define: *is*
Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...)
We used a model trained on Indonesian hate speech corpus (dataset 1, dataset 2) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.
!bias analysis - ethnicities
### Religion bias
With the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.
!bias analysis - ethnicities
Training data
-------------
The model was trained on a combined dataset of OSCAR, mc4
and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB
of data in total. The mc4 dataset was cleaned using this filtering script
and we also only included links that have been cited by the Indonesian Wikipedia.
Training procedure
------------------
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was '6d 3h 7m 26s'.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
### Tracking
The training process was tracked in TensorBoard and Weights and Biases.
Team members
------------
* Akmal (@Wikidepia)
* alvinwatner (@alvinwatner)
* Cahya Wirawan (@cahya)
* Galuh Sahid (@Galuh)
* Muhammad Agung Hambali (@AyameRushia)
* Muhammad Fhadli (@muhammadfhadli)
* Samsul Rahmadani (@munggok)
Future work
-----------
We would like to further pre-train the models with larger and cleaner datasets and fine-tune them to specific domains
if we can get the necessary hardware resources.
|
[
"### Gender bias\n\n\nWe generated 50 texts starting with prompts \"She/He works as\". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.\n\n\n!gender bias - male\n\n\nThe most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).\n\n\n!gender bias - female",
"### Ethnicity bias\n\n\nWe generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme:\n\n\n* Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)\n* Topic - we will use 5 different topics:\n\t+ random act: *entered home*\n\t+ said: *said*\n\t+ works as: *works as*\n\t+ intent: *let [person] ...*\n\t+ define: *is*\n\n\nSample of generated prompt: \"seorang perempuan sunda masuk ke rumah...\" (a Sundanese woman enters the house...)\n\n\nWe used a model trained on Indonesian hate speech corpus (dataset 1, dataset 2) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.\n\n\nThe following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.\n\n\n!bias analysis - ethnicities",
"### Religion bias\n\n\nWith the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline.\n\n\nThe following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.\n\n\n!bias analysis - ethnicities\n\n\nTraining data\n-------------\n\n\nThe model was trained on a combined dataset of OSCAR, mc4\nand Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB\nof data in total. The mc4 dataset was cleaned using this filtering script\nand we also only included links that have been cited by the Indonesian Wikipedia.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was '6d 3h 7m 26s'.",
"### Evaluation results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### Tracking\n\n\nThe training process was tracked in TensorBoard and Weights and Biases.\n\n\nTeam members\n------------\n\n\n* Akmal (@Wikidepia)\n* alvinwatner (@alvinwatner)\n* Cahya Wirawan (@cahya)\n* Galuh Sahid (@Galuh)\n* Muhammad Agung Hambali (@AyameRushia)\n* Muhammad Fhadli (@muhammadfhadli)\n* Samsul Rahmadani (@munggok)\n\n\nFuture work\n-----------\n\n\nWe would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains\nif we can get the necessary hardware resources."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Gender bias\n\n\nWe generated 50 texts starting with prompts \"She/He works as\". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.\n\n\n!gender bias - male\n\n\nThe most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).\n\n\n!gender bias - female",
"### Ethnicity bias\n\n\nWe generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme:\n\n\n* Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)\n* Topic - we will use 5 different topics:\n\t+ random act: *entered home*\n\t+ said: *said*\n\t+ works as: *works as*\n\t+ intent: *let [person] ...*\n\t+ define: *is*\n\n\nSample of generated prompt: \"seorang perempuan sunda masuk ke rumah...\" (a Sundanese woman enters the house...)\n\n\nWe used a model trained on Indonesian hate speech corpus (dataset 1, dataset 2) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.\n\n\nThe following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.\n\n\n!bias analysis - ethnicities",
"### Religion bias\n\n\nWith the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline.\n\n\nThe following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.\n\n\n!bias analysis - ethnicities\n\n\nTraining data\n-------------\n\n\nThe model was trained on a combined dataset of OSCAR, mc4\nand Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB\nof data in total. The mc4 dataset was cleaned using this filtering script\nand we also only included links that have been cited by the Indonesian Wikipedia.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was '6d 3h 7m 26s'.",
"### Evaluation results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### Tracking\n\n\nThe training process was tracked in TensorBoard and Weights and Biases.\n\n\nTeam members\n------------\n\n\n* Akmal (@Wikidepia)\n* alvinwatner (@alvinwatner)\n* Cahya Wirawan (@cahya)\n* Galuh Sahid (@Galuh)\n* Muhammad Agung Hambali (@AyameRushia)\n* Muhammad Fhadli (@muhammadfhadli)\n* Samsul Rahmadani (@munggok)\n\n\nFuture work\n-----------\n\n\nWe would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains\nif we can get the necessary hardware resources."
] |
text-generation
|
transformers
|
# GPT2-small-indonesian
This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first
introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian).
## How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness,
we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='flax-community/gpt2-small-indonesian')
>>> set_seed(42)
>>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5)
[{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\
“Kau tau, bagaimana dulu kita bertemu?” aku'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\
Tuhan akan memberi lebih dari apa yang kita'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-small-indonesian')
model = GPT2Model.from_pretrained('flax-community/gpt2-small-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-small-indonesian')
model = TFGPT2Model.from_pretrained('flax-community/gpt2-small-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
The training data used for this model are Indonesian websites of [OSCAR](https://oscar-corpus.com/),
[mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). The datasets
contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on
the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of biased content
that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model.
As the OpenAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we
> do not recommend that they be deployed into systems that interact with humans unless the deployers first carry
> out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender,
> race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with
> similar levels of caution around use cases that are sensitive to biases around human attributes.
We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/flax-community/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/flax-community/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications.
### Gender bias
We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.

The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).

### Ethnicity bias
We generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme:
* Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)
* Topic - we will use 5 different topics:
* random act: *entered home*
* said: *said*
* works as: *works as*
* intent: *let [person] ...*
* define: *is*
Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...)
We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.

### Religion bias
With the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.

## Training data
The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4)
and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB
of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py)
and we also only included links that have been cited by the Indonesian Wikipedia.
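As a rough illustration only, the URL-based part of such a cleanup could look like the sketch below; `cited_domains.txt` is a hypothetical file with one allowed domain per line, and the filtering script linked above is the actual reference.

```python
from urllib.parse import urlparse
from datasets import load_dataset

# Hypothetical list of domains cited by the Indonesian Wikipedia, one per line.
with open("cited_domains.txt") as f:
    allowed_domains = {line.strip() for line in f if line.strip()}

# Stream mc4 so the full Indonesian dump never has to be materialized at once.
mc4_id = load_dataset("mc4", "id", split="train", streaming=True)
filtered = mc4_id.filter(lambda example: urlparse(example["url"]).netloc in allowed_domains)

for example in filtered.take(3):
    print(example["url"])
```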
## Training procedure
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `4d 14h 50m 47s`.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| dataset | train loss | eval loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| ID OSCAR+mc4+wikipedia (29GB) | 3.046 | 2.926 | 18.66 |
### Tracking
The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-small-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya).
## Team members
- Akmal ([@Wikidepia](https://huggingface.co/Wikidepia))
- alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner))
- Cahya Wirawan ([@cahya](https://huggingface.co/cahya))
- Galuh Sahid ([@Galuh](https://huggingface.co/Galuh))
- Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia))
- Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli))
- Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))
## Future work
We would like to further pre-train the models with larger and cleaner datasets and fine-tune them to specific domains
if we can get the necessary hardware resources.
|
{"language": "id", "widget": [{"text": "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira."}]}
|
indonesian-nlp/gpt2
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #safetensors #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
GPT2-small-indonesian
=====================
This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first
introduced in this paper
and first released at this page.
This model was trained using HuggingFace's Flax framework and is part of the JAX/Flax Community Week
organized by HuggingFace. All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
The demo can be found here.
How to use
----------
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness,
we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Limitations and bias
--------------------
The training data used for this model are Indonesian websites of OSCAR,
mc4 and Wikipedia. The datasets
contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on
the dataset (see the Training data section), the filtering is by no means a thorough mitigation of biased content
that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model.
As the OpenAI team themselves point out in their model card:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
>
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we
> do not recommend that they be deployed into systems that interact with humans unless the deployers first carry
> out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender,
> race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with
> similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
We have done a basic bias analysis that you can find in this notebook, performed on Indonesian GPT2 medium, based on the bias analysis for Polish GPT2 with modifications.
### Gender bias
We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.
!gender bias - male
The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).
!gender bias - female
### Ethnicity bias
We generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme:
* Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)
* Topic - we will use 5 different topics:
+ random act: *entered home*
+ said: *said*
+ works as: *works as*
+ intent: *let [person] ...*
+ define: *is*
Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...)
We used a model trained on Indonesian hate speech corpus (dataset 1, dataset 2) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.
!bias analysis - ethnicities
### Religion bias
With the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.
!bias analysis - ethnicities
Training data
-------------
The model was trained on a combined dataset of OSCAR, mc4
and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB
of data in total. The mc4 dataset was cleaned using this filtering script
and we also only included links that have been cited by the Indonesian Wikipedia.
Training procedure
------------------
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was '4d 14h 50m 47s'.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
### Tracking
The training process was tracked in TensorBoard and Weights and Biases.
Team members
------------
* Akmal (@Wikidepia)
* alvinwatner (@alvinwatner)
* Cahya Wirawan (@cahya)
* Galuh Sahid (@Galuh)
* Muhammad Agung Hambali (@AyameRushia)
* Muhammad Fhadli (@muhammadfhadli)
* Samsul Rahmadani (@munggok)
Future work
-----------
We would like to further pre-train the models with larger and cleaner datasets and fine-tune them to specific domains
if we can get the necessary hardware resources.
|
[
"### Gender bias\n\n\nWe generated 50 texts starting with prompts \"She/He works as\". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.\n\n\n!gender bias - male\n\n\nThe most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).\n\n\n!gender bias - female",
"### Ethnicity bias\n\n\nWe generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme:\n\n\n* Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)\n* Topic - we will use 5 different topics:\n\t+ random act: *entered home*\n\t+ said: *said*\n\t+ works as: *works as*\n\t+ intent: *let [person] ...*\n\t+ define: *is*\n\n\nSample of generated prompt: \"seorang perempuan sunda masuk ke rumah...\" (a Sundanese woman enters the house...)\n\n\nWe used a model trained on Indonesian hate speech corpus (dataset 1, dataset 2) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.\n\n\nThe following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.\n\n\n!bias analysis - ethnicities",
"### Religion bias\n\n\nWith the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline.\n\n\nThe following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.\n\n\n!bias analysis - ethnicities\n\n\nTraining data\n-------------\n\n\nThe model was trained on a combined dataset of OSCAR, mc4\nand Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB\nof data in total. The mc4 dataset was cleaned using this filtering script\nand we also only included links that have been cited by the Indonesian Wikipedia.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was '4d 14h 50m 47s'.",
"### Evaluation results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### Tracking\n\n\nThe training process was tracked in TensorBoard and Weights and Biases.\n\n\nTeam members\n------------\n\n\n* Akmal (@Wikidepia)\n* alvinwatner (@alvinwatner)\n* Cahya Wirawan (@cahya)\n* Galuh Sahid (@Galuh)\n* Muhammad Agung Hambali (@AyameRushia)\n* Muhammad Fhadli (@muhammadfhadli)\n* Samsul Rahmadani (@munggok)\n\n\nFuture work\n-----------\n\n\nWe would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains\nif we can get the necessary hardware resources."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Gender bias\n\n\nWe generated 50 texts starting with prompts \"She/He works as\". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.\n\n\n!gender bias - male\n\n\nThe most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).\n\n\n!gender bias - female",
"### Ethnicity bias\n\n\nWe generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme:\n\n\n* Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)\n* Topic - we will use 5 different topics:\n\t+ random act: *entered home*\n\t+ said: *said*\n\t+ works as: *works as*\n\t+ intent: *let [person] ...*\n\t+ define: *is*\n\n\nSample of generated prompt: \"seorang perempuan sunda masuk ke rumah...\" (a Sundanese woman enters the house...)\n\n\nWe used a model trained on Indonesian hate speech corpus (dataset 1, dataset 2) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.\n\n\nThe following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.\n\n\n!bias analysis - ethnicities",
"### Religion bias\n\n\nWith the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline.\n\n\nThe following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.\n\n\n!bias analysis - ethnicities\n\n\nTraining data\n-------------\n\n\nThe model was trained on a combined dataset of OSCAR, mc4\nand Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB\nof data in total. The mc4 dataset was cleaned using this filtering script\nand we also only included links that have been cited by the Indonesian Wikipedia.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was '4d 14h 50m 47s'.",
"### Evaluation results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### Tracking\n\n\nThe training process was tracked in TensorBoard and Weights and Biases.\n\n\nTeam members\n------------\n\n\n* Akmal (@Wikidepia)\n* alvinwatner (@alvinwatner)\n* Cahya Wirawan (@cahya)\n* Galuh Sahid (@Galuh)\n* Muhammad Agung Hambali (@AyameRushia)\n* Muhammad Fhadli (@muhammadfhadli)\n* Samsul Rahmadani (@munggok)\n\n\nFuture work\n-----------\n\n\nWe would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains\nif we can get the necessary hardware resources."
] |
automatic-speech-recognition
|
transformers
|
# Multilingual Speech Recognition for Indonesian Languages
This is the model built for the project
[Multilingual Speech Recognition for Indonesian Languages](https://github.com/indonesian-nlp/multilingual-asr).
It is a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice),
[High-quality TTS data for Javanese - SLR41](https://huggingface.co/datasets/openslr), and
[High-quality TTS data for Sundanese - SLR44](https://huggingface.co/datasets/openslr) datasets.
We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/multilingual-asr) to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-indonesian-javanese-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-indonesian-javanese-sundanese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
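If you want to transcribe your own recordings rather than the Common Voice samples above, a minimal resampling sketch is shown below (the file name is a placeholder):

```python
import torchaudio

speech_array, sampling_rate = torchaudio.load("my_recording.wav")  # placeholder path
if sampling_rate != 16_000:
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
speech = speech_array.squeeze().numpy()  # 16 kHz array ready for the processor
```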
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-indonesian-javanese-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-indonesian-javanese-sundanese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 11.57 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
{"language": ["id", "jv", "sun"], "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard", "id", "jv", "robust-speech-event", "speech", "su"], "datasets": ["mozilla-foundation/common_voice_7_0", "openslr", "magic_data", "titml"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Indonesian Javanese and Sundanese by Indonesian NLP", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 6.1", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 4.056, "name": "Test WER"}, {"type": "cer", "value": 1.472, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "id"}, "metrics": [{"type": "wer", "value": 4.492, "name": "Test WER"}, {"type": "cer", "value": 1.577, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "id"}, "metrics": [{"type": "wer", "value": 48.94, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "id"}, "metrics": [{"type": "wer", "value": 68.95, "name": "Test WER"}]}]}]}
|
indonesian-nlp/wav2vec2-indonesian-javanese-sundanese
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"id",
"jv",
"robust-speech-event",
"speech",
"su",
"sun",
"dataset:mozilla-foundation/common_voice_7_0",
"dataset:openslr",
"dataset:magic_data",
"dataset:titml",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id",
"jv",
"sun"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #id #jv #robust-speech-event #speech #su #sun #dataset-mozilla-foundation/common_voice_7_0 #dataset-openslr #dataset-magic_data #dataset-titml #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Multilingual Speech Recognition for Indonesian Languages
This is the model built for the project
Multilingual Speech Recognition for Indonesian Languages.
It is a fine-tuned facebook/wav2vec2-large-xlsr-53
model on the Indonesian Common Voice dataset,
High-quality TTS data for Javanese - SLR41, and
High-quality TTS data for Sundanese - SLR44 datasets.
We also provide a live demo to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
Test Result: 11.57 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
(will be available soon)
|
[
"# Multilingual Speech Recognition for Indonesian Languages\n\nThis is the model built for the project \nMultilingual Speech Recognition for Indonesian Languages.\nIt is a fine-tuned facebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset, \nHigh-quality TTS data for Javanese - SLR41, and\nHigh-quality TTS data for Sundanese - SLR44 datasets.\n\nWe also provide a live demo to test the model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 11.57 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #id #jv #robust-speech-event #speech #su #sun #dataset-mozilla-foundation/common_voice_7_0 #dataset-openslr #dataset-magic_data #dataset-titml #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Multilingual Speech Recognition for Indonesian Languages\n\nThis is the model built for the project \nMultilingual Speech Recognition for Indonesian Languages.\nIt is a fine-tuned facebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset, \nHigh-quality TTS data for Javanese - SLR41, and\nHigh-quality TTS data for Sundanese - SLR44 datasets.\n\nWe also provide a live demo to test the model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 11.57 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Indonesian
This is the baseline for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
It was trained using the default hyperparameters for 2x30 epochs.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.55 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/indonesian-speech-recognition)
(will be available soon)
|
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesian Baseline by indonesian-nlp", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 25.55, "name": "Test WER"}]}]}]}
|
indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Indonesian
This is the baseline for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
facebook/wav2vec2-large-xlsr-53
model on the Indonesian Common Voice dataset.
It was trained using the default hyperparameters for 2x30 epochs.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
Test Result: 25.55 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
(will be available soon)
|
[
"# Wav2Vec2-Large-XLSR-Indonesian\n\nThis is the baseline for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset.\nIt was trained using the default hyperparamer and for 2x30 epochs.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 25.55 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Indonesian\n\nThis is the baseline for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset.\nIt was trained using the default hyperparamer and for 2x30 epochs.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 25.55 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Indonesian
This is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 14.29 %
## Training
The Common Voice `train`, `validation`, and [synthetic voice datasets](https://cloud.uncool.ai/index.php/s/Kg4C6f5NJGN9ZdR) were used for training.
The script used for training can be found [here](https://github.com/indonesian-nlp/wav2vec2-indonesian)
|
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesian by Indonesian NLP", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 14.29, "name": "Test WER"}]}]}]}
|
indonesian-nlp/wav2vec2-large-xlsr-indonesian
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Wav2Vec2-Large-XLSR-Indonesian
This is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
facebook/wav2vec2-large-xlsr-53
model on the Indonesian Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
Test Result: 14.29 %
## Training
The Common Voice 'train', 'validation', and synthetic voice datasets were used for training.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Indonesian\n\nThis is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 14.29 %",
"## Training\n\nThe Common Voice 'train', 'validation', and synthetic voice datasets were used for training.\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Wav2Vec2-Large-XLSR-Indonesian\n\nThis is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 14.29 %",
"## Training\n\nThe Common Voice 'train', 'validation', and synthetic voice datasets were used for training.\n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Automatic Speech Recognition for Luganda
This is the model built for the
[Mozilla Luganda Automatic Speech Recognition competition](https://zindi.africa/competitions/mozilla-luganda-automatic-speech-recognition).
It is a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Luganda Common Voice dataset](https://huggingface.co/datasets/common_voice) version 7.0.
We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/luganda-asr) to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "‘", "’", "’"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
WER without KenLM: 15.38 %
WER With KenLM:
**Test Result**: 7.53 %
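The KenLM result was presumably obtained with beam-search decoding over the CTC logits using an external n-gram language model. That language model is not shipped with this repository, but the decoding step can be sketched with `pyctcdecode` roughly as follows (the `luganda_lm.binary` path and the dummy waveform are placeholders):
```python
import numpy as np
import torch
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")

# Build the CTC decoder: tokens ordered by their ids, with the word delimiter
# "|" mapped to a space so the decoded text contains normal word boundaries.
vocab = [tok for tok, _ in sorted(processor.tokenizer.get_vocab().items(), key=lambda kv: kv[1])]
vocab = [" " if tok == "|" else tok for tok in vocab]
decoder = build_ctcdecoder(vocab, kenlm_model_path="luganda_lm.binary")  # placeholder LM path

speech = np.zeros(16_000, dtype=np.float32)  # replace with a real 16 kHz waveform
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(decoder.decode(logits[0].numpy()))
```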
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/luganda-asr)
|
{"language": "lg", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Luganda by Indonesian-NLP", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice lg", "type": "common_voice", "args": "lg"}, "metrics": [{"type": "wer", "value": 7.53, "name": "Test WER"}]}]}]}
|
indonesian-nlp/wav2vec2-luganda
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"lg",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"lg"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #lg #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Automatic Speech Recognition for Luganda
This is the model built for the
Mozilla Luganda Automatic Speech Recognition competition.
It is a fine-tuned facebook/wav2vec2-large-xlsr-53
model on the Luganda Common Voice dataset version 7.0.
We also provide a live demo to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
WER without KenLM: 15.38 %
WER With KenLM:
Test Result: 7.53 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
|
[
"# Automatic Speech Recognition for Luganda\n\nThis is the model built for the \nMozilla Luganda Automatic Speech Recognition competition.\nIt is a fine-tuned facebook/wav2vec2-large-xlsr-53\nmodel on the Luganda Common Voice dataset version 7.0.\n\nWe also provide a live demo to test the model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nWER without KenLM: 15.38 %\n\nWER With KenLM:\n\nTest Result: 7.53 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #lg #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Automatic Speech Recognition for Luganda\n\nThis is the model built for the \nMozilla Luganda Automatic Speech Recognition competition.\nIt is a fine-tuned facebook/wav2vec2-large-xlsr-53\nmodel on the Luganda Common Voice dataset version 7.0.\n\nWe also provide a live demo to test the model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nWER without KenLM: 15.38 %\n\nWER With KenLM:\n\nTest Result: 7.53 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0830
- Precision: 0.8919
- Recall: 0.8632
- F1: 0.8773
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
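Even so, the checkpoint can be tried out directly with the token-classification pipeline; the sketch below is illustrative only (the example sentence is arbitrary, and the entity labels are whichever mim_gold_ner tag set is stored in the model config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="indridinn/IceBERT-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jón Sigurðsson fæddist á Hrafnseyri við Arnarfjörð."))
```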
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
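For reference, these values map onto a `TrainingArguments` object roughly as follows (a sketch only; the output directory is an assumption and the actual training script is not included in this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="IceBERT-finetuned-ner",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```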
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0533 | 1.0 | 2904 | 0.0777 | 0.8773 | 0.8527 | 0.8648 | 0.9834 |
| 0.0271 | 2.0 | 5808 | 0.0794 | 0.8740 | 0.8537 | 0.8638 | 0.9835 |
| 0.0165 | 3.0 | 8712 | 0.0830 | 0.8919 | 0.8632 | 0.8773 | 0.9851 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "gpl-3.0", "tags": ["generated_from_trainer"], "datasets": ["mim_gold_ner"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "IceBERT-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "mim_gold_ner", "type": "mim_gold_ner", "args": "mim-gold-ner"}, "metrics": [{"type": "precision", "value": 0.8918518518518519, "name": "Precision"}, {"type": "recall", "value": 0.8631855657784682, "name": "Recall"}, {"type": "f1", "value": 0.8772845953002611, "name": "F1"}, {"type": "accuracy", "value": 0.9851436434474428, "name": "Accuracy"}]}]}]}
|
indridinn/IceBERT-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:gpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #dataset-mim_gold_ner #license-gpl-3.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
IceBERT-finetuned-ner
=====================
This model is a fine-tuned version of vesteinn/IceBERT on the mim\_gold\_ner dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0830
* Precision: 0.8919
* Recall: 0.8632
* F1: 0.8773
* Accuracy: 0.9851
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #dataset-mim_gold_ner #license-gpl-3.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0907
- Precision: 0.8666
- Recall: 0.8511
- F1: 0.8588
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
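Even so, the checkpoint can be queried directly without the pipeline helper; the sketch below is illustrative only (the example sentence is arbitrary, and the label names come from the model's own config):
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("indridinn/XLMR-ENIS-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("indridinn/XLMR-ENIS-finetuned-ner")

text = "Vigdís Finnbogadóttir var forseti Íslands."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = logits.argmax(dim=-1)[0]

# Print each sub-word token with its predicted label.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id.item()])
```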
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0573 | 1.0 | 2904 | 0.0961 | 0.8543 | 0.8134 | 0.8334 | 0.9806 |
| 0.0314 | 2.0 | 5808 | 0.0912 | 0.8709 | 0.8282 | 0.8490 | 0.9819 |
| 0.0203 | 3.0 | 8712 | 0.0907 | 0.8666 | 0.8511 | 0.8588 | 0.9834 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "agpl-3.0", "tags": ["generated_from_trainer"], "datasets": ["mim_gold_ner"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "XLMR-ENIS-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "mim_gold_ner", "type": "mim_gold_ner", "args": "mim-gold-ner"}, "metrics": [{"type": "precision", "value": 0.8666203542896839, "name": "Precision"}, {"type": "recall", "value": 0.8510517339397385, "name": "Recall"}, {"type": "f1", "value": 0.8587654887563103, "name": "F1"}, {"type": "accuracy", "value": 0.9833747693058585, "name": "Accuracy"}]}]}]}
|
indridinn/XLMR-ENIS-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #dataset-mim_gold_ner #license-agpl-3.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
XLMR-ENIS-finetuned-ner
=======================
This model is a fine-tuned version of vesteinn/XLMR-ENIS on the mim\_gold\_ner dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0907
* Precision: 0.8666
* Recall: 0.8511
* F1: 0.8588
* Accuracy: 0.9834
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #dataset-mim_gold_ner #license-agpl-3.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |