pipeline_tag
stringclasses
48 values
library_name
stringclasses
198 values
text
stringlengths
1
900k
metadata
stringlengths
2
438k
id
stringlengths
5
122
last_modified
null
tags
listlengths
1
1.84k
sha
null
created_at
stringlengths
25
25
arxiv
listlengths
0
201
languages
listlengths
0
1.83k
tags_str
stringlengths
17
9.34k
text_str
stringlengths
0
389k
text_lists
listlengths
0
722
processed_texts
listlengths
1
723
tokens_length
listlengths
1
723
input_texts
listlengths
1
1
fill-mask
transformers
# BERT for Vietnamese, trained on more than 20 GB of news data Applied to a sentiment analysis task using [AIViVN's comments dataset](https://www.aivivn.com/contests/6), the model achieved 0.90268 on the public leaderboard (the winner's score was 0.90087). Bert4news is used in a Vietnamese toolkit (word segmentation and named entity recognition) at [ViNLP](https://github.com/bino282/ViNLP). We use SentencePiece for word segmentation, basic BERT tokenization, and the same configuration as BERT base, with lowercase = False. You can download the trained model: - [tensorflow](https://drive.google.com/file/d/1X-sRDYf7moS_h61J3L79NkMVGHP-P-k5/view?usp=sharing) - [pytorch](https://drive.google.com/file/d/11aFSTpYIurn-oI2XpAmcCTccB_AonMOu/view?usp=sharing) Use with huggingface/transformers: ```python import torch from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("NlpHUST/vibert4news-base-cased") bert_model = BertModel.from_pretrained("NlpHUST/vibert4news-base-cased") line = "Tôi là sinh viên trường Bách Khoa Hà Nội ." input_id = tokenizer.encode(line, add_special_tokens=True) att_mask = [int(token_id > 0) for token_id in input_id] input_ids = torch.tensor([input_id]) att_masks = torch.tensor([att_mask]) with torch.no_grad(): features = bert_model(input_ids, att_masks) print(features) ``` # Vietnamese toolkit with BERT ViNLP is an annotation system for Vietnamese; it fine-tunes the pretrained [Bert4news](https://github.com/bino282/bert4news/) on Vietnamese NLP tasks such as word segmentation and named entity recognition (NER), and achieves high accuracy. ### Installation ```bash git clone https://github.com/bino282/ViNLP.git cd ViNLP python setup.py develop build ``` ### Test Segmentation The model achieved an F1 score of 0.984 on the VLSP 2013 dataset. |Model | F1 | |--------|-----------| | **BertVnTokenizer** | 98.40 | | **DongDu** | 96.90 | | **JvnSegmenter-Maxent** | 97.00 | | **JvnSegmenter-CRFs** | 97.06 | | **VnTokenizer** | 97.33 | | **UETSegmenter** | 97.87 | | **VnCoreNLP (i.e. RDRsegmenter)** | 97.90 | ```python from ViNLP import BertVnTokenizer tokenizer = BertVnTokenizer() sentences = tokenizer.split(["Tổng thống Donald Trump ký sắc lệnh cấm mọi giao dịch của Mỹ với ByteDance và Tecent - chủ sở hữu của 2 ứng dụng phổ biến TikTok và WeChat sau 45 ngày nữa."]) print(sentences[0]) ``` ``` Tổng_thống Donald_Trump ký sắc_lệnh cấm mọi giao_dịch của Mỹ với ByteDance và Tecent - chủ_sở_hữu của 2 ứng_dụng phổ_biến TikTok và WeChat sau 45 ngày nữa . 
``` ### Test Named Entity Recognition The model achieved an F1 score of 0.786 on VLSP 2018 for all named entities, including nested entities. |Model | F1 | |--------|-----------| | **BertVnNer** | 78.60 | | **VNER Attentive Neural Network** | 77.52 | | **vietner CRF (ngrams + word shapes + cluster + w2v)** | 76.63 | | **ZA-NER BiLSTM** | 74.70 | ```python from ViNLP import BertVnNer bert_ner_model = BertVnNer() sentence = "Theo SCMP, báo cáo của CSIS với tên gọi Định hình Tương lai Chính sách của Mỹ với Trung Quốc cũng cho thấy sự ủng hộ tương đối rộng rãi của các chuyên gia về việc cấm Huawei, tập đoàn viễn thông khổng lồ của Trung Quốc" entities = bert_ner_model.annotate([sentence]) print(entities) ``` ``` [{'ORGANIZATION': ['SCMP', 'CSIS', 'Huawei'], 'LOCATION': ['Mỹ', 'Trung Quốc']}] ``` Run training with the base config: ```bash python train_pytorch.py \ --model_path=bert4news.pytorch \ --max_len=200 \ --batch_size=16 \ --epochs=6 \ --lr=2e-5 ``` ### Contact information For personal communication related to this project, please contact Nha Nguyen Van (nha282@gmail.com).
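Since the card's pipeline tag is fill-mask, the checkpoint can also be queried through the transformers fill-mask pipeline in addition to the feature-extraction snippet above. The sketch below is not from the original card; the Vietnamese prompt is illustrative only and it assumes the hosted weights include the masked-LM head.

```python
from transformers import pipeline

# Minimal fill-mask sketch (assumption: the hosted checkpoint ships a masked-LM head).
fill_mask = pipeline("fill-mask", model="NlpHUST/vibert4news-base-cased")

# Illustrative Vietnamese prompt; [MASK] is the standard BERT mask token used by this tokenizer.
for prediction in fill_mask("Hà Nội là [MASK] của Việt Nam ."):
    print(prediction["token_str"], prediction["score"])
```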
{"language": "vn"}
NlpHUST/vibert4news-base-cased
null
[ "transformers", "pytorch", "safetensors", "fill-mask", "vn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vn" ]
TAGS #transformers #pytorch #safetensors #fill-mask #vn #autotrain_compatible #endpoints_compatible #region-us
BERT for Vietnamese is trained on more 20 GB news dataset ========================================================= Apply for task sentiment analysis on using AIViVN's comments dataset The model achieved 0.90268 on the public leaderboard, (winner's score is 0.90087) Bert4news is used for a toolkit Vietnames(segmentation and Named Entity Recognition) at ViNLPtoolkit(URL We use word sentencepiece, use basic bert tokenization and same config with bert base with lowercase = False. You can download trained model: * tensorflow. * pytorch. Use with huggingface/transformers Vietnamese toolkit with bert ============================ ViNLP is a system annotation for Vietnamese, it use pretrain Bert4news to fine-turning to NLP problems in Vietnamese components of wordsegmentation,Named entity recognition (NER) and achieve high accuravy. ### Installation ### Test Segmentation The model achieved F1 score : 0.984 on VLSP 2013 dataset ### Test Named Entity Recognition The model achieved F1 score VLSP 2018 for all named entities including nested entities : 0.786 Run training with base config ### Contact information For personal communication related to this project, please contact Nha Nguyen Van (nha282@URL).
[ "### Installation", "### Test Segmentation\n\n\nThe model achieved F1 score : 0.984 on VLSP 2013 dataset", "### Test Named Entity Recognition\n\n\nThe model achieved F1 score VLSP 2018 for all named entities including nested entities : 0.786\n\n\n\nRun training with base config", "### Contact information\n\n\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)." ]
[ "TAGS\n#transformers #pytorch #safetensors #fill-mask #vn #autotrain_compatible #endpoints_compatible #region-us \n", "### Installation", "### Test Segmentation\n\n\nThe model achieved F1 score : 0.984 on VLSP 2013 dataset", "### Test Named Entity Recognition\n\n\nThe model achieved F1 score VLSP 2018 for all named entities including nested entities : 0.786\n\n\n\nRun training with base config", "### Contact information\n\n\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)." ]
[ 33, 4, 23, 36, 29 ]
[ "TAGS\n#transformers #pytorch #safetensors #fill-mask #vn #autotrain_compatible #endpoints_compatible #region-us \n### Installation### Test Segmentation\n\n\nThe model achieved F1 score : 0.984 on VLSP 2013 dataset### Test Named Entity Recognition\n\n\nThe model achieved F1 score VLSP 2018 for all named entities including nested entities : 0.786\n\n\n\nRun training with base config### Contact information\n\n\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)." ]
text-generation
transformers
# Hagrid DialoGPT medium model
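The card itself contains no usage snippet. As a hedged sketch (not from this card), the checkpoint can be driven with the standard DialoGPT multi-turn chat pattern from the upstream DialoGPT model card, substituting this model's name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard DialoGPT-style chat loop, adapted as a sketch for this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("NoLawz/DialoGPT-medium-hagrid")
model = AutoModelForCausalLM.from_pretrained("NoLawz/DialoGPT-medium-hagrid")

chat_history_ids = None
for step in range(3):
    # Encode the user message and append the end-of-sequence token.
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    # Append the new message to the running conversation history.
    bot_input_ids = new_input_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_input_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # Print only the newly generated reply tokens.
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```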
{"tags": ["conversational"]}
NoLawz/DialoGPT-medium-hagrid
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Hagrid DialoGPT medium model
[ "# Hagrid DialoGPT medium model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Hagrid DialoGPT medium model" ]
[ 39, 9 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Hagrid DialoGPT medium model" ]
text-generation
transformers
# Harry Potter DialoGPT medium model
{"tags": ["conversational"]}
NoLawz/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT medium model
[ "# Harry Potter DialoGPT medium model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT medium model" ]
[ 39, 8 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT medium model" ]
text-generation
transformers
# SpongeBob DialoGPT medium model
{"tags": ["conversational"]}
NoLawz/DialoGPT-medium-spongebob
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Spong Bob DialoGPT medium model
[ "# Spong Bob DialoGPT medium model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Spong Bob DialoGPT medium model" ]
[ 39, 9 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Spong Bob DialoGPT medium model" ]
text-generation
transformers
# NLGP docstring model The NLGP docstring model was introduced in the paper [Natural Language-Guided Programming](https://arxiv.org/abs/2108.05198). The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language **intent** in a certain code **context** (see the example below). Also see the [NLGP natural](https://huggingface.co/Nokia/nlgp-natural) model. This work was carried out by a research team in Nokia Bell Labs. **Context** ```py import matplotlib.pyplot as plt values = [1, 2, 3, 4] labels = ["a", "b", "c", "d"] ``` **Intent** ```py # plot a bar chart ``` **Prediction** ```py plt.bar(labels, values) plt.show() ``` ## Usage ```py import re from transformers import GPT2LMHeadModel, GPT2TokenizerFast # load the model tok = GPT2TokenizerFast.from_pretrained("Nokia/nlgp-docstring") model = GPT2LMHeadModel.from_pretrained("Nokia/nlgp-docstring") # preprocessing functions num_spaces = [2, 4, 6, 8, 10, 12, 14, 16, 18] def preprocess(context, query): """ Encodes context + query as a single string and replaces whitespace with special tokens <|2space|>, <|4space|>, ... """ input_str = f"{context}\n{query} <|endofcomment|>\n" indentation_symbols = {n: f"<|{n}space|>" for n in num_spaces} m = re.match("^[ ]+", input_str) if not m: return input_str leading_whitespace = m.group(0) N = len(leading_whitespace) for n in num_spaces: leading_whitespace = leading_whitespace.replace(n * " ", indentation_symbols[n]) return leading_whitespace + input_str[N:] detokenize_pattern = re.compile(fr"<\|(\d+)space\|>") def postprocess(output): output = output.split("<|cell|>")[0] def insert_space(m): num_spaces = int(m.group(1)) return num_spaces * " " return detokenize_pattern.sub(insert_space, output) # inference code_context = """ import matplotlib.pyplot as plt values = [1, 2, 3, 4] labels = ["a", "b", "c", "d"] """ query = "# plot a bar chart" input_str = preprocess(code_context, query) input_ids = tok(input_str, return_tensors="pt").input_ids max_length = 150 # don't generate output longer than this length total_max_length = min(1024 - input_ids.shape[-1], input_ids.shape[-1] + 150) # total = input + output input_and_output = model.generate( input_ids=input_ids, max_length=total_max_length, min_length=10, do_sample=False, num_beams=4, early_stopping=True, eos_token_id=tok.encode("<|cell|>")[0] ) output = input_and_output[:, input_ids.shape[-1]:] # remove the tokens that correspond to the input_str output_str = tok.decode(output[0]) postprocess(output_str) ``` ## License and copyright Copyright 2021 Nokia Licensed under the Apache License 2.0 SPDX-License-Identifier: Apache-2.0
{"language": ["en", "code"], "license": "apache-2.0", "tags": ["code completion", "code generation"]}
Nokia/nlgp-docstring
null
[ "transformers", "pytorch", "gpt2", "text-generation", "code completion", "code generation", "en", "code", "arxiv:2108.05198", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2108.05198" ]
[ "en", "code" ]
TAGS #transformers #pytorch #gpt2 #text-generation #code completion #code generation #en #code #arxiv-2108.05198 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# NLGP docstring model The NLGP docstring model was introduced in the paper Natural Language-Guided Programming. The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language intent in a certain code context (see the example below). Also see the NLGP natural model. This work was carried out by a research team in Nokia Bell Labs. Context Intent Prediction ## Usage ## License and copyright Copyright 2021 Nokia Licensed under the Apache License 2.0 SPDX-License-Identifier: Apache-2.0
[ "# NLGP docstring model\n\nThe NLGP docstring model was introduced in the paper Natural Language-Guided Programming. The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language intent in a certain code context (see the example below). \nAlso see the NLGP natural model.\n\nThis work was carried out by a research team in Nokia Bell Labs.\n\nContext\n\n\nIntent\n\n\nPrediction", "## Usage", "## License and copyright\n\nCopyright 2021 Nokia\n\nLicensed under the Apache License 2.0\n\nSPDX-License-Identifier: Apache-2.0" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #code completion #code generation #en #code #arxiv-2108.05198 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# NLGP docstring model\n\nThe NLGP docstring model was introduced in the paper Natural Language-Guided Programming. The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language intent in a certain code context (see the example below). \nAlso see the NLGP natural model.\n\nThis work was carried out by a research team in Nokia Bell Labs.\n\nContext\n\n\nIntent\n\n\nPrediction", "## Usage", "## License and copyright\n\nCopyright 2021 Nokia\n\nLicensed under the Apache License 2.0\n\nSPDX-License-Identifier: Apache-2.0" ]
[ 65, 91, 3, 30 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #code completion #code generation #en #code #arxiv-2108.05198 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# NLGP docstring model\n\nThe NLGP docstring model was introduced in the paper Natural Language-Guided Programming. The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language intent in a certain code context (see the example below). \nAlso see the NLGP natural model.\n\nThis work was carried out by a research team in Nokia Bell Labs.\n\nContext\n\n\nIntent\n\n\nPrediction## Usage## License and copyright\n\nCopyright 2021 Nokia\n\nLicensed under the Apache License 2.0\n\nSPDX-License-Identifier: Apache-2.0" ]
text-generation
transformers
# NLGP natural model The NLGP natural model was introduced in the paper [Natural Language-Guided Programming](https://arxiv.org/abs/2108.05198). The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language **intent** in a certain code **context** (see the example below). This work was carried out by a research team in Nokia Bell Labs. **Context** ```py import matplotlib.pyplot as plt values = [1, 2, 3, 4] labels = ["a", "b", "c", "d"] ``` **Intent** ```py # plot a bar chart ``` **Prediction** ```py plt.bar(labels, values) plt.show() ``` ## Usage ```py import re from transformers import GPT2LMHeadModel, GPT2TokenizerFast # load the model tok = GPT2TokenizerFast.from_pretrained("Nokia/nlgp-natural") model = GPT2LMHeadModel.from_pretrained("Nokia/nlgp-natural") # preprocessing functions num_spaces = [2, 4, 6, 8, 10, 12, 14, 16, 18] def preprocess(context, query): """ Encodes context + query as a single string and replaces whitespace with special tokens <|2space|>, <|4space|>, ... """ input_str = f"{context}\n{query} <|endofcomment|>\n" indentation_symbols = {n: f"<|{n}space|>" for n in num_spaces} m = re.match("^[ ]+", input_str) if not m: return input_str leading_whitespace = m.group(0) N = len(leading_whitespace) for n in num_spaces: leading_whitespace = leading_whitespace.replace(n * " ", indentation_symbols[n]) return leading_whitespace + input_str[N:] detokenize_pattern = re.compile(fr"<\|(\d+)space\|>") def postprocess(output): output = output.split("<|cell|>")[0] def insert_space(m): num_spaces = int(m.group(1)) return num_spaces * " " return detokenize_pattern.sub(insert_space, output) # inference code_context = """ import matplotlib.pyplot as plt values = [1, 2, 3, 4] labels = ["a", "b", "c", "d"] """ query = "# plot a bar chart" input_str = preprocess(code_context, query) input_ids = tok(input_str, return_tensors="pt").input_ids max_length = 150 # don't generate output longer than this length total_max_length = min(1024 - input_ids.shape[-1], input_ids.shape[-1] + 150) # total = input + output input_and_output = model.generate( input_ids=input_ids, max_length=total_max_length, min_length=10, do_sample=False, num_beams=4, early_stopping=True, eos_token_id=tok.encode("<|cell|>")[0] ) output = input_and_output[:, input_ids.shape[-1]:] # remove the tokens that correspond to the input_str output_str = tok.decode(output[0]) postprocess(output_str) ``` ## License and copyright Copyright 2021 Nokia Licensed under the Apache License 2.0 SPDX-License-Identifier: Apache-2.0
{"language": ["en", "code"], "license": "apache-2.0", "tags": ["code completion", "code generation"]}
Nokia/nlgp-natural
null
[ "transformers", "pytorch", "gpt2", "text-generation", "code completion", "code generation", "en", "code", "arxiv:2108.05198", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2108.05198" ]
[ "en", "code" ]
TAGS #transformers #pytorch #gpt2 #text-generation #code completion #code generation #en #code #arxiv-2108.05198 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# NLGP natural model The NLGP natural model was introduced in the paper Natural Language-Guided Programming. The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language intent in a certain code context (see the example below). This work was carried out by a research team in Nokia Bell Labs. Context Intent Prediction ## Usage ## License and copyright Copyright 2021 Nokia Licensed under the Apache License 2.0 SPDX-License-Identifier: Apache-2.0
[ "# NLGP natural model\n\nThe NLGP natural model was introduced in the paper Natural Language-Guided Programming. The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language intent in a certain code context (see the example below). This work was carried out by a research team in Nokia Bell Labs.\n\nContext\n\n\nIntent\n\n\nPrediction", "## Usage", "## License and copyright\n\nCopyright 2021 Nokia\n\nLicensed under the Apache License 2.0\n\nSPDX-License-Identifier: Apache-2.0" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #code completion #code generation #en #code #arxiv-2108.05198 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# NLGP natural model\n\nThe NLGP natural model was introduced in the paper Natural Language-Guided Programming. The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language intent in a certain code context (see the example below). This work was carried out by a research team in Nokia Bell Labs.\n\nContext\n\n\nIntent\n\n\nPrediction", "## Usage", "## License and copyright\n\nCopyright 2021 Nokia\n\nLicensed under the Apache License 2.0\n\nSPDX-License-Identifier: Apache-2.0" ]
[ 65, 79, 3, 30 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #code completion #code generation #en #code #arxiv-2108.05198 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# NLGP natural model\n\nThe NLGP natural model was introduced in the paper Natural Language-Guided Programming. The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language intent in a certain code context (see the example below). This work was carried out by a research team in Nokia Bell Labs.\n\nContext\n\n\nIntent\n\n\nPrediction## Usage## License and copyright\n\nCopyright 2021 Nokia\n\nLicensed under the Apache License 2.0\n\nSPDX-License-Identifier: Apache-2.0" ]
automatic-speech-recognition
transformers
# Wav2vec2 German Model This model has been fine-tuned from wav2vec2-large-xlsr-53 on the German CommonVoice dataset. It achieves a WER of 11.26 on the full test dataset. It was trained largely with the code provided by [Max Idahl](https://huggingface.co/maxidl/wav2vec2-large-xlsr-german), with small adjustments to data preprocessing and training parameters. You can use it to transcribe your own files with the following code. Please note that your input file must be a *.wav file, encoded at 16 kHz and single channel. To convert an audio file using ffmpeg use: "ffmpeg -i input.wav -ar 16000 -ac 1 output.wav". The transcription process is very memory-consuming (around 10 GB per 10 seconds of audio). If the script ends with "Killed", the Python interpreter ran out of memory; in that case, try a shorter audio file. ```python # !pip3 install transformers torch soundfile import soundfile as sf import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer # load pretrained model tokenizer = Wav2Vec2Tokenizer.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german") model = Wav2Vec2ForCTC.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german") # load audio audio_input, _ = sf.read("/path/to/your/audio.wav") # transcribe input_values = tokenizer(audio_input, return_tensors="pt").input_values logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = tokenizer.batch_decode(predicted_ids)[0] print(str(transcription)) ``` To evaluate the model on the full CommonVoice test dataset, run this script: ```python import re import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "de", split="test") # use "test[:1%]" for 1% sample wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german") model = Wav2Vec2ForCTC.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=4) # batch_size=8 -> requires ~14.5GB GPU memory # Chunked version, see https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/5: import jiwer def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) print("Total (chunk_size=1000), WER: {:2f}".format(100 * chunked_wer(result["pred_strings"], result["sentence"], chunk_size=1000))) ``` Output: Total (chunk_size=1000), WER: 11.256522
{}
Noricum/wav2vec2-large-xlsr-53-german
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
# Wav2vec2 German Model This model has been fine-tuned on the wav2vec-large-xlsr-53 with the German CommonVoice dataset. It achieves a 11.26 WER on the full test dataset. It was basically trained with the code provided by Max Idahl with small adjustments in data preprocessing and on training parameters. You can use it to transcribe your own files by the following code. Please note, that your input file must be *.wav, encoded in 16 kHz and be single channel. To convert an audio file using ffmpeg use: "ffmpeg -i URL -ar 16000 -ac 1 URL". The transcribe process is very memory consuming (around 10GB per 10 seconds). If the script ends with "Killed" it means the Python interpreter ran out of memory. In this case, try with a shorter audio file. To evaluate the model on the full CommonVoice test dataset, run this script: Output: Total (chunk_size=1000), WER: 11.256522
[ "# Wav2vec2 German Model\n \n This model has been fine-tuned on the wav2vec-large-xlsr-53 with the German CommonVoice dataset.\n \n It achieves a 11.26 WER on the full test dataset.\n It was basically trained with the code provided by Max Idahl with small adjustments in data preprocessing and on training parameters.\n \n You can use it to transcribe your own files by the following code. Please note, that your input file must be *.wav, encoded in 16 kHz and be single channel. To convert an audio file using ffmpeg use: \"ffmpeg -i URL -ar 16000 -ac 1 URL\". The transcribe process is very memory consuming (around 10GB per 10 seconds). If the script ends with \"Killed\" it means the Python interpreter ran out of memory. In this case, try with a shorter audio file.\n \n\n\nTo evaluate the model on the full CommonVoice test dataset, run this script:\n\n\n\nOutput: Total (chunk_size=1000), WER: 11.256522" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n", "# Wav2vec2 German Model\n \n This model has been fine-tuned on the wav2vec-large-xlsr-53 with the German CommonVoice dataset.\n \n It achieves a 11.26 WER on the full test dataset.\n It was basically trained with the code provided by Max Idahl with small adjustments in data preprocessing and on training parameters.\n \n You can use it to transcribe your own files by the following code. Please note, that your input file must be *.wav, encoded in 16 kHz and be single channel. To convert an audio file using ffmpeg use: \"ffmpeg -i URL -ar 16000 -ac 1 URL\". The transcribe process is very memory consuming (around 10GB per 10 seconds). If the script ends with \"Killed\" it means the Python interpreter ran out of memory. In this case, try with a shorter audio file.\n \n\n\nTo evaluate the model on the full CommonVoice test dataset, run this script:\n\n\n\nOutput: Total (chunk_size=1000), WER: 11.256522" ]
[ 32, 234 ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n# Wav2vec2 German Model\n \n This model has been fine-tuned on the wav2vec-large-xlsr-53 with the German CommonVoice dataset.\n \n It achieves a 11.26 WER on the full test dataset.\n It was basically trained with the code provided by Max Idahl with small adjustments in data preprocessing and on training parameters.\n \n You can use it to transcribe your own files by the following code. Please note, that your input file must be *.wav, encoded in 16 kHz and be single channel. To convert an audio file using ffmpeg use: \"ffmpeg -i URL -ar 16000 -ac 1 URL\". The transcribe process is very memory consuming (around 10GB per 10 seconds). If the script ends with \"Killed\" it means the Python interpreter ran out of memory. In this case, try with a shorter audio file.\n \n\n\nTo evaluate the model on the full CommonVoice test dataset, run this script:\n\n\n\nOutput: Total (chunk_size=1000), WER: 11.256522" ]
text-generation
transformers
# distilgpt2-base-pretrained-he A tiny GPT2-based Hebrew text generation model, initially trained on a TPUv3-8 which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. It was then further fine-tuned on GPU. ## Dataset ### oscar (unshuffled deduplicated he) - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - [HomePage](https://data.statmt.org/cc-100/) This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages. It was constructed using the URLs and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises documents separated by double newlines and paragraphs within the same document separated by a newline. The data is generated using the open-source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources ## Training * Done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py) <BR> * I have made a list of items which might make it easier for others to use this script. The list was posted to [this discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351) * Further training was performed on GPU ## Usage #### Simple usage sample code ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline def main(): model_name = "Norod78/distilgpt2-base-pretrained-he" prompt_text = "שלום, קוראים לי" generated_max_length = 192 print("Loading model...") model = AutoModelForCausalLM.from_pretrained(model_name) print('Loading Tokenizer...') tokenizer = AutoTokenizer.from_pretrained(model_name) text_generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) print("Generating text...") result = text_generator(prompt_text, num_return_sequences=1, batch_size=1, do_sample=True, top_k=40, top_p=0.92, temperature=1, repetition_penalty=5.0, max_length=generated_max_length) print("result = " + str(result)) if __name__ == '__main__': main() ```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05d4\u05d0\u05d9\u05e9 \u05d4\u05d0\u05d7\u05e8\u05d5\u05df \u05e2\u05dc\u05d9 \u05d0\u05d3\u05de\u05d5\u05ea \u05d9\u05e9\u05d1 \u05dc\u05d1\u05d3 \u05d1\u05d7\u05d3\u05e8\u05d5 \u05db\u05e9\u05dc\u05e4\u05ea\u05e2 \u05e0\u05e9\u05de\u05e2\u05d4 \u05e0\u05e7\u05d9\u05e9\u05d4"}, {"text": "\u05e9\u05dc\u05d5\u05dd, \u05e7\u05e8\u05d5\u05d0\u05d9\u05dd \u05dc\u05d9"}, {"text": "\u05d4\u05d0\u05e8\u05d9 \u05e4\u05d5\u05d8\u05e8 \u05d7\u05d9\u05d9\u05da \u05d7\u05d9\u05d5\u05da \u05e0\u05d1\u05d5\u05da"}, {"text": "\u05d4\u05d7\u05ea\u05d5\u05dc \u05e9\u05dc\u05da \u05de\u05d0\u05d5\u05d3 \u05d7\u05de\u05d5\u05d3 \u05d5"}]}
Norod78/distilgpt2-base-pretrained-he
null
[ "transformers", "pytorch", "tf", "jax", "coreml", "onnx", "safetensors", "gpt2", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "he" ]
TAGS #transformers #pytorch #tf #jax #coreml #onnx #safetensors #gpt2 #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# distilgpt2-base-pretrained-he A tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU. ## Dataset ### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - HomePage This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources ## Training * Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR> * I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum * Further training was performed on GPU ## Usage #### Simple usage sample code
[ "# distilgpt2-base-pretrained-he\n\nA tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU.", "## Dataset", "### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "### CC-100 (he) - HomePage\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.", "### Misc\n* Hebrew Twitter\n* Wikipedia\n* Various other sources", "## Training\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR>\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU", "## Usage", "#### Simple usage sample code" ]
[ "TAGS\n#transformers #pytorch #tf #jax #coreml #onnx #safetensors #gpt2 #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# distilgpt2-base-pretrained-he\n\nA tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU.", "## Dataset", "### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "### CC-100 (he) - HomePage\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.", "### Misc\n* Hebrew Twitter\n* Wikipedia\n* Various other sources", "## Training\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR>\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU", "## Usage", "#### Simple usage sample code" ]
[ 61, 60, 4, 58, 100, 14, 66, 3, 8 ]
[ "TAGS\n#transformers #pytorch #tf #jax #coreml #onnx #safetensors #gpt2 #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# distilgpt2-base-pretrained-he\n\nA tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU.## Dataset### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.### CC-100 (he) - HomePage\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.### Misc\n* Hebrew Twitter\n* Wikipedia\n* Various other sources## Training\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR>\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU## Usage#### Simple usage sample code" ]
text-generation
transformers
# hebrew-bad_wiki-gpt_neo-tiny ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** The model developer notes that the model is > Hebrew nonsense generation model which produces really bad wiki-abstract text. - **Developed by:** [Doron Adler](https://github.com/Norod) - **Model Type:** Text Generation - **Language(s):** Hebrew - **License:** MIT - **Resources for more information:** - [GitHub Repo](https://github.com/Norod/hebrew-gpt_neo) - [HuggingFace Space](https://huggingface.co/spaces/Norod78/Hebrew-GPT-Neo-Small) ## Uses #### Direct Use This model can be used for text generation. #### Misuse and Out-of-scope Use ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Training #### Training Data [Hebrew Wikipedia Dump](https://dumps.wikimedia.org/hewiki/latest/) (hewiki abstract) from May 2020 #### Training Procedure This model was fine-tuned on [hebrew-gpt_neo-tiny](https://huggingface.co/Norod78/hebrew-gpt_neo-tiny), which was previously trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Fine-tuning on the wiki-abstract text was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen). ## Evaluation #### Configs Model configs for hebrew-gpt_neo-tiny are available on the [hebrew-gpt_neo model github](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) * **Activation Function:** gelu * **Number_Head:** 12 * **Number_Vocab:** 50257 * **Train batch size:** 250 * **Eval batch size:** 64 * **Predict batch size:** 1 ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). - **Hardware Type:** [More information needed] - **Hours used:** Unknown - **Cloud Provider:** GCP tpu-v8s - **Compute Region:** europe-west4 - **Carbon Emitted:** [More information needed] ## How to Get Started With the Model A Google Colab Notebook is also available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") ```
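Beyond loading the tokenizer and model as above, a minimal generation sketch (not from the original card; the sampling settings are illustrative assumptions, and the prompt is taken from the card's widget examples) could look like this:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the checkpoint as in the card's snippet.
tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny")

# Illustrative sampling settings; the prompt ("מתמטיקה:") comes from the card's widget examples.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = generator("מתמטיקה:", max_length=128, do_sample=True, top_k=40, top_p=0.95)
print(result[0]["generated_text"])
```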
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05de\u05ea\u05de\u05d8\u05d9\u05e7\u05d4:"}, {"text": "\u05e2\u05dc\u05d9\u05d9\u05ea \u05d4\u05de\u05db\u05d5\u05e0\u05d5\u05ea"}, {"text": "\u05d5\u05d9\u05e7\u05d9\u05e4\u05d3\u05d9\u05d4 \u05d4\u05e2\u05d1\u05e8\u05d9\u05ea"}, {"text": "\u05d4\u05d0\u05d9\u05e8\u05d5\u05d5\u05d9\u05d6\u05d9\u05d5\u05df \u05d4\u05d5\u05d0"}, {"text": "\u05d3\u05d5\u05d3 \u05d1\u05df-\u05d2\u05d5\u05e8\u05d9\u05d5\u05df \u05d4\u05d9\u05d4"}]}
Norod78/hebrew-bad_wiki-gpt_neo-tiny
null
[ "transformers", "pytorch", "coreml", "safetensors", "gpt_neo", "text-generation", "he", "arxiv:1910.09700", "arxiv:2105.09680", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.09700", "2105.09680" ]
[ "he" ]
TAGS #transformers #pytorch #coreml #safetensors #gpt_neo #text-generation #he #arxiv-1910.09700 #arxiv-2105.09680 #license-mit #autotrain_compatible #endpoints_compatible #region-us
# hebrew-bad_wiki-gpt_neo-tiny ## Table of Contents - Model Details - Uses - Risks, Limitations and Biases - Training - Evaluation - Environmental Impact - How to Get Started With the Model ## Model Details Model Description: The model developer notes that the model is > Hebrew nonsense generation model which produces really bad wiki-abstract text. - Developed by: Doron Adler - Model Type: Text Generation - Language(s): Hebrew - License: MIT - Resources for more information: - GitHub Repo - HuggingFace Space ## Uses #### Direct Use This model can be used for text generation. #### Misuse and Out-of-scope Use ## Risks, Limitations and Biases CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). ## Training #### Training Data Hebrew Wikipedia Dump (hewiki abstract) from May 2020 #### Training Procedure This model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. Fine-tuning on the wiki-absract text was done using @minimaxir's aitextgen. ## Evaluation #### Configs Model configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github * Activation Function: gelu * Number_Head: 12 * Number_Vocab: 50257 * Train batch size: 250 * Eval batch size: 64 * Predict batch size: 1 ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper. - Hardware Type: [More information needed] - Hours used: Unknown - Cloud Provider: GCP tpu-v8s - Compute Region: europe-west4 - Carbon Emitted: [More information needed] ## How to Get Started With the Model A Google Colab Notebook is also available here ​​
[ "# hebrew-bad_wiki-gpt_neo-tiny", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- How to Get Started With the Model", "## Model Details\nModel Description:\n\nThe model developer notes that the model is \n> Hebrew nonsense generation model which produces really bad wiki-abstract text. \n\n\n- Developed by: Doron Adler\n- Model Type: Text Generation\n- Language(s): Hebrew\n- License: MIT\n- Resources for more information:\n- GitHub Repo\n- HuggingFace Space", "## Uses", "#### Direct Use\n\nThis model can be used for text generation.", "#### Misuse and Out-of-scope Use", "## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).", "## Training", "#### Training Data\n Hebrew Wikipedia Dump (hewiki abstract) from May 2020", "#### Training Procedure\n\n\nThis model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. \n\nFine-tuning on the wiki-absract text was done using @minimaxir's aitextgen.", "## Evaluation", "#### Configs\n\nModel configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github \n\n* Activation Function: gelu\n* Number_Head: 12\n* Number_Vocab: 50257\n* Train batch size: 250\n* Eval batch size: 64\n* Predict batch size: 1", "## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n- Hardware Type: [More information needed]\n\n- Hours used: Unknown\n\n- Cloud Provider: GCP tpu-v8s\n\n- Compute Region: europe-west4\n\n- Carbon Emitted: [More information needed]", "## How to Get Started With the Model\n\nA Google Colab Notebook is also available here\n\n\n​​" ]
[ "TAGS\n#transformers #pytorch #coreml #safetensors #gpt_neo #text-generation #he #arxiv-1910.09700 #arxiv-2105.09680 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# hebrew-bad_wiki-gpt_neo-tiny", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- How to Get Started With the Model", "## Model Details\nModel Description:\n\nThe model developer notes that the model is \n> Hebrew nonsense generation model which produces really bad wiki-abstract text. \n\n\n- Developed by: Doron Adler\n- Model Type: Text Generation\n- Language(s): Hebrew\n- License: MIT\n- Resources for more information:\n- GitHub Repo\n- HuggingFace Space", "## Uses", "#### Direct Use\n\nThis model can be used for text generation.", "#### Misuse and Out-of-scope Use", "## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).", "## Training", "#### Training Data\n Hebrew Wikipedia Dump (hewiki abstract) from May 2020", "#### Training Procedure\n\n\nThis model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. \n\nFine-tuning on the wiki-absract text was done using @minimaxir's aitextgen.", "## Evaluation", "#### Configs\n\nModel configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github \n\n* Activation Function: gelu\n* Number_Head: 12\n* Number_Vocab: 50257\n* Train batch size: 250\n* Eval batch size: 64\n* Predict batch size: 1", "## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n- Hardware Type: [More information needed]\n\n- Hours used: Unknown\n\n- Cloud Provider: GCP tpu-v8s\n\n- Compute Region: europe-west4\n\n- Carbon Emitted: [More information needed]", "## How to Get Started With the Model\n\nA Google Colab Notebook is also available here\n\n\n​​" ]
[ 65, 14, 32, 70, 3, 15, 13, 71, 3, 18, 61, 3, 76, 82, 18 ]
[ "TAGS\n#transformers #pytorch #coreml #safetensors #gpt_neo #text-generation #he #arxiv-1910.09700 #arxiv-2105.09680 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# hebrew-bad_wiki-gpt_neo-tiny## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- How to Get Started With the Model## Model Details\nModel Description:\n\nThe model developer notes that the model is \n> Hebrew nonsense generation model which produces really bad wiki-abstract text. \n\n\n- Developed by: Doron Adler\n- Model Type: Text Generation\n- Language(s): Hebrew\n- License: MIT\n- Resources for more information:\n- GitHub Repo\n- HuggingFace Space## Uses#### Direct Use\n\nThis model can be used for text generation.#### Misuse and Out-of-scope Use## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).## Training#### Training Data\n Hebrew Wikipedia Dump (hewiki abstract) from May 2020#### Training Procedure\n\n\nThis model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. \n\nFine-tuning on the wiki-absract text was done using @minimaxir's aitextgen.## Evaluation#### Configs\n\nModel configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github \n\n* Activation Function: gelu\n* Number_Head: 12\n* Number_Vocab: 50257\n* Train batch size: 250\n* Eval batch size: 64\n* Predict batch size: 1## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n- Hardware Type: [More information needed]\n\n- Hours used: Unknown\n\n- Cloud Provider: GCP tpu-v8s\n\n- Compute Region: europe-west4\n\n- Carbon Emitted: [More information needed]## How to Get Started With the Model\n\nA Google Colab Notebook is also available here\n\n\n​​" ]
text-generation
transformers
# hebrew-gpt_neo-small A Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each model in this family was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. ## Datasets 1. An assortment of various Hebrew corpora - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ) 2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. 3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew) Created by Conneau & Wenzek et al. in 2020, CC100-Hebrew is one of the 100 corpora of monolingual data processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1 GB of Hebrew text. ## Training Config Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-small/configs) <BR> ## Usage ### Google Colab Notebook Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-small/Norod78_hebrew_gpt_neo_small_Colab.ipynb) <BR> #### Simple usage sample code ```python !pip install tokenizers==0.10.2 transformers==4.6.0 from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-small") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-small", pad_token_id=tokenizer.eos_token_id) prompt_text = "אני אוהב שוקולד ועוגות" max_len = 512 sample_output_num = 3 seed = 1000 import numpy as np import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count() print(f"device: {device}, n_gpu: {n_gpu}") np.random.seed(seed) torch.manual_seed(seed) if n_gpu > 0: torch.cuda.manual_seed_all(seed) model.to(device) encoded_prompt = tokenizer.encode( prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt print("input_ids = " + str(input_ids)) if input_ids != None: max_len += len(encoded_prompt[0]) if max_len > 2048: max_len = 2048 print("Updated max_len = " + str(max_len)) stop_token = "<|endoftext|>" new_lines = "\n\n\n" sample_outputs = model.generate( input_ids, do_sample=True, max_length=max_len, top_k=50, top_p=0.95, num_return_sequences=sample_output_num ) print(100 * '-' + "\n\t\tOutput\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): text = tokenizer.decode(sample_output, skip_special_tokens=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Remove all text after 3 newlines text = text[: text.find(new_lines) if new_lines else None] print("\n{}: {}".format(i, text)) print("\n" + 100 * '-') ```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e2\u05d5\u05d3 \u05d1\u05d9\u05de\u05d9 \u05e7\u05d3\u05dd"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d3\u05d5\u05e8\u05d5\u05df \u05d5\u05d0\u05e0\u05d9 \u05de\u05e2\u05d5\u05e0\u05d9\u05d9\u05df \u05dc"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d0\u05d9\u05e6\u05d9\u05e7 \u05d5\u05d0\u05e0\u05d9 \u05d7\u05d5\u05e9\u05d1 \u05e9"}, {"text": "\u05d4\u05d7\u05ea\u05d5\u05dc \u05e9\u05dc\u05da \u05de\u05d0\u05d5\u05d3 \u05d7\u05de\u05d5\u05d3 \u05d5"}]}
Norod78/hebrew-gpt_neo-small
null
[ "transformers", "pytorch", "jax", "onnx", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "he" ]
TAGS #transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# hebrew-gpt_neo-small Hebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. ## Datasets 1. An assortment of various Hebrew corpuses - I have made it available here 2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. 3. CC100-Hebrew Dataset Homepage Created by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language. ## Training Config Available here <BR> ## Usage ### Google Colab Notebook Available here <BR> #### Simple usage sample code
[ "# hebrew-gpt_neo-small\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.", "## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n\n3. CC100-Hebrew Dataset Homepage \n\nCreated by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language.", "## Training Config\n\nAvailable here <BR>", "## Usage", "### Google Colab Notebook\n\nAvailable here <BR>", "#### Simple usage sample code" ]
[ "TAGS\n#transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# hebrew-gpt_neo-small\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.", "## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n\n3. CC100-Hebrew Dataset Homepage \n\nCreated by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language.", "## Training Config\n\nAvailable here <BR>", "## Usage", "### Google Colab Notebook\n\nAvailable here <BR>", "#### Simple usage sample code" ]
[ 50, 52, 162, 11, 3, 12, 8 ]
[ "TAGS\n#transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# hebrew-gpt_neo-small\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n\n3. CC100-Hebrew Dataset Homepage \n\nCreated by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language.## Training Config\n\nAvailable here <BR>## Usage### Google Colab Notebook\n\nAvailable here <BR>#### Simple usage sample code" ]
text-generation
transformers
# hebrew-gpt_neo-tiny

Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each model was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.

## Datasets

1. An assortment of various Hebrew corpora - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)

2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)

The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.

## Training Config

Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) <BR>

## Usage

### Google Colab Notebook

Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) <BR>

#### Simple usage sample code

```python
!pip install tokenizers==0.10.2 transformers==4.6.0

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-tiny")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-tiny", pad_token_id=tokenizer.eos_token_id)

prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if not torch.cuda.is_available() else torch.cuda.device_count()

print(f"device: {device}, n_gpu: {n_gpu}")

np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(
    prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)

if encoded_prompt.size()[-1] == 0:
    input_ids = None
else:
    input_ids = encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids is not None:
    max_len += len(encoded_prompt[0])
    if max_len > 1024:
        max_len = 1024

print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    text = tokenizer.decode(sample_output, skip_special_tokens=True)

    # Remove all text after the stop token
    text = text[: text.find(stop_token) if stop_token else None]

    # Remove all text after 3 newlines
    text = text[: text.find(new_lines) if new_lines else None]

    print("\n{}: {}".format(i, text))
    print("\n" + 100 * '-')
```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e2\u05d5\u05d3 \u05d1\u05d9\u05de\u05d9 \u05e7\u05d3\u05dd"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d3\u05d5\u05e8\u05d5\u05df \u05d5\u05d0\u05e0\u05d9 \u05de\u05e2\u05d5\u05e0\u05d9\u05d9\u05df \u05dc"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d0\u05d9\u05e6\u05d9\u05e7 \u05d5\u05d0\u05e0\u05d9 \u05d7\u05d5\u05e9\u05d1 \u05e9"}, {"text": "\u05d4\u05d7\u05ea\u05d5\u05dc \u05e9\u05dc\u05da \u05de\u05d0\u05d5\u05d3 \u05d7\u05de\u05d5\u05d3 \u05d5"}]}
Norod78/hebrew-gpt_neo-tiny
null
[ "transformers", "pytorch", "jax", "onnx", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "he" ]
TAGS #transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# hebrew-gpt_neo-tiny Hebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. ## Datasets 1. An assortment of various Hebrew corpuses - I have made it available here 2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ## Training Config Available here <BR> ## Usage ### Google Colab Notebook Available here <BR> #### Simple usage sample code
[ "# hebrew-gpt_neo-tiny\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.", "## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "## Training Config\n\nAvailable here <BR>", "## Usage", "### Google Colab Notebook\n\nAvailable here <BR>", "#### Simple usage sample code" ]
[ "TAGS\n#transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# hebrew-gpt_neo-tiny\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.", "## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "## Training Config\n\nAvailable here <BR>", "## Usage", "### Google Colab Notebook\n\nAvailable here <BR>", "#### Simple usage sample code" ]
[ 50, 52, 79, 11, 3, 12, 8 ]
[ "TAGS\n#transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# hebrew-gpt_neo-tiny\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.## Training Config\n\nAvailable here <BR>## Usage### Google Colab Notebook\n\nAvailable here <BR>#### Simple usage sample code" ]
text-generation
transformers
# hebrew-gpt_neo-xl-poetry

Hebrew poetry text generation model, fine-tuned on [hebrew-gpt_neo-xl](https://huggingface.co/Norod78/hebrew-gpt_neo-xl).

## Datasets

An assortment of various Hebrew books, magazines and poetry corpora

## Training Config

Similar to [this one](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR>

## Usage

### Google Colab Notebook

Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR>

#### Simple usage sample code

```python
!pip install tokenizers==0.10.3 transformers==4.8.0

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry", pad_token_id=tokenizer.eos_token_id)

prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if not torch.cuda.is_available() else torch.cuda.device_count()

print(f"device: {device}, n_gpu: {n_gpu}")

np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(
    prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)

if encoded_prompt.size()[-1] == 0:
    input_ids = None
else:
    input_ids = encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids is not None:
    max_len += len(encoded_prompt[0])
    if max_len > 2048:
        max_len = 2048

print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    text = tokenizer.decode(sample_output, skip_special_tokens=True)

    # Remove all text after the stop token
    text = text[: text.find(stop_token) if stop_token else None]

    # Remove all text after 3 newlines
    text = text[: text.find(new_lines) if new_lines else None]

    print("\n{}: {}".format(i, text))
    print("\n" + 100 * '-')
```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e2\u05d5\u05d3 \u05d1\u05d9\u05de\u05d9 \u05e7\u05d3\u05dd"}, {"text": "\u05ea\u05e8\u05d9\u05e1\u05e8 \u05de\u05db\u05e9\u05e4\u05d5\u05ea \u05e1\u05d2"}, {"text": "\n\n\u05d4\u05d0\u05d9\u05e9 \u05d4\u05d0\u05d7\u05e8\u05d5\u05df \u05d1\u05e2\u05d5\u05dc\u05dd /"}, {"text": "\u05e4\u05e2\u05dd \u05d0\u05d7\u05ea, \u05dc\u05e4\u05e0\u05d9 \u05e9\u05e0\u05d9\u05dd \u05e8\u05d1\u05d5\u05ea"}, {"text": "\u05d4\u05e8\u05de\u05d9\u05d5\u05e0\u05d9 \u05d4\u05e1\u05ea\u05d9\u05e8\u05d4 \u05d0\u05ea"}, {"text": "\u05dc\u05e4\u05ea\u05e2, \u05d0\u05d5\u05e8 \u05d9\u05e8\u05d5\u05e7"}]}
Norod78/hebrew-gpt_neo-xl-poetry
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "he" ]
TAGS #transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us
# hebrew-gpt_neo-xl-poetry Hebrew poetry text generation model which was fine tuned upon on hebrew-gpt_neo-xl. ## Datasets An assortment of various Hebrew books, magazines and poetry corpuses ## Training Config Similar to this one <BR> ## Usage ### Google Colab Notebook Available here <BR> #### Simple usage sample code
[ "# hebrew-gpt_neo-xl-poetry\n\nHebrew poetry text generation model which was fine tuned upon on hebrew-gpt_neo-xl.", "## Datasets\n\nAn assortment of various Hebrew books, magazines and poetry corpuses", "## Training Config\n\nSimilar to this one <BR>", "## Usage", "### Google Colab Notebook\n\nAvailable here <BR>", "#### Simple usage sample code" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# hebrew-gpt_neo-xl-poetry\n\nHebrew poetry text generation model which was fine tuned upon on hebrew-gpt_neo-xl.", "## Datasets\n\nAn assortment of various Hebrew books, magazines and poetry corpuses", "## Training Config\n\nSimilar to this one <BR>", "## Usage", "### Google Colab Notebook\n\nAvailable here <BR>", "#### Simple usage sample code" ]
[ 43, 31, 17, 13, 3, 12, 8 ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# hebrew-gpt_neo-xl-poetry\n\nHebrew poetry text generation model which was fine tuned upon on hebrew-gpt_neo-xl.## Datasets\n\nAn assortment of various Hebrew books, magazines and poetry corpuses## Training Config\n\nSimilar to this one <BR>## Usage### Google Colab Notebook\n\nAvailable here <BR>#### Simple usage sample code" ]
text-generation
transformers
# hebrew-gpt_neo-xl

Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each model was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.

## Datasets

1. An assortment of various Hebrew corpora - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)

2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)

The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.

3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew)

Created by Conneau & Wenzek et al. in 2020, CC100-Hebrew is one of the 100 monolingual corpora processed from the January-December 2018 Common Crawl snapshots of the CC-Net repository. The Hebrew corpus is 6.1 GB in size.

## Training Config

Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR>

## Usage

### Google Colab Notebook

Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR>

#### Simple usage sample code

```python
!pip install tokenizers==0.10.3 transformers==4.8.0

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl", pad_token_id=tokenizer.eos_token_id)

prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if not torch.cuda.is_available() else torch.cuda.device_count()

print(f"device: {device}, n_gpu: {n_gpu}")

np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(
    prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)

if encoded_prompt.size()[-1] == 0:
    input_ids = None
else:
    input_ids = encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids is not None:
    max_len += len(encoded_prompt[0])
    if max_len > 2048:
        max_len = 2048

print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    text = tokenizer.decode(sample_output, skip_special_tokens=True)

    # Remove all text after the stop token
    text = text[: text.find(stop_token) if stop_token else None]

    # Remove all text after 3 newlines
    text = text[: text.find(new_lines) if new_lines else None]

    print("\n{}: {}".format(i, text))
    print("\n" + 100 * '-')
```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e2\u05d5\u05d3 \u05d1\u05d9\u05de\u05d9 \u05e7\u05d3\u05dd"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d3\u05d5\u05e8\u05d5\u05df \u05d5\u05d0\u05e0\u05d9 \u05de\u05e2\u05d5\u05e0\u05d9\u05d9\u05df \u05dc"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d0\u05d9\u05e6\u05d9\u05e7 \u05d5\u05d0\u05e0\u05d9 \u05d7\u05d5\u05e9\u05d1 \u05e9"}, {"text": "\u05d4\u05d7\u05ea\u05d5\u05dc \u05e9\u05dc\u05da \u05de\u05d0\u05d5\u05d3 \u05d7\u05de\u05d5\u05d3 \u05d5"}, {"text": "\u05d5\u05d1\u05d3\u05e8\u05da \u05e8\u05d0\u05d9\u05e0\u05d5 \u05e9\u05d4\u05d2\u05df"}]}
Norod78/hebrew-gpt_neo-xl
null
[ "transformers", "pytorch", "jax", "onnx", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "he" ]
TAGS #transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# hebrew-gpt_neo-xl Hebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. ## Datasets 1. An assortment of various Hebrew corpuses - I have made it available here 2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. 3. CC100-Hebrew Dataset Homepage Created by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language. ## Training Config Available here <BR> ## Usage ### Google Colab Notebook Available here <BR> #### Simple usage sample code
[ "# hebrew-gpt_neo-xl\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.", "## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n\n3. CC100-Hebrew Dataset Homepage \n\nCreated by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language.", "## Training Config\n\nAvailable here <BR>", "## Usage", "### Google Colab Notebook\n\nAvailable here <BR>", "#### Simple usage sample code" ]
[ "TAGS\n#transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# hebrew-gpt_neo-xl\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.", "## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n\n3. CC100-Hebrew Dataset Homepage \n\nCreated by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language.", "## Training Config\n\nAvailable here <BR>", "## Usage", "### Google Colab Notebook\n\nAvailable here <BR>", "#### Simple usage sample code" ]
[ 50, 52, 162, 11, 3, 12, 8 ]
[ "TAGS\n#transformers #pytorch #jax #onnx #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# hebrew-gpt_neo-xl\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n\n3. CC100-Hebrew Dataset Homepage \n\nCreated by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language.## Training Config\n\nAvailable here <BR>## Usage### Google Colab Notebook\n\nAvailable here <BR>#### Simple usage sample code" ]
text-generation
transformers
# hebrew_poetry-gpt_neo-small

Hebrew poetry text generation model, fine-tuned on [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Fine-tuning was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen).

## Datasets

1. Text from [New stage](http://stage.co.il/)
2. A dataset containing Hebrew lyrics
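This card ships no sample code; as a minimal sketch that is *not* part of the original card, the checkpoint can presumably be loaded with the standard `transformers` text-generation pipeline under the repository id `Norod78/hebrew_poetry-gpt_neo-small`. The Hebrew prompt and the sampling settings below are illustrative placeholders, loosely mirroring the sibling hebrew-gpt_neo cards.

```python
# Assumed usage sketch (not from the original card): standard text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="Norod78/hebrew_poetry-gpt_neo-small")

prompt = "פעם אחת לפני שנ"  # placeholder prompt, similar to the widget examples
outputs = generator(
    prompt,
    do_sample=True,        # sampling settings are assumptions, not documented values
    top_k=50,
    top_p=0.95,
    max_length=100,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```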
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e4\u05e2\u05dd \u05d0\u05d7\u05ea \u05dc\u05e4\u05e0\u05d9 \u05e9\u05e0"}, {"text": "\u05d4\u05d9\u05dd \u05db\u05d7\u05d5\u05dc \u05d5\u05d0\u05e0\u05d9 \u05d7"}, {"text": "\u05e9\u05dd \u05d4\u05d9\u05e6\u05d9\u05e8\u05d4:"}, {"text": "\u05db\u05e9\u05d4\u05de\u05db\u05d5\u05e0\u05d5\u05ea"}]}
Norod78/hebrew_poetry-gpt_neo-small
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "he" ]
TAGS #transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us
# hebrew_poetry-gpt_neo-small Hebrew poetry text generation model, fined tuned upon hebrew-gpt_neo-small which was trained using EleutherAI's gpt-neo. Fine-tuning was done using @minimaxir's aitextgen. ## Datasets 1. Text from New stage 2. A dataset containing Hebrew lyrics
[ "# hebrew_poetry-gpt_neo-small\n\nHebrew poetry text generation model, fined tuned upon hebrew-gpt_neo-small which was trained using EleutherAI's gpt-neo. \nFine-tuning was done using @minimaxir's aitextgen.", "## Datasets\n\n1. Text from New stage\n2. A dataset containing Hebrew lyrics" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# hebrew_poetry-gpt_neo-small\n\nHebrew poetry text generation model, fined tuned upon hebrew-gpt_neo-small which was trained using EleutherAI's gpt-neo. \nFine-tuning was done using @minimaxir's aitextgen.", "## Datasets\n\n1. Text from New stage\n2. A dataset containing Hebrew lyrics" ]
[ 43, 59, 19 ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# hebrew_poetry-gpt_neo-small\n\nHebrew poetry text generation model, fined tuned upon hebrew-gpt_neo-small which was trained using EleutherAI's gpt-neo. \nFine-tuning was done using @minimaxir's aitextgen.## Datasets\n\n1. Text from New stage\n2. A dataset containing Hebrew lyrics" ]
text-generation
transformers
# hebrew_stories-gpt_neo-small

Hebrew story-text generation model, fine-tuned on [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo).

## Dataset

Text from various Hebrew books
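The card stops at the dataset description. A minimal generation sketch, modelled on the sibling hebrew-gpt_neo cards and therefore an assumption rather than part of this card, could look like the following (the story prompt is a placeholder):

```python
# Assumed usage, adapted from the other hebrew-gpt_neo model cards in this collection.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew_stories-gpt_neo-small")
model = AutoModelForCausalLM.from_pretrained(
    "Norod78/hebrew_stories-gpt_neo-small", pad_token_id=tokenizer.eos_token_id
)

prompt = "פעם אחת, לפני שנים רבות"  # placeholder story prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    output = model.generate(input_ids, do_sample=True, top_k=50, top_p=0.95, max_length=128)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```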
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05ea\u05e8\u05d9\u05e1\u05e8 \u05de\u05db\u05e9\u05e4\u05d5\u05ea \u05e1\u05d2"}, {"text": "\n\n\u05d4\u05d0\u05d9\u05e9 \u05d4\u05d0\u05d7\u05e8\u05d5\u05df \u05d1\u05e2\u05d5\u05dc\u05dd /"}, {"text": "\u05e4\u05e2\u05dd \u05d0\u05d7\u05ea, \u05dc\u05e4\u05e0\u05d9 \u05e9\u05e0\u05d9\u05dd \u05e8\u05d1\u05d5\u05ea"}, {"text": "\u05d4\u05e8\u05de\u05d9\u05d5\u05e0\u05d9 \u05d4\u05e1\u05ea\u05d9\u05e8\u05d4 \u05d0\u05ea"}, {"text": "\u05dc\u05e4\u05ea\u05e2, \u05d0\u05d5\u05e8 \u05d9\u05e8\u05d5\u05e7"}]}
Norod78/hebrew_stories-gpt_neo-small
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "he" ]
TAGS #transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us
# hebrew_stories-gpt_neo-small Hebrew story-text generation model, fined tuned upon hebrew-gpt_neo-small which was trained using EleutherAI's gpt-neo. ## Dataset Text from various Hebrew books
[ "# hebrew_stories-gpt_neo-small\n\nHebrew story-text generation model, fined tuned upon hebrew-gpt_neo-small which was trained using EleutherAI's gpt-neo.", "## Dataset\n\nText from various Hebrew books" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# hebrew_stories-gpt_neo-small\n\nHebrew story-text generation model, fined tuned upon hebrew-gpt_neo-small which was trained using EleutherAI's gpt-neo.", "## Dataset\n\nText from various Hebrew books" ]
[ 43, 44, 9 ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #gpt_neo #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# hebrew_stories-gpt_neo-small\n\nHebrew story-text generation model, fined tuned upon hebrew-gpt_neo-small which was trained using EleutherAI's gpt-neo.## Dataset\n\nText from various Hebrew books" ]
text-generation
transformers
# hewiki-articles-distilGPT2py-il

## A tiny GPT2 model for generating Hebrew text

A distilGPT2 sized model. <br>
Training data was hewiki-20200701-pages-articles-multistream.xml.bz2 from https://dumps.wikimedia.org/hewiki/20200701/ <br>
The XML was converted to plain text using Wikipedia Extractor http://medialab.di.unipi.it/wiki/Wikipedia_Extractor <br>
I then added <|startoftext|> and <|endoftext|> markers and deleted empty lines. <br>

#### How to use

```python
import torch
import torch.nn as nn
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("Norod78/hewiki-articles-distilGPT2py-il")
model = GPT2LMHeadModel.from_pretrained("Norod78/hewiki-articles-distilGPT2py-il").eval()

bos_token = tokenizer.bos_token  # Beginning-of-sentence token
eos_token = tokenizer.eos_token  # End-of-sentence token

def generate_word(model, tokens_tensor, temperature=1.0):
    """
    Sample a word given a tensor of tokens of previous words from a model.
    Temperature is used for controlling randomness. If temperature==0 we simply take a greedy arg max.
    Otherwise, we sample from a multinomial distribution over the temperature-scaled logits,
    which keeps some randomness and helps escape repetitions.
    """
    with torch.no_grad():
        outputs = model(tokens_tensor)
        predictions = outputs[0]
        if temperature > 0:
            # Make the distribution more or less skewed based on the temperature
            predictions = outputs[0] / temperature
            # Sample from the distribution
            softmax = nn.Softmax(dim=0)
            predicted_index = torch.multinomial(softmax(predictions[0, -1, :]), 1).item()
        else:
            # Simply take the arg-max of the distribution
            predicted_index = torch.argmax(predictions[0, -1, :]).item()
        # Decode the encoding to the corresponding word
        predicted_text = tokenizer.decode([predicted_index])
        return predicted_text

def generate_sentence(model, tokenizer, initial_text, temperature=1.0):
    """ Generate a sentence given some initial text using a model and a tokenizer.
    Returns the new sentence. """

    text = ""  # Text generated so far

    # We avoid an infinite loop by setting a maximum range
    for i in range(0, 84):
        indexed_tokens = tokenizer.encode(initial_text + text)

        # Convert indexed tokens into a PyTorch tensor
        tokens_tensor = torch.tensor([indexed_tokens])

        new_word = generate_word(model, tokens_tensor, temperature=temperature)

        # The temperature is nudged by 0.008 toward a ceiling of 0.996 with each generated word.
        # It is never driven to 0.0, so some randomness is kept in.
        if temperature < (1 - 0.008):
            temperature += 0.008
        else:
            temperature = 0.996

        text = text + new_word

        # Stop generating new words when we reach an end-of-text marker or a separator
        if eos_token in new_word:
            # Return the new sentence and whether the text is done
            return (text.replace(eos_token, "").strip(), True)
        elif '/' in new_word:
            return (text.strip(), False)
        elif bos_token in new_word:
            return (text.replace(bos_token, "").strip(), False)

    return (text, True)

for output_num in range(1, 5):
    init_text = "בוקר טוב"
    text = bos_token + init_text
    for i in range(0, 84):
        sentence = generate_sentence(model, tokenizer, text, temperature=0.9)
        text = init_text + sentence[0]
        print(text)
        if sentence[1]:
            break
```
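The data-preparation steps above (converting the dump to plain text, adding <|startoftext|>/<|endoftext|> markers and deleting empty lines) are described only in prose. A rough, hypothetical sketch of that kind of marker-adding step is shown below; the file names are placeholders, the `<doc ...>`/`</doc>` handling assumes the default Wikipedia Extractor output, and this is not the script actually used.

```python
# Hypothetical preprocessing sketch (assumptions: WikiExtractor-style input, placeholder file names):
# wrap each extracted article with the <|startoftext|>/<|endoftext|> markers and drop empty lines.
def add_markers(in_path="hewiki_extracted.txt", out_path="hewiki_train.txt"):
    with open(in_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as dst:
        inside_doc = False
        for line in src:
            line = line.strip()
            if not line:                      # delete empty lines
                continue
            if line.startswith("<doc"):       # WikiExtractor opens each article with a <doc ...> tag
                dst.write("<|startoftext|>")
                inside_doc = True
            elif line.startswith("</doc>"):   # ...and closes it with </doc>
                dst.write("<|endoftext|>\n")
                inside_doc = False
            elif inside_doc:
                dst.write(line + "\n")

if __name__ == "__main__":
    add_markers()
```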
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "<|startoftext|>\u05d4\u05d7\u05d5\u05e7 \u05d4\u05e9\u05e0\u05d9 \u05e9\u05dc \u05de\u05d5\u05e2\u05d3\u05d5\u05df \u05e7\u05e8\u05d1 \u05d4\u05d5\u05d0"}, {"text": "<|startoftext|>\u05e8\u05d0\u05e9 \u05d4\u05de\u05de\u05e9\u05dc\u05d4 \u05d1\u05df \u05d2\u05d5\u05e8\u05d9\u05d5\u05df"}, {"text": "<|startoftext|>\u05dc\u05de\u05d9\u05d3\u05ea \u05de\u05db\u05d5\u05e0\u05d4 (\u05e1\u05e8\u05d8)"}, {"text": "<|startoftext|>\u05de\u05e0\u05e9\u05d4 \u05e4\u05d5\u05de\u05e4\u05e8\u05e0\u05d9\u05e7\u05dc"}, {"text": "<|startoftext|>\u05d0\u05d9 \u05e9\u05d5\u05d5\u05d9\u05d5\u05df "}]}
Norod78/hewiki-articles-distilGPT2py-il
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "gpt2", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "he" ]
TAGS #transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# hewiki-articles-distilGPT2py-il ## A tiny GPT2 model for generating Hebrew text A distilGPT2 sized model. <br> Training data was URL.bz2 from URL <br> XML has been converted to plain text using Wikipedia Extractor URL <br> I then added <|startoftext|> and <|endoftext|> markers and deleted empty lines. <br> #### How to use
[ "# hewiki-articles-distilGPT2py-il", "## A tiny GPT2 model for generating Hebrew text\n\nA distilGPT2 sized model. <br>\nTraining data was URL.bz2 from URL <br>\nXML has been converted to plain text using Wikipedia Extractor URL <br>\nI then added <|startoftext|> and <|endoftext|> markers and deleted empty lines. <br>", "#### How to use" ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# hewiki-articles-distilGPT2py-il", "## A tiny GPT2 model for generating Hebrew text\n\nA distilGPT2 sized model. <br>\nTraining data was URL.bz2 from URL <br>\nXML has been converted to plain text using Wikipedia Extractor URL <br>\nI then added <|startoftext|> and <|endoftext|> markers and deleted empty lines. <br>", "#### How to use" ]
[ 51, 16, 85, 7 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #he #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# hewiki-articles-distilGPT2py-il## A tiny GPT2 model for generating Hebrew text\n\nA distilGPT2 sized model. <br>\nTraining data was URL.bz2 from URL <br>\nXML has been converted to plain text using Wikipedia Extractor URL <br>\nI then added <|startoftext|> and <|endoftext|> markers and deleted empty lines. <br>#### How to use" ]
text-generation
transformers
# Lelouch DialoGPT model
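The card itself contains no usage example. Since the tags indicate a DialoGPT-style conversational GPT-2 checkpoint, a plausible chat loop (an assumption based on the common DialoGPT usage pattern, not an official example) is sketched below:

```python
# Assumed multi-turn chat sketch for a DialoGPT-style checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Nova/DialoGPT-medium-Lelouch")
model = AutoModelForCausalLM.from_pretrained("Nova/DialoGPT-medium-Lelouch")

chat_history_ids = None
for step in range(4):
    user_input = input(">> User: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

    # Append the new user turn to the running conversation history.
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)

    chat_history_ids = model.generate(bot_input_ids, max_length=1000,
                                      pad_token_id=tokenizer.eos_token_id)

    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],
                             skip_special_tokens=True)
    print("Bot:", reply)
```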
{"tags": ["conversational"]}
Nova/DialoGPT-medium-Lelouch
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#Lelouch DialoGPT model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
NovaChrono/twervy
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# My Awesome Model" ]
[ 39, 4 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# My Awesome Model" ]
text-generation
transformers
# Genji-JP 6B Please check our blog post for more details, samples, evaluations and more: [Blogpost](https://blog.novelai.net/data-efficient-language-transfer-with-gpt-j-45daedaaf35a) ## Model Description Genji-JP 6B is a model finetuned on our Japanese storytelling dataset based on EleutherAI's GPT-J 6B model. This particular model is trained on Japanese web novels. | Hyperparameter | Value | |-------------------|--------| | n_parameters | 6,053,381,344 | | n_layers | 28* | | d_model | 4,096 | | d_ff | 16,384 | | n_heads | 16 | | d_head | 256 | | n_ctx | 2,048 | | n_vocab | 50,400 (same tokenizer as GPT-2/3) | | position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | `*` each layer consists of one feedforward block and one self attention block The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Training data GPT-J 6B was pretrained on the [Pile](pile.eleuther.ai), a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it's finetuned on our Japanese storytelling dataset. Check our blog post for more details. ### How to use ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-jp", torch_dtype=torch.float16, low_cpu_mem_usage=True).eval().cuda() text = '''あらすじ:あなたは異世界に転生してしまいました。勇者となって、仲間を作り、異世界を冒険しよう! *** 転生すると、ある能力を手に入れていた。それは、''' tokens = tokenizer(text, return_tensors="pt").input_ids generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, temperature=1, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id) last_tokens = generated_tokens[0] generated_text = tokenizer.decode(last_tokens).replace("�", "") print("Generation:\n" + generated_text) ``` When run, produces output like this: ``` Generation: あらすじ:あなたは異世界に転生してしまいました。勇者となって、仲間を作り、異世界を冒険しよう! *** 転生すると、ある能力を手に入れていた。それは、『予知』だ。過去から未来のことを、誰も知らない出来事も含めて見通すことが出来る。 悪魔の欠片と呼ばれる小さな結晶を取り込んで、使役することが出来る。人を惹きつけ、堕落させる。何より、俺は男なんて居なかったし、女に興味もない。……そんなクズの片棒を担ぎ上げる奴が多くなると思うと、ちょっと苦しい。 だが、一部の人間には協力者を得ることが出来る。目立たない街にある寺の中で、常に家に引きこもっている老人。そんなヤツの魂をコントロールすることが出来るのだ。便利な能力だ。しかし、裏切り者は大勢いる。気を抜けば、狂う。だから注意が必要だ。 ――「やってやるよ」  アーロンは不敵に笑った。この ``` ## Acknowledgements This project was possible because of the compute provided by the [TPU Research Cloud](https://sites.research.google/trc/) Thanks [EleutherAI](https://eleuther.ai/) for pretraining the GPT-J 6B model. Thanks to everyone who contributed to this project! - [Finetune](https://github.com/finetuneanon) - [Aero](https://github.com/AeroScripts) - [Kurumuz](https://github.com/kurumuz)
{"language": ["ja", "en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"]}
NovelAI/genji-jp
null
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "ja", "en", "arxiv:2104.09864", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.09864" ]
[ "ja", "en" ]
TAGS #transformers #pytorch #gptj #text-generation #causal-lm #ja #en #arxiv-2104.09864 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Genji-JP 6B =========== Please check our blog post for more details, samples, evaluations and more: Blogpost Model Description ----------------- Genji-JP 6B is a model finetuned on our Japanese storytelling dataset based on EleutherAI's GPT-J 6B model. This particular model is trained on Japanese web novels. '\*' each layer consists of one feedforward block and one self attention block The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. Training data ------------- GPT-J 6B was pretrained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it's finetuned on our Japanese storytelling dataset. Check our blog post for more details. ### How to use When run, produces output like this: Acknowledgements ---------------- This project was possible because of the compute provided by the TPU Research Cloud Thanks EleutherAI for pretraining the GPT-J 6B model. Thanks to everyone who contributed to this project! * Finetune * Aero * Kurumuz
[ "### How to use\n\n\nWhen run, produces output like this:\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud\n\n\nThanks EleutherAI for pretraining the GPT-J 6B model.\n\n\nThanks to everyone who contributed to this project!\n\n\n* Finetune\n* Aero\n* Kurumuz" ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #causal-lm #ja #en #arxiv-2104.09864 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nWhen run, produces output like this:\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud\n\n\nThanks EleutherAI for pretraining the GPT-J 6B model.\n\n\nThanks to everyone who contributed to this project!\n\n\n* Finetune\n* Aero\n* Kurumuz" ]
[ 62, 84 ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #causal-lm #ja #en #arxiv-2104.09864 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nWhen run, produces output like this:\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud\n\n\nThanks EleutherAI for pretraining the GPT-J 6B model.\n\n\nThanks to everyone who contributed to this project!\n\n\n* Finetune\n* Aero\n* Kurumuz" ]
null
null
# Genji-python 6B

For example usage or to easily use the model you can check our colab notebook:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)

## Model Description

Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4 GB in size.
The split model has its checkpoint split into shards, which uses less system RAM while loading and makes loading faster. This model needs more effort to set up, as you need to install git-lfs and pull the repo.

| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

`*` each layer consists of one feedforward block and one self attention block

The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.

## Training data

GPT-J 6B was pretrained on the [Pile](pile.eleuther.ai), a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it was finetuned on the Python code that was taken from the Pile.

## Training procedure

Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06.

## Intended Use

This model is trained to assist with writing Python code and for having fun trying weird stuff with it.

### How to use

This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.
For now, you need to use this fork:
[Fork](https://github.com/finetuneanon/transformers)

to install with pip:
```bash
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
```

**git-lfs** also needs to be installed; on Ubuntu:
```bash
apt install git-lfs
```

after it's installed, initialize git-lfs:
```bash
git lfs install
```

then clone this repo:
```bash
git clone https://huggingface.co/NovelAI/genji-python-6B-split
```

Now we can load the model.

We recommend using the model in FP16. That way, it fits on 16GB VRAM cards.

How to use:
```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GPTNeoForCausalLM,
)

model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")

text = '''def print_customer_name'''

tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(
    tokens.long().cuda(),
    use_cache=True,
    do_sample=True,
    top_k=50,
    temperature=0.3,
    top_p=0.9,
    repetition_penalty=1.125,
    min_length=1,
    max_length=len(tokens[0]) + 400,
    pad_token_id=tokenizer.eos_token_id,
)
last_tokens = generated_tokens[0][len(tokens[0]):]
generated_text = tokenizer.decode(last_tokens)
print("Generation:\n" + generated_text)
```

When run, this code generates:
```python
Prompt:
def print_customer_name
Generation:
(self, customer):
    """Print the name of a customer."""
    if not self.is_valid():
        return
    print("Customer: {}".format(customer))
```

For example usage, you can see our colab notebook as well:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)

## Eval results

TBD

## Acknowledgements

This project was possible because of the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B.

Thanks to everyone who contributed to this project:
- [Aero](https://github.com/AeroScripts)
- [Finetune](https://github.com/finetuneanon)
- [Kurumuz](https://github.com/kurumuz)
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["the Pile"]}
NovelAI/genji-python-6B-split
null
[ "pytorch", "causal-lm", "en", "arxiv:2104.09864", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.09864" ]
[ "en" ]
TAGS #pytorch #causal-lm #en #arxiv-2104.09864 #license-apache-2.0 #region-us
Genji-python 6B =============== For example usage or to easily use the model you can check our colab notebook: Notebook Model Description ----------------- Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on python only code approaching 4GB in size. Split model has the checkpoints splitted, which makes it use less system RAM while loading and makes it faster to load. This model needs more effort to set up as you need to install git-lfs and pull the repo. '\*' each layer consists of one feedforward block and one self attention block The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. Training data ------------- GPT-J 6B was pretrained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it's finetuned on the python code that was taken from the Pile. Training procedure ------------------ Genji-python-6B is trained for 20k steps on around 655 million tokens with learning rate of 2e-06 Intended Use ------------ This model is trained for assistence on writing python code and having fun trying weird stuff with it. ### How to use This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable. For now, you need to use this fork: Fork to install with pip: git-lfs also needs to be installed, on ubuntu: after it's installed, initialize git-lfs: then clone this repo: Now we can load the model. We recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards. How to use: When ran, this code generates: For example usage, you can see our colab notebook as well: Notebook Eval results ------------ TBD Acknowledgements ---------------- This project was possible because of the compute provided by the TPU Research Cloud and EleutherAI for pretraining of the GPT-J 6B. Thanks to everyone who contributed to this project: * Aero * Finetune * Kurumuz
[ "### How to use\n\n\nThis model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.\nFor now, you need to use this fork:\nFork\n\n\nto install with pip:\n\n\ngit-lfs also needs to be installed, on ubuntu:\n\n\nafter it's installed, initialize git-lfs:\n\n\nthen clone this repo:\n\n\nNow we can load the model.\n\n\nWe recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.\n\n\nHow to use:\n\n\nWhen ran, this code generates:\n\n\nFor example usage, you can see our colab notebook as well:\nNotebook\n\n\nEval results\n------------\n\n\nTBD\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud and EleutherAI for pretraining of the GPT-J 6B.\n\n\nThanks to everyone who contributed to this project:\n\n\n* Aero\n* Finetune\n* Kurumuz" ]
[ "TAGS\n#pytorch #causal-lm #en #arxiv-2104.09864 #license-apache-2.0 #region-us \n", "### How to use\n\n\nThis model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.\nFor now, you need to use this fork:\nFork\n\n\nto install with pip:\n\n\ngit-lfs also needs to be installed, on ubuntu:\n\n\nafter it's installed, initialize git-lfs:\n\n\nthen clone this repo:\n\n\nNow we can load the model.\n\n\nWe recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.\n\n\nHow to use:\n\n\nWhen ran, this code generates:\n\n\nFor example usage, you can see our colab notebook as well:\nNotebook\n\n\nEval results\n------------\n\n\nTBD\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud and EleutherAI for pretraining of the GPT-J 6B.\n\n\nThanks to everyone who contributed to this project:\n\n\n* Aero\n* Finetune\n* Kurumuz" ]
[ 36, 242 ]
[ "TAGS\n#pytorch #causal-lm #en #arxiv-2104.09864 #license-apache-2.0 #region-us \n### How to use\n\n\nThis model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.\nFor now, you need to use this fork:\nFork\n\n\nto install with pip:\n\n\ngit-lfs also needs to be installed, on ubuntu:\n\n\nafter it's installed, initialize git-lfs:\n\n\nthen clone this repo:\n\n\nNow we can load the model.\n\n\nWe recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.\n\n\nHow to use:\n\n\nWhen ran, this code generates:\n\n\nFor example usage, you can see our colab notebook as well:\nNotebook\n\n\nEval results\n------------\n\n\nTBD\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud and EleutherAI for pretraining of the GPT-J 6B.\n\n\nThanks to everyone who contributed to this project:\n\n\n* Aero\n* Finetune\n* Kurumuz" ]
text-generation
transformers
# Genji-python 6B

For example usage or to easily use the model you can check our colab notebook:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)

## Model Description

Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4 GB in size.

| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

`*` each layer consists of one feedforward block and one self attention block

The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.

## Training data

GPT-J 6B was pretrained on the [Pile](pile.eleuther.ai), a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it was finetuned on the Python code that was taken from the Pile.

## Training procedure

Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06.

## Intended Use

This model is trained to assist with writing Python code and for having fun trying weird stuff with it.

### How to use

This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.
For now, you need to use this fork:
[Fork](https://github.com/finetuneanon/transformers)

to install with pip:
```bash
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
```

This model takes more than 16 GB of RAM to load. If you want more efficient and faster loading, please check our split model.

We recommend using the model in FP16. That way, it fits on 16GB VRAM cards.

How to use:
```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GPTNeoForCausalLM,
)

model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-python-6B", use_auth_token=True).half().eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")

text = '''def print_customer_name'''

tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(
    tokens.long().cuda(),
    use_cache=True,
    do_sample=True,
    top_k=50,
    temperature=0.3,
    top_p=0.9,
    repetition_penalty=1.125,
    min_length=1,
    max_length=len(tokens[0]) + 400,
    pad_token_id=tokenizer.eos_token_id,
)
last_tokens = generated_tokens[0][len(tokens[0]):]
generated_text = tokenizer.decode(last_tokens)
print("Generation:\n" + generated_text)
```

When run, this code generates:
```python
Prompt:
def print_customer_name
Generation:
(self, customer):
    """Print the name of a customer."""
    if not self.is_valid():
        return
    print("Customer: {}".format(customer))
```

For example usage, you can see our colab notebook as well:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)

## Eval results

TBD

## Acknowledgements

This project was possible because of the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B.

Thanks to everyone who contributed to this project!
- [Aero](https://github.com/AeroScripts)
- [Finetune](https://github.com/finetuneanon)
- [Kurumuz](https://github.com/kurumuz)
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["the Pile"]}
NovelAI/genji-python-6B
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "causal-lm", "en", "arxiv:2104.09864", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.09864" ]
[ "en" ]
TAGS #transformers #pytorch #gpt_neo #text-generation #causal-lm #en #arxiv-2104.09864 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Genji-python 6B =============== For example usage or to easily use the model you can check our colab notebook: Notebook Model Description ----------------- Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on python only code approaching 4GB in size. '\*' each layer consists of one feedforward block and one self attention block The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. Training data ------------- GPT-J 6B was pretrained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it's finetuned on the python code that was taken from the Pile. Training procedure ------------------ Genji-python-6B is trained for 20k steps on around 655 million tokens with learning rate of 2e-06 Intended Use ------------ This model is trained for assistence on writing python code and having fun trying weird stuff with it. ### How to use This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable. For now, you need to use this fork: Fork to install with pip: This model takes more than 16 gigs of RAM to load. If you want more efficient and faster loading, please check our split model. We recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards. How to use: When ran, this code generates: For example usage, you can see our colab notebook as well: Notebook Eval results ------------ TBD Acknowledgements ---------------- This project was possible because of the compute provided by the TPU Research Cloud and EleutherAI for pretraining of the GPT-J 6B. Thanks to everyone who contributed to this project! * Aero * Finetune * Kurumuz
[ "### How to use\n\n\nThis model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.\nFor now, you need to use this fork:\nFork\n\n\nto install with pip:\n\n\nThis model takes more than 16 gigs of RAM to load. If you want more efficient and faster loading, please check our split model.\nWe recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.\n\n\nHow to use:\n\n\nWhen ran, this code generates:\n\n\nFor example usage, you can see our colab notebook as well:\nNotebook\n\n\nEval results\n------------\n\n\nTBD\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud\n\n\nand EleutherAI for pretraining of the GPT-J 6B.\n\n\nThanks to everyone who contributed to this project!\n\n\n* Aero\n* Finetune\n* Kurumuz" ]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #causal-lm #en #arxiv-2104.09864 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nThis model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.\nFor now, you need to use this fork:\nFork\n\n\nto install with pip:\n\n\nThis model takes more than 16 gigs of RAM to load. If you want more efficient and faster loading, please check our split model.\nWe recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.\n\n\nHow to use:\n\n\nWhen ran, this code generates:\n\n\nFor example usage, you can see our colab notebook as well:\nNotebook\n\n\nEval results\n------------\n\n\nTBD\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud\n\n\nand EleutherAI for pretraining of the GPT-J 6B.\n\n\nThanks to everyone who contributed to this project!\n\n\n* Aero\n* Finetune\n* Kurumuz" ]
[ 61, 225 ]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #causal-lm #en #arxiv-2104.09864 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nThis model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.\nFor now, you need to use this fork:\nFork\n\n\nto install with pip:\n\n\nThis model takes more than 16 gigs of RAM to load. If you want more efficient and faster loading, please check our split model.\nWe recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.\n\n\nHow to use:\n\n\nWhen ran, this code generates:\n\n\nFor example usage, you can see our colab notebook as well:\nNotebook\n\n\nEval results\n------------\n\n\nTBD\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud\n\n\nand EleutherAI for pretraining of the GPT-J 6B.\n\n\nThanks to everyone who contributed to this project!\n\n\n* Aero\n* Finetune\n* Kurumuz" ]
text-classification
transformers
# bert-base-multilingual-uncased-sentiment

This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).

This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks.

## Training data

Here is the number of product reviews we used for finetuning the model:

| Language | Number of reviews |
| -------- | ----------------- |
| English  | 150k |
| Dutch    | 80k |
| German   | 137k |
| French   | 140k |
| Italian  | 72k |
| Spanish  | 50k |

## Accuracy

The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:

- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.

| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------- | ------------------- |
| English  | 67% | 95% |
| Dutch    | 57% | 93% |
| German   | 61% | 94% |
| French   | 59% | 94% |
| Italian  | 59% | 95% |
| Spanish  | 58% | 95% |

## Contact

In addition to this model, [NLP Town](https://www.nlp.town) offers custom, monolingual sentiment models for many languages and an improved multilingual model through [RapidAPI](https://rapidapi.com/nlp-town-nlp-town-default/api/multilingual-sentiment-analysis2/).

Feel free to contact us for questions, feedback and/or requests for similar models.
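The card above does not include a usage snippet. The sketch below shows one way to run star-rating prediction through the transformers pipeline; the model id is taken from this entry's metadata (Noxel/sentiments_multilenguaje) and is an assumption, as is the "1 star" … "5 stars" label format typical of this model family.

```python
# Minimal sketch: star-rating prediction with a multilingual sentiment model.
# The model id below comes from this entry's metadata and is an assumption;
# substitute whichever checkpoint you actually want to load.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Noxel/sentiments_multilenguaje",  # assumed repository id
)

reviews = [
    "This product is amazing, I would buy it again!",    # English
    "El producto llegó roto y el soporte no respondió.",  # Spanish
]

for review in reviews:
    result = classifier(review)[0]
    # For this family of models the label is typically "1 star" ... "5 stars".
    print(f"{review!r} -> {result['label']} (score={result['score']:.3f})")
```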
{"language": ["en", "nl", "de", "fr", "it", "es"], "license": "mit"}
Noxel/sentiments_multilenguaje
null
[ "transformers", "bert", "text-classification", "en", "nl", "de", "fr", "it", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en", "nl", "de", "fr", "it", "es" ]
TAGS #transformers #bert #text-classification #en #nl #de #fr #it #es #license-mit #autotrain_compatible #endpoints_compatible #region-us
bert-base-multilingual-uncased-sentiment ======================================== This a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5). This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks. Training data ------------- Here is the number of product reviews we used for finetuning the model: Accuracy -------- The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages: * Accuracy (exact) is the exact match on the number of stars. * Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer. Language: English, Accuracy (exact): 67%, Accuracy (off-by-1): 95% Language: Dutch, Accuracy (exact): 57%, Accuracy (off-by-1): 93% Language: German, Accuracy (exact): 61%, Accuracy (off-by-1): 94% Language: French, Accuracy (exact): 59%, Accuracy (off-by-1): 94% Language: Italian, Accuracy (exact): 59%, Accuracy (off-by-1): 95% Language: Spanish, Accuracy (exact): 58%, Accuracy (off-by-1): 95% Contact ------- In addition to this model, NLP Town offers custom, monolingual sentiment models for many languages and an improved multilingual model through RapidAPI. Feel free to contact us for questions, feedback and/or requests for similar models.
[]
[ "TAGS\n#transformers #bert #text-classification #en #nl #de #fr #it #es #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #bert #text-classification #en #nl #de #fr #it #es #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
# EmbeddingSimilarityEvaluator: evaluating the model on the STS.en-en.txt dataset in epoch 2 after 26000 steps:

| Type | Pearson | Spearman |
| ----------- | ----------- | ----------- |
| Cosine | 0.7650 | 0.8095 |
| Euclidean | 0.8089 | 0.8010 |
| Cosine | 0.8075 | 0.7999 |
| Euclidean | 0.7531 | 0.7680 |
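No usage example accompanies these scores. Below is a minimal sketch of how such an XLM-R based sentence-embedding checkpoint could be queried through plain transformers, assuming the repository id from this entry (NtDNlp/sentence-embedding-vietnamese) and assuming mean pooling over the last hidden state is an acceptable way to derive sentence vectors; the authors may have trained with a different pooling strategy.

```python
# Minimal sketch (assumptions noted above): derive sentence embeddings by
# mean-pooling the encoder's last hidden state, then compare two Vietnamese
# sentences with cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "NtDNlp/sentence-embedding-vietnamese"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def embed(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # ignore padding
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # mean pooling

emb = embed(["Tôi là sinh viên.", "Tôi đang học đại học."])
similarity = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
print(f"cosine similarity: {similarity.item():.4f}")
```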
{}
NtDNlp/sentence-embedding-vietnamese
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #feature-extraction #endpoints_compatible #region-us
#EmbeddingSimilarityEvaluator: Evaluating the model on URL dataset in epoch 2 after 26000 steps: Type: Cosine, Pearson: 0.7650, Spearman: 0.8095 Type: Euclidean, Pearson: 0.8089, Spearman: 0.8010 Type: Cosine, Pearson: 0.8075, Spearman: 0.7999 Type: Euclidean, Pearson: 0.7531, Spearman: 0.7680
[]
[ "TAGS\n#transformers #pytorch #xlm-roberta #feature-extraction #endpoints_compatible #region-us \n" ]
[ 26 ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #feature-extraction #endpoints_compatible #region-us \n" ]
automatic-speech-recognition
transformers
# Quran Speech Recognizer

This application listens to the user's Quran recitation and takes the user to the position in the Quran from which he or she was reciting.
You can also take a look at our [presentation slides](https://docs.google.com/presentation/d/1dbbVYHi3LQRiggH14nN36YV2A-ddUAKg67aX5MWi0ys/edit?usp=sharing).

# Methodology

We used transfer learning to build our application. We fine-tuned the pretrained model available at https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic using the data available at https://www.kaggle.com/c/quran-asr-challenge/data. Our model can be found at https://huggingface.co/Nuwaisir/Quran_speech_recognizer.

# Usage

Run all the cells of run_ui.ipynb. The last cell listens to your recitation for 5 seconds (changeable) from the time you run that cell. It then converts your speech to Arabic text and shows the most probable corresponding parts of the 30th juz (Surah 78 - 114) of the Quran as the output, based on edit distance.

Currently, we search from Surah 78 to Surah 114 because the search algorithm needs some time to search the whole Quran. This range can be changed in the 6th cell of the notebook.
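The card points to a notebook rather than inline code. Below is a minimal sketch of how the underlying checkpoint could be used for plain transcription, assuming Nuwaisir/Quran_speech_recognizer exposes a standard Wav2Vec2ForCTC head and that the audio is 16 kHz mono; the matching/edit-distance step from the notebook is not reproduced here, and the audio path is a placeholder.

```python
# Minimal transcription sketch (assumptions noted above): load the fine-tuned
# wav2vec2 checkpoint and decode a 16 kHz mono recitation clip with CTC.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Nuwaisir/Quran_speech_recognizer"  # repository id from this card
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

# "recitation.wav" is a placeholder path for your own recording.
speech, _ = librosa.load("recitation.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])  # Arabic transcription
```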
{}
Nuwaisir/Quran_speech_recognizer
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #has_space #region-us
# Quran Speech Recognizer This application will listen to the user's Quran recitation, and take the user to the position of the Quran from where the s/he had recited. You can also take a look at our presentation slides. # Methodology We used transfer learning to make our application. We fine-tuned the pretrained model available at URL using the data available at URL Our model can be found at URL # Usage Run all the cells of run_ui.ipynb. The last cell will hear your recitation for 5 seconds (changeable) from the time you run that cell. And then convert your speech to Arabic text and show the most probable corresponding parts of 30th juzz (Surah 78 - 114) of the Quran as the output based on edit distance value. Currently, we are searching from Surah 78 to Surah 114 as the searching algorithm needs some time to search the whole Quran. This range can be changed in the 6th cell of the notebook.
[ "# Quran Speech Recognizer\nThis application will listen to the user's Quran recitation, and take the \nuser to the position of the Quran from where the s/he had recited.\nYou can also take a look at our presentation slides.", "# Methodology\nWe used transfer learning to make our application. We fine-tuned the pretrained\nmodel available at URL\nusing the data available at URL\nOur model can be found at URL", "# Usage\nRun all the cells of run_ui.ipynb. The last cell will hear your\nrecitation for 5 seconds (changeable) from the time you run that cell. And then convert your\nspeech to Arabic text and show the most probable corresponding parts of 30th juzz\n(Surah 78 - 114) of the Quran as the output based on edit distance value.\n\nCurrently, we are searching from Surah 78 to Surah 114 as the searching\nalgorithm needs some time to search the whole Quran. This range can be changed\nin the 6th cell of the notebook." ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #has_space #region-us \n", "# Quran Speech Recognizer\nThis application will listen to the user's Quran recitation, and take the \nuser to the position of the Quran from where the s/he had recited.\nYou can also take a look at our presentation slides.", "# Methodology\nWe used transfer learning to make our application. We fine-tuned the pretrained\nmodel available at URL\nusing the data available at URL\nOur model can be found at URL", "# Usage\nRun all the cells of run_ui.ipynb. The last cell will hear your\nrecitation for 5 seconds (changeable) from the time you run that cell. And then convert your\nspeech to Arabic text and show the most probable corresponding parts of 30th juzz\n(Surah 78 - 114) of the Quran as the output based on edit distance value.\n\nCurrently, we are searching from Surah 78 to Surah 114 as the searching\nalgorithm needs some time to search the whole Quran. This range can be changed\nin the 6th cell of the notebook." ]
[ 34, 48, 39, 115 ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #has_space #region-us \n# Quran Speech Recognizer\nThis application will listen to the user's Quran recitation, and take the \nuser to the position of the Quran from where the s/he had recited.\nYou can also take a look at our presentation slides.# Methodology\nWe used transfer learning to make our application. We fine-tuned the pretrained\nmodel available at URL\nusing the data available at URL\nOur model can be found at URL# Usage\nRun all the cells of run_ui.ipynb. The last cell will hear your\nrecitation for 5 seconds (changeable) from the time you run that cell. And then convert your\nspeech to Arabic text and show the most probable corresponding parts of 30th juzz\n(Surah 78 - 114) of the Quran as the output based on edit distance value.\n\nCurrently, we are searching from Surah 78 to Surah 114 as the searching\nalgorithm needs some time to search the whole Quran. This range can be changed\nin the 6th cell of the notebook." ]
text-generation
transformers
# 707 DialoGPT Model
{"tags": ["conversational"]}
Obscurity/DialoGPT-Medium-707
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 707 DialoGPT Model
[ "# 707 DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 707 DialoGPT Model" ]
[ 39, 7 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# 707 DialoGPT Model" ]
text-generation
transformers
# GPT2-Mongolia

## Model description

GPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.

## How to use

```python
import tensorflow as tf
from transformers import GPT2Config, TFGPT2LMHeadModel, GPT2Tokenizer
from transformers import WEIGHTS_NAME, CONFIG_NAME

tokenizer = GPT2Tokenizer.from_pretrained('Ochiroo/tiny_mn_gpt')
model = TFGPT2LMHeadModel.from_pretrained('Ochiroo/tiny_mn_gpt')

text = "Намайг Эрдэнэ-Очир гэдэг. Би"
input_ids = tokenizer.encode(text, return_tensors='tf')
beam_outputs = model.generate(
    input_ids,
    max_length=25,
    num_beams=5,
    temperature=0.7,
    no_repeat_ngram_size=2,
    num_return_sequences=5
)

print(tokenizer.decode(beam_outputs[0]))
```

## Training data and biases

Trained on 500MB of a Mongolian news dataset (IKON) on an RTX 2060.
{"language": "mn"}
Ochiroo/tiny_mn_gpt
null
[ "transformers", "tf", "gpt2", "text-generation", "mn", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "mn" ]
TAGS #transformers #tf #gpt2 #text-generation #mn #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT2-Mongolia ## Model description GPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. ## How to use ## Training data and biases Trained on 500MB of Mongolian news dataset (IKON) on RTX 2060.
[ "# GPT2-Mongolia", "## Model description\n\nGPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.", "## How to use", "## Training data and biases\n\nTrained on 500MB of Mongolian news dataset (IKON) on RTX 2060." ]
[ "TAGS\n#transformers #tf #gpt2 #text-generation #mn #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GPT2-Mongolia", "## Model description\n\nGPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.", "## How to use", "## Training data and biases\n\nTrained on 500MB of Mongolian news dataset (IKON) on RTX 2060." ]
[ 36, 6, 93, 5, 26 ]
[ "TAGS\n#transformers #tf #gpt2 #text-generation #mn #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# GPT2-Mongolia## Model description\n\nGPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.## How to use## Training data and biases\n\nTrained on 500MB of Mongolian news dataset (IKON) on RTX 2060." ]
translation
transformers
# HEL-ACH-EN

## Model description

MT model translating Acholi to English, initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en) on HuggingFace.

## Intended uses & limitations

Machine translation experiments. Do not use for sensitive tasks.

#### How to use

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Ogayo/Hel-ach-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Ogayo/Hel-ach-en")
```

#### Limitations and bias

Trained on Jehovah's Witnesses data, so it contains their and Christian views.

## Training data

Trained on OPUS JW300 data.
Initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en?text=Bed+gi+nyasi+mar+chieng%27+nyuol+mopong%27+gi+mor%21#model_card)

## Training procedure

Removed duplicates and rows with no alphabetic characters. Used a GPU.

## Eval results

| testset | BLEU |
| --- | --- |
| JW300.luo.en | 46.1 |
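The loading snippet above stops before generation; a small sketch of the next step follows. The Acholi input sentence is an arbitrary illustrative example, and the generation settings are ordinary defaults rather than values recommended by the authors.

```python
# Sketch of running a translation with the checkpoint loaded above.
# The input sentence is an illustrative example, not taken from the card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Ogayo/Hel-ach-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Ogayo/Hel-ach-en")

acholi_text = "Apwoyo matek."  # illustrative Acholi input
batch = tokenizer(acholi_text, return_tensors="pt")
generated = model.generate(**batch, max_length=64, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```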
{"language": ["ach", "en"], "license": "cc-by-4.0", "tags": ["translation"], "datasets": ["JW300"], "metrics": ["bleu"]}
Ogayo/Hel-ach-en
null
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "ach", "en", "dataset:JW300", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ach", "en" ]
TAGS #transformers #pytorch #marian #text2text-generation #translation #ach #en #dataset-JW300 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
HEL-ACH-EN ========== Model description ----------------- MT model translating Acholi to English initialized with weights from opus-mt-luo-en on HuggingFace. Intended uses & limitations --------------------------- Machine Translation experiments. Do not use for sensitive tasks. #### How to use #### Limitations and bias Trained on Jehovah Witnesses data so contains theirs and Christian views. Training data ------------- Trained on OPUS JW300 data. Initialized with weights from opus-mt-luo-en Training procedure ------------------ Remove duplicates and rows with no alphabetic characters. Used GPU Eval results ------------
[ "#### How to use", "#### Limitations and bias\n\n\nTrained on Jehovah Witnesses data so contains theirs and Christian views.\n\n\nTraining data\n-------------\n\n\nTrained on OPUS JW300 data.\nInitialized with weights from opus-mt-luo-en\n\n\nTraining procedure\n------------------\n\n\nRemove duplicates and rows with no alphabetic characters. Used GPU\n\n\nEval results\n------------" ]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #translation #ach #en #dataset-JW300 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "#### How to use", "#### Limitations and bias\n\n\nTrained on Jehovah Witnesses data so contains theirs and Christian views.\n\n\nTraining data\n-------------\n\n\nTrained on OPUS JW300 data.\nInitialized with weights from opus-mt-luo-en\n\n\nTraining procedure\n------------------\n\n\nRemove duplicates and rows with no alphabetic characters. Used GPU\n\n\nEval results\n------------" ]
[ 55, 7, 107 ]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #translation #ach #en #dataset-JW300 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n#### How to use#### Limitations and bias\n\n\nTrained on Jehovah Witnesses data so contains theirs and Christian views.\n\n\nTraining data\n-------------\n\n\nTrained on OPUS JW300 data.\nInitialized with weights from opus-mt-luo-en\n\n\nTraining procedure\n------------------\n\n\nRemove duplicates and rows with no alphabetic characters. Used GPU\n\n\nEval results\n------------" ]
text-generation
transformers
# Rick and Morty DialoGPT Model
{"tags": ["conversational"]}
Oji/DialoGPT-small-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick and Morty DialoGPT Model
[ "# Rick and Morty DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick and Morty DialoGPT Model" ]
[ 39, 9 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick and Morty DialoGPT Model" ]
null
null
AutoTokenizer
{}
Omar2027/AutoTokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
AutoTokenizer
[]
[ "TAGS\n#region-us \n" ]
[ 5 ]
[ "TAGS\n#region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.1259 - Accuracy: 0.9332 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 0.5952 | 0.7355 | | 0.7663 | 2.0 | 636 | 0.3130 | 0.8742 | | 0.7663 | 3.0 | 954 | 0.2024 | 0.9206 | | 0.3043 | 4.0 | 1272 | 0.1590 | 0.9235 | | 0.181 | 5.0 | 1590 | 0.1378 | 0.9303 | | 0.181 | 6.0 | 1908 | 0.1287 | 0.9329 | | 0.1468 | 7.0 | 2226 | 0.1259 | 0.9332 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
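The card lists only training details. A minimal usage sketch for intent classification is given below, assuming the checkpoint published under this entry's repository id (Omar95farag/distilbert-base-uncased-distilled-clinc) keeps the standard text-classification head it was evaluated with; the example utterance is illustrative.

```python
# Minimal sketch: intent classification on a CLINC-style utterance with the
# distilled student model. The repository id is taken from this entry's
# metadata; label names come from the clinc_oos label set.
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="Omar95farag/distilbert-base-uncased-distilled-clinc",
)

utterance = "how would you say vacation in italian"
prediction = intent_classifier(utterance)[0]
print(prediction["label"], round(prediction["score"], 3))
```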
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9332258064516129, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "small", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8587272727272727, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.8619245385984416, "name": "Precision Macro", "verified": true}, {"type": "precision", "value": 0.8587272727272727, "name": "Precision Micro", "verified": true}, {"type": "precision", "value": 0.8797945801452213, "name": "Precision Weighted", "verified": true}, {"type": "recall", "value": 0.9359690949227375, "name": "Recall Macro", "verified": true}, {"type": "recall", "value": 0.8587272727272727, "name": "Recall Micro", "verified": true}, {"type": "recall", "value": 0.8587272727272727, "name": "Recall Weighted", "verified": true}, {"type": "f1", "value": 0.8922503214655346, "name": "F1 Macro", "verified": true}, {"type": "f1", "value": 0.8587272727272727, "name": "F1 Micro", "verified": true}, {"type": "f1", "value": 0.8506829426037475, "name": "F1 Weighted", "verified": true}, {"type": "loss", "value": 0.9798759818077087, "name": "loss", "verified": true}]}]}]}
Omar95farag/distilbert-base-uncased-distilled-clinc
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-distilled-clinc ======================================= This model is a fine-tuned version of distilbert-base-uncased on the clinc\_oos dataset. It achieves the following results on the evaluation set: * Loss: 0.1259 * Accuracy: 0.9332 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 48 * eval\_batch\_size: 48 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 7 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ 58, 101, 5, 44 ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text2text-generation
transformers
# keytotext

[![pypi Version](https://img.shields.io/pypi/v/keytotext.svg?logo=pypi&logoColor=white)](https://pypi.org/project/keytotext/)
[![Downloads](https://static.pepy.tech/personalized-badge/keytotext?period=total&units=none&left_color=grey&right_color=orange&left_text=Pip%20Downloads)](https://pepy.tech/project/keytotext)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/notebooks/K2T.ipynb)
[![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
[![API Call](https://img.shields.io/badge/-FastAPI-red?logo=fastapi&labelColor=white)](https://github.com/gagan3012/keytotext#api)
[![Docker Call](https://img.shields.io/badge/-Docker%20Image-blue?logo=docker&labelColor=white)](https://hub.docker.com/r/gagan30/keytotext)
[![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97-Models%20on%20Hub-yellow)](https://huggingface.co/models?filter=keytotext)
[![Documentation Status](https://readthedocs.org/projects/keytotext/badge/?version=latest)](https://keytotext.readthedocs.io/en/latest/?badge=latest)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

![keytotext](https://socialify.git.ci/gagan3012/keytotext/image?description=1&forks=1&language=1&owner=1&stargazers=1&theme=Light)

The idea is to build a model which takes keywords as input and generates sentences as output (see the sketch after this list). Potential use cases include:

- Marketing
- Search engine optimization
- Topic generation etc.
- Fine-tuning of topic modeling models
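As a sketch of the keywords-to-sentence idea described above, the snippet below loads the checkpoint listed for this entry (OnsElleuch/logisgenerator) through plain transformers and feeds it a keyword string. Treat it as an illustration only: the pipe-separated input format is an assumed convention borrowed from the keytotext project, not confirmed for this particular finetune.

```python
# Sketch only: generate a sentence from keywords with a T5-based keytotext
# checkpoint. The repository id comes from this entry's metadata, and the
# pipe-separated input format is an assumed convention, not confirmed here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "OnsElleuch/logisgenerator"  # assumed usable as a plain T5 seq2seq model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

keywords = ["delivery", "warehouse", "delay"]
inputs = tokenizer(" | ".join(keywords), return_tensors="pt")
outputs = model.generate(**inputs, max_length=48, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```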
{"language": "en", "license": "MIT", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "datasets": ["WebNLG", "Dart"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"}
OnsElleuch/logisgenerator
null
[ "transformers", "pytorch", "t5", "text2text-generation", "keytotext", "k2t", "Keywords to Sentences", "en", "dataset:WebNLG", "dataset:Dart", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #dataset-WebNLG #dataset-Dart #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#keytotext ![pypi Version](URL ![Downloads](URL ![Open In Colab](URL ![Streamlit App](URL ![API Call](URL ![Docker Call](URL ![HuggingFace](URL ![Documentation Status](URL ![Code style: black](URL !keytotext Idea is to build a model which will take keywords as inputs and generate sentences as outputs. Potential use case can include: - Marketing - Search Engine Optimization - Topic generation etc. - Fine tuning of topic modeling models
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #dataset-WebNLG #dataset-Dart #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 64 ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #dataset-WebNLG #dataset-Dart #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Optimal/Harry
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#harry potter dialogpt model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Finetuned DialoGPT model for Eng-Spa translation

DialoGPT-small model was used and finetuned on English to Spanish translations, extracted from http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip

Some examples of translations:

| Role | Response |
| :---: |------------------------|
| User | please, sing me a song |
| Bot | Por favor, canta una canción. |
| User | I really want to go to China |
| Bot | Realmente quiero ir a China. |
| User | Can you do me a favor? |
| Bot | ¿Me puedes hacer un favor? |
| User | I don't know what you are talking about |
| Bot | No sé de qué estás hablando. |
| User | I don't want to go to China |
| Bot | No quiero ir a China. |

# Using the model

Example code for trying out the model:

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('OscarNav/dialoGPT_translate')

# Let's translate 5 sentences
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        new_user_input_ids, max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        top_p=0.92, top_k=50
    )

    # pretty print last output tokens from bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, new_user_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
{}
OscarNav/dialoGPT_translate
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Finetuned DialoGPT model for Eng-Spa translation ================================================ DialoGPT-small model was used and finetuned on English to Spanish translations, extracted from URL some examples of translations Using the model =============== example code for trying out the model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 36 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
### Introduction:
This is a text-classification model. You can use it to determine the emotion behind a sentence.

### Label Explanation:
LABEL_0: Positive (positive emotion)

LABEL_1: Negative (negative emotion)

### Usage:
```python
>>> from transformers import pipeline
>>> ec = pipeline('text-classification', model='Osiris/emotion_classifier')
>>> ec("Hello, I'm a good model.")
```

### Accuracy:
We reach 83.82% accuracy on the validation dataset and 84.42% on the test dataset.
{}
Osiris/emotion_classifier
null
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
### Introduction: This model belongs to text-classification. You can determine the emotion behind a sentence. ### Label Explaination: LABEL_0: Positive (have positive emotion) LABEL_1: Negative (have negative emotion) ### Usage: ### Accuracy: We reach 83.82% for validation dataset, and 84.42% for test dataset.
[ "### Introduction:\nThis model belongs to text-classification. You can determine the emotion behind a sentence.", "### Label Explaination:\nLABEL_0: Positive (have positive emotion)\n\nLABEL_1: Negative (have negative emotion)", "### Usage:", "### Accuracy:\nWe reach 83.82% for validation dataset, and 84.42% for test dataset." ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Introduction:\nThis model belongs to text-classification. You can determine the emotion behind a sentence.", "### Label Explaination:\nLABEL_0: Positive (have positive emotion)\n\nLABEL_1: Negative (have negative emotion)", "### Usage:", "### Accuracy:\nWe reach 83.82% for validation dataset, and 84.42% for test dataset." ]
[ 28, 22, 27, 5, 26 ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n### Introduction:\nThis model belongs to text-classification. You can determine the emotion behind a sentence.### Label Explaination:\nLABEL_0: Positive (have positive emotion)\n\nLABEL_1: Negative (have negative emotion)### Usage:### Accuracy:\nWe reach 83.82% for validation dataset, and 84.42% for test dataset." ]
text-classification
transformers
### Introduction:
This is a text-classification model. You can use it to check whether a sentence contains any emotion.

### Label Explanation:
LABEL_1: Non Neutral (has some emotion)

LABEL_0: Neutral (has no emotion)

### Usage:
```python
>>> from transformers import pipeline
>>> nnc = pipeline('text-classification', model='Osiris/neutral_non_neutral_classifier')
>>> nnc("Hello, I'm a good model.")
```

### Accuracy:
We reach 93.98% accuracy on the validation dataset and 91.92% on the test dataset.
{}
Osiris/neutral_non_neutral_classifier
null
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
### Introduction: This model belongs to text-classification. You can check whether the sentence consists any emotion. ### Label Explaination: LABEL_1: Non Neutral (have some emotions) LABEL_0: Neutral (have no emotion) ### Usage: ### Accuracy: We reach 93.98% for validation dataset, and 91.92% for test dataset.
[ "### Introduction:\nThis model belongs to text-classification. You can check whether the sentence consists any emotion.", "### Label Explaination:\nLABEL_1: Non Neutral (have some emotions)\n\nLABEL_0: Neutral (have no emotion)", "### Usage:", "### Accuracy:\nWe reach 93.98% for validation dataset, and 91.92% for test dataset." ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Introduction:\nThis model belongs to text-classification. You can check whether the sentence consists any emotion.", "### Label Explaination:\nLABEL_1: Non Neutral (have some emotions)\n\nLABEL_0: Neutral (have no emotion)", "### Usage:", "### Accuracy:\nWe reach 93.98% for validation dataset, and 91.92% for test dataset." ]
[ 28, 23, 28, 5, 26 ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n### Introduction:\nThis model belongs to text-classification. You can check whether the sentence consists any emotion.### Label Explaination:\nLABEL_1: Non Neutral (have some emotions)\n\nLABEL_0: Neutral (have no emotion)### Usage:### Accuracy:\nWe reach 93.98% for validation dataset, and 91.92% for test dataset." ]
null
null
git lfs install
git clone https://huggingface.co/r3dhummingbird/DialoGPT-medium-joshua
{}
OsmyReal/Ayuda
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
git lfs install git clone URL
[]
[ "TAGS\n#region-us \n" ]
[ 5 ]
[ "TAGS\n#region-us \n" ]
automatic-speech-recognition
transformers
# Distil-wav2vec2

This model is a distilled version of the wav2vec2 model (https://arxiv.org/pdf/2006.11477.pdf). This model is 45% smaller and twice as fast as the original wav2vec2 base model.

# Evaluation results

This model achieves the following results (speed is measured for a batch size of 64):

| Model | Size | WER Librispeech-test-clean | WER Librispeech-test-other | Speed on CPU | Speed on GPU |
|----------|-------------|-------------|-----------|------|----|
| Distil-wav2vec2 | 197.9 MB | 0.0983 | 0.2266 | 0.4006s | 0.0046s |
| wav2vec2-base | 360 MB | 0.0389 | 0.1047 | 0.4919s | 0.0082s |

# Usage

A usage notebook (it executes seamlessly on Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2.
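The card points to a notebook rather than inline code. The sketch below shows one plausible way to transcribe a clip with the transformers ASR pipeline, assuming the checkpoint loads as a standard CTC model and the audio is 16 kHz mono; the file path is a placeholder.

```python
# Minimal sketch: English transcription with the distilled wav2vec2 checkpoint.
# "sample.wav" is a placeholder path; the audio should be 16 kHz mono.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="OthmaneJ/distil-wav2vec2",  # repository id from this card
)

print(asr("sample.wav")["text"])
```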
{"language": "en", "license": "apache-2.0", "tags": ["speech", "audio", "automatic-speech-recognition"], "datasets": ["librispeech_asr"]}
OthmaneJ/distil-wav2vec2
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2006.11477" ]
[ "en" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #speech #audio #en #dataset-librispeech_asr #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #has_space #region-us
Distil-wav2vec2 =============== This model is a distilled version of the wav2vec2 model (URL This model is 45% times smaller and twice as fast as the original wav2vec2 base model. Evaluation results ================== This model achieves the following results (speed is mesured for a batch size of 64): Usage ===== notebook (executes seamlessly on google colab) at URL
[]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #speech #audio #en #dataset-librispeech_asr #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n" ]
[ 70 ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #speech #audio #en #dataset-librispeech_asr #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
# Tony Stark DialoGPT Model
{"tags": ["conversational"]}
P4RZ1V4L/DialoGPT-Medium-Tony
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
0 Tony Stark DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Rick and Morty DialoGPT medium model
{"tags": ["conversational"]}
PVAbhiram2003/DialoGPT-medium-RickandMorty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#Rick and Morty DialoGPT medium model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2_squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the **squadV1** dataset. - "eval_exact_match": 82.69631031220435 - "eval_f1": 90.10806626207174 - "eval_samples": 10808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
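No usage snippet accompanies the evaluation numbers above. A minimal extractive question-answering sketch is shown below, assuming the fine-tuned checkpoint is available under this entry's repository id (Palak/albert-base-v2_squad); the context paragraph is an illustrative example. The same pattern applies to the other SQuAD fine-tunes listed further down.

```python
# Minimal sketch: extractive question answering with the fine-tuned ALBERT
# checkpoint. The repository id is taken from this entry's metadata.
from transformers import pipeline

qa = pipeline("question-answering", model="Palak/albert-base-v2_squad")

context = (
    "ALBERT is a light-weight variant of BERT that shares parameters across "
    "layers and factorizes the embedding matrix to reduce model size."
)
answer = qa(question="How does ALBERT reduce model size?", context=context)
print(answer["answer"], round(answer["score"], 3))
```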
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "albert-base-v2_squad", "results": []}]}
Palak/albert-base-v2_squad
null
[ "transformers", "pytorch", "albert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# albert-base-v2_squad This model is a fine-tuned version of albert-base-v2 on the squadV1 dataset. - "eval_exact_match": 82.69631031220435 - "eval_f1": 90.10806626207174 - "eval_samples": 10808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# albert-base-v2_squad\n\nThis model is a fine-tuned version of albert-base-v2 on the squadV1 dataset.\n- \"eval_exact_match\": 82.69631031220435\n- \"eval_f1\": 90.10806626207174\n- \"eval_samples\": 10808", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# albert-base-v2_squad\n\nThis model is a fine-tuned version of albert-base-v2 on the squadV1 dataset.\n- \"eval_exact_match\": 82.69631031220435\n- \"eval_f1\": 90.10806626207174\n- \"eval_samples\": 10808", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 42, 82, 7, 9, 9, 4, 95, 5, 40 ]
[ "TAGS\n#transformers #pytorch #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n# albert-base-v2_squad\n\nThis model is a fine-tuned version of albert-base-v2 on the squadV1 dataset.\n- \"eval_exact_match\": 82.69631031220435\n- \"eval_f1\": 90.10806626207174\n- \"eval_samples\": 10808## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2_squad This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the **squadV1** dataset. - "eval_exact_match": 84.80605487228004 - "eval_f1": 91.80638438705844 - "eval_samples": 10808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "albert-large-v2_squad", "results": []}]}
Palak/albert-large-v2_squad
null
[ "transformers", "pytorch", "albert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# albert-large-v2_squad This model is a fine-tuned version of albert-large-v2 on the squadV1 dataset. - "eval_exact_match": 84.80605487228004 - "eval_f1": 91.80638438705844 - "eval_samples": 10808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# albert-large-v2_squad\n\nThis model is a fine-tuned version of albert-large-v2 on the squadV1 dataset.\n\n- \"eval_exact_match\": 84.80605487228004\n- \"eval_f1\": 91.80638438705844\n- \"eval_samples\": 10808", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# albert-large-v2_squad\n\nThis model is a fine-tuned version of albert-large-v2 on the squadV1 dataset.\n\n- \"eval_exact_match\": 84.80605487228004\n- \"eval_f1\": 91.80638438705844\n- \"eval_samples\": 10808", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 42, 82, 7, 9, 9, 4, 95, 5, 40 ]
[ "TAGS\n#transformers #pytorch #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n# albert-large-v2_squad\n\nThis model is a fine-tuned version of albert-large-v2 on the squadV1 dataset.\n\n- \"eval_exact_match\": 84.80605487228004\n- \"eval_f1\": 91.80638438705844\n- \"eval_samples\": 10808## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base_squad This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the **squadV1** dataset. - "eval_exact_match": 80.97445600756859 - "eval_f1": 88.0153886332912 - "eval_samples": 10790 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilroberta-base_squad", "results": []}]}
Palak/distilroberta-base_squad
null
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# distilroberta-base_squad This model is a fine-tuned version of distilroberta-base on the squadV1 dataset. - "eval_exact_match": 80.97445600756859 - "eval_f1": 88.0153886332912 - "eval_samples": 10790 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# distilroberta-base_squad\n\nThis model is a fine-tuned version of distilroberta-base on the squadV1 dataset.\n\n- \"eval_exact_match\": 80.97445600756859\n- \"eval_f1\": 88.0153886332912\n- \"eval_samples\": 10790", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #roberta #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# distilroberta-base_squad\n\nThis model is a fine-tuned version of distilroberta-base on the squadV1 dataset.\n\n- \"eval_exact_match\": 80.97445600756859\n- \"eval_f1\": 88.0153886332912\n- \"eval_samples\": 10790", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 42, 81, 7, 9, 9, 4, 95, 5, 40 ]
[ "TAGS\n#transformers #pytorch #roberta #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n# distilroberta-base_squad\n\nThis model is a fine-tuned version of distilroberta-base on the squadV1 dataset.\n\n- \"eval_exact_match\": 80.97445600756859\n- \"eval_f1\": 88.0153886332912\n- \"eval_samples\": 10790## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # google_electra-base-discriminator_squad This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the **squadV1** dataset. - "eval_exact_match": 85.33585619678335 - "eval_f1": 91.97363450387108 - "eval_samples": 10784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "google_electra-base-discriminator_squad", "results": []}]}
Palak/google_electra-base-discriminator_squad
null
[ "transformers", "pytorch", "electra", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #electra #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# google_electra-base-discriminator_squad This model is a fine-tuned version of google/electra-base-discriminator on the squadV1 dataset. - "eval_exact_match": 85.33585619678335 - "eval_f1": 91.97363450387108 - "eval_samples": 10784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# google_electra-base-discriminator_squad\n\nThis model is a fine-tuned version of google/electra-base-discriminator on the squadV1 dataset.\n- \"eval_exact_match\": 85.33585619678335\n- \"eval_f1\": 91.97363450387108\n- \"eval_samples\": 10784", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #electra #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# google_electra-base-discriminator_squad\n\nThis model is a fine-tuned version of google/electra-base-discriminator on the squadV1 dataset.\n- \"eval_exact_match\": 85.33585619678335\n- \"eval_f1\": 91.97363450387108\n- \"eval_samples\": 10784", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 43, 90, 7, 9, 9, 4, 95, 5, 40 ]
[ "TAGS\n#transformers #pytorch #electra #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n# google_electra-base-discriminator_squad\n\nThis model is a fine-tuned version of google/electra-base-discriminator on the squadV1 dataset.\n- \"eval_exact_match\": 85.33585619678335\n- \"eval_f1\": 91.97363450387108\n- \"eval_samples\": 10784## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # google_electra-small-discriminator_squad This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the **squadV1** dataset. - "eval_exact_match": 76.95364238410596 - "eval_f1": 84.98869246841396 - "eval_samples": 10784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "google_electra-small-discriminator_squad", "results": []}]}
Palak/google_electra-small-discriminator_squad
null
[ "transformers", "pytorch", "electra", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #electra #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# google_electra-small-discriminator_squad This model is a fine-tuned version of google/electra-small-discriminator on the squadV1 dataset. - "eval_exact_match": 76.95364238410596 - "eval_f1": 84.98869246841396 - "eval_samples": 10784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# google_electra-small-discriminator_squad\n\nThis model is a fine-tuned version of google/electra-small-discriminator on the squadV1 dataset.\n\n- \"eval_exact_match\": 76.95364238410596\n- \"eval_f1\": 84.98869246841396\n- \"eval_samples\": 10784", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #electra #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# google_electra-small-discriminator_squad\n\nThis model is a fine-tuned version of google/electra-small-discriminator on the squadV1 dataset.\n\n- \"eval_exact_match\": 76.95364238410596\n- \"eval_f1\": 84.98869246841396\n- \"eval_samples\": 10784", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 43, 90, 7, 9, 9, 4, 95, 5, 40 ]
[ "TAGS\n#transformers #pytorch #electra #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n# google_electra-small-discriminator_squad\n\nThis model is a fine-tuned version of google/electra-small-discriminator on the squadV1 dataset.\n\n- \"eval_exact_match\": 76.95364238410596\n- \"eval_f1\": 84.98869246841396\n- \"eval_samples\": 10784## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # microsoft_deberta-base_squad This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the **squadV1** dataset. - "eval_exact_match": 86.30085146641439 - "eval_f1": 92.68502275661561 - "eval_samples": 10788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "microsoft_deberta-base_squad", "results": []}]}
Palak/microsoft_deberta-base_squad
null
[ "transformers", "pytorch", "deberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #deberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us
# microsoft_deberta-base_squad This model is a fine-tuned version of microsoft/deberta-base on the squadV1 dataset. - "eval_exact_match": 86.30085146641439 - "eval_f1": 92.68502275661561 - "eval_samples": 10788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# microsoft_deberta-base_squad\n\nThis model is a fine-tuned version of microsoft/deberta-base on the squadV1 dataset.\n- \"eval_exact_match\": 86.30085146641439\n- \"eval_f1\": 92.68502275661561\n- \"eval_samples\": 10788", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #deberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us \n", "# microsoft_deberta-base_squad\n\nThis model is a fine-tuned version of microsoft/deberta-base on the squadV1 dataset.\n- \"eval_exact_match\": 86.30085146641439\n- \"eval_f1\": 92.68502275661561\n- \"eval_samples\": 10788", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 40, 82, 7, 9, 9, 4, 95, 5, 40 ]
[ "TAGS\n#transformers #pytorch #deberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us \n# microsoft_deberta-base_squad\n\nThis model is a fine-tuned version of microsoft/deberta-base on the squadV1 dataset.\n- \"eval_exact_match\": 86.30085146641439\n- \"eval_f1\": 92.68502275661561\n- \"eval_samples\": 10788## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # microsoft-deberta-large This model is a fine-tuned version of [microsoft_deberta-large](https://huggingface.co/microsoft/deberta-large) on the **squadV1** dataset. - "eval_exact_match": 87.89025543992432 - "eval_f1": 93.8755152147345 - "eval_samples": 10788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
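The `eval_exact_match` / `eval_f1` numbers reported in these cards follow the official SQuAD v1.1 metric. As a hedged illustration (not the exact evaluation script used for this card), the metric can be reproduced with the `evaluate` library's `squad` metric:

```python
import evaluate

# The SQuAD metric compares a predicted answer string against the reference
# answers for each example id and returns exact-match and token-level F1.
squad_metric = evaluate.load("squad")

predictions = [{"id": "example-0", "prediction_text": "Denver Broncos"}]
references = [
    {"id": "example-0", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}
]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```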
{"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "microsoft-deberta-large", "results": []}]}
Palak/microsoft_deberta-large_squad
null
[ "transformers", "pytorch", "deberta", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #deberta #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us
# microsoft-deberta-large This model is a fine-tuned version of microsoft_deberta-large on the squadV1 dataset. - "eval_exact_match": 87.89025543992432 - "eval_f1": 93.8755152147345 - "eval_samples": 10788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# microsoft-deberta-large\n\nThis model is a fine-tuned version of microsoft_deberta-large on the squadV1 dataset.\n\n- \"eval_exact_match\": 87.89025543992432\n- \"eval_f1\": 93.8755152147345\n- \"eval_samples\": 10788", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #deberta #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us \n", "# microsoft-deberta-large\n\nThis model is a fine-tuned version of microsoft_deberta-large on the squadV1 dataset.\n\n- \"eval_exact_match\": 87.89025543992432\n- \"eval_f1\": 93.8755152147345\n- \"eval_samples\": 10788", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 36, 80, 7, 9, 9, 4, 93, 40 ]
[ "TAGS\n#transformers #pytorch #deberta #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us \n# microsoft-deberta-large\n\nThis model is a fine-tuned version of microsoft_deberta-large on the squadV1 dataset.\n\n- \"eval_exact_match\": 87.89025543992432\n- \"eval_f1\": 93.8755152147345\n- \"eval_samples\": 10788## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base_squad This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset. - "eval_exact_match": 82.69631031220435 - "eval_f1": 89.4562841806503 - "eval_samples": 10918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "xlm-roberta-base_squad", "results": []}]}
Palak/xlm-roberta-base_squad
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us
# xlm-roberta-base_squad This model is a fine-tuned version of xlm-roberta-base on the squad dataset. - "eval_exact_match": 82.69631031220435 - "eval_f1": 89.4562841806503 - "eval_samples": 10918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# xlm-roberta-base_squad\n\nThis model is a fine-tuned version of xlm-roberta-base on the squad dataset.\n- \"eval_exact_match\": 82.69631031220435\n- \"eval_f1\": 89.4562841806503\n- \"eval_samples\": 10918", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us \n", "# xlm-roberta-base_squad\n\nThis model is a fine-tuned version of xlm-roberta-base on the squad dataset.\n- \"eval_exact_match\": 82.69631031220435\n- \"eval_f1\": 89.4562841806503\n- \"eval_samples\": 10918", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 41, 79, 7, 9, 9, 4, 95, 5, 40 ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us \n# xlm-roberta-base_squad\n\nThis model is a fine-tuned version of xlm-roberta-base on the squad dataset.\n- \"eval_exact_match\": 82.69631031220435\n- \"eval_f1\": 89.4562841806503\n- \"eval_samples\": 10918## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad dataset. - "eval_exact_match": 85.96026490066225 - "eval_f1": 92.25000664341768 - "eval_samples": 10918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.67 ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "xlm-roberta-base_squad", "results": []}]}
Palak/xlm-roberta-large_squad
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us
# eval This model is a fine-tuned version of xlm-roberta-large on the squad dataset. - eval_exact_match": 85.96026490066225 - "eval_f1": 92.25000664341768 - "eval_samples": 10918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.67 ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# eval\n\nThis model is a fine-tuned version of xlm-roberta-large on the squad dataset.\n\n- eval_exact_match\": 85.96026490066225\n- \"eval_f1\": 92.25000664341768\n- \"eval_samples\": 10918", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 0.67", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us \n", "# eval\n\nThis model is a fine-tuned version of xlm-roberta-large on the squad dataset.\n\n- eval_exact_match\": 85.96026490066225\n- \"eval_f1\": 92.25000664341768\n- \"eval_samples\": 10918", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 0.67", "### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 37, 70, 7, 9, 9, 4, 95, 40 ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us \n# eval\n\nThis model is a fine-tuned version of xlm-roberta-large on the squad dataset.\n\n- eval_exact_match\": 85.96026490066225\n- \"eval_f1\": 92.25000664341768\n- \"eval_samples\": 10918## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 0.67### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.9.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
text-generation
transformers
# Harry Potter AI bot
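No usage example is given; the sketch below follows the standard DialoGPT chat pattern from the `transformers` documentation and assumes the repository id `Paradocx/Dialogpt-mid-hpai` attached to this record.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Paradocx/Dialogpt-mid-hpai")
model = AutoModelForCausalLM.from_pretrained("Paradocx/Dialogpt-mid-hpai")

# Encode one user turn, append the end-of-sequence token, and generate a reply.
user_input = "Hello, who are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (everything after the user turn).
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```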
{"tags": ["conversational"]}
Paradocx/Dialogpt-mid-hpai
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter AI bot
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
sentence-similarity
sentence-transformers
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 365 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 146, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
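Since this checkpoint targets sentence similarity (it was trained with `CosineSimilarityLoss` on KLUE-STS), a short hedged sketch of scoring a sentence pair is added below; it assumes the repository id `ParkMyungkyu/KLUE-STS-roberta-base` attached to this record and a recent `sentence-transformers` release providing `util.cos_sim`.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ParkMyungkyu/KLUE-STS-roberta-base")

# Encode a pair of sentences and score them with cosine similarity.
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
print("cosine similarity:", util.cos_sim(embeddings[0], embeddings[1]).item())
```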
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
ParkMyungkyu/KLUE-STS-roberta-base
null
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
# {MODEL_NAME} This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Usage (HuggingFace Transformers) Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 365 with parameters: Loss: 'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 365 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n", "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 365 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ 31, 41, 30, 58, 26, 69, 5, 5 ]
[ "TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 365 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors" ]
text-classification
transformers
A fine-tuned model based on 'gumgo91/IUPAC_BERT' for blood-brain barrier permeability prediction from an IUPAC string. BiLSTM models, as well as these two models, are available at 'https://github.com/mephisto121/BBBNLP' if you want to check them all and review the code. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jGYf3sq93yO4EbgVaEl3nlClrVatVaXS#scrollTo=AMEdQItmilAw)
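A hedged usage sketch (not taken from the original repository): load the checkpoint through the `text-classification` pipeline under the repository id `Parsa/BBB_prediction_classification_IUPAC` attached to this record and pass an IUPAC name as input. The meaning of the returned labels is not documented in this card.

```python
from transformers import pipeline

# Blood-brain barrier permeability classification over an IUPAC name.
clf = pipeline("text-classification", model="Parsa/BBB_prediction_classification_IUPAC")

# Caffeine, given as an example IUPAC name; the label-to-class mapping
# (e.g. LABEL_0 / LABEL_1) is an assumption to verify against the repository.
print(clf("1,3,7-trimethylpurine-2,6-dione"))
```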
{}
Parsa/BBB_prediction_classification_IUPAC
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
A fine-tuned model based on'gumgo91/IUPAC_BERT'for Blood brain barrier permeability prediction based on IUPAC string. There are also BiLSTM models available as well as these two models in 'URL if you want to check them all and check the codes too. ![Open In Colab](URL
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 28 ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
A fine-tuned model based on 'DeepChem/ChemBERTa-77M-MLM' for blood-brain barrier permeability prediction from a SMILES string. BiLSTM models, as well as these two models, are available at 'https://github.com/mephisto121/BBBNLP' if you want to check them all and review the code. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jGYf3sq93yO4EbgVaEl3nlClrVatVaXS#scrollTo=AMEdQItmilAw)
{}
Parsa/BBB_prediction_classification_SMILES
null
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
A fine-tuned model based on'DeepChem/ChemBERTa-77M-MLM'for Blood brain barrier permeability prediction based on SMILES string. There are also BiLSTM models available as well as these two models in 'URL if you want to check them all and check the codes too. ![Open In Colab](URL
[]
[ "TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 32 ]
[ "TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model = MT5ForConditionalGeneration.from_pretrained("Parth/mT5-question-generator")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
```
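The snippet above stops after loading the model; a hedged continuation is sketched below. The prompt layout (`answer: ... context: ...`) is an assumption for illustration only, since the training input format is not documented in this card.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model = MT5ForConditionalGeneration.from_pretrained("Parth/mT5-question-generator")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")

# Hypothetical prompt layout: the real training format is not documented here.
text = "answer: Paris context: Paris is the capital of France."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```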
{}
Parth/mT5-question-generator
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
from transformers import MT5ForConditionalGeneration, AutoTokenizer model = MT5ForConditionalGeneration.from_pretrained("Parth/mT5-question-generator") tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
[]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 37 ]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
null
'hello'
{}
Patrickdg/distilbert-consumer-complaints
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
'hello'
[]
[ "TAGS\n#region-us \n" ]
[ 5 ]
[ "TAGS\n#region-us \n" ]
text2text-generation
transformers
## An MT5ForConditionalGeneration trained on 3 tasks from the PAN Profiling Hate Speech Spreaders on Twitter dataset (ES): * topic attribution - topics were assigned with the BERTopic library using embeddings from the `Hate-speech-CNERG/dehatebert-mono-spanish` BERT model (train and test sets from the PAN task) * hate speech identification (train set from the PAN task) To generate the tone of a comment, use the prefix **hater classification:** (a usage sketch follows below).
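A hedged sketch of applying the prefix described above, assuming the repository id `PaulAdversarial/PAN_twitter_hate_speech_2021_ES_MT5` attached to this record; the example tweet is illustrative only.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_id = "PaulAdversarial/PAN_twitter_hate_speech_2021_ES_MT5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

# Prepend the task prefix from the card; the tweet text is only an example.
text = "hater classification: Este es un tuit de ejemplo."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```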
{}
PaulAdversarial/PAN_twitter_hate_speech_2021_ES_MT5
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
##An MT5ForConditionalGeneration trained on 3 tasks from PAN Profiling Hate Speech Spreaders on Twitter dataset (ES): * topic attribution - topics were assigned with BertTopic library using embeddings from 'Hate-speech-CNERG/dehatebert-mono-spanish' bert model (train and test sets from the PAN task) * hate speech identification (train set from the PAN task) in order to generate tone of comment use prefix hater classification:
[]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 37 ]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
## A T5ForConditionalGeneration trained on 3 tasks from the PAN Profiling Hate Speech Spreaders on Twitter dataset (EN): * author attribution (train and test sets from the PAN task) * topic attribution - topics were assigned with the BERTopic library using embeddings from the `cardiffnlp/bertweet-base-hate` RoBERTa model (train and test sets from the PAN task) * hate speech identification (train set from the PAN task) To generate the tone of a comment, use the prefix **hater classification:**
{}
PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
##A T5ForConditionalGeneration trained on 3 tasks from PAN Profiling Hate Speech Spreaders on Twitter dataset (EN): * author attribution (train and test sets from the PAN task) * topic attribution - topics were assigned with BertTopic library using embeddings from 'cardiffnlp/bertweet-base-hate' Roberta model (train and test sets from the PAN task) * hate speech identification (train set from the PAN task) in order to generate tone of comment use prefix hater classification:
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
A T5ForConditionalGeneration trained on 2 tasks from the PAN Profiling Hate Speech Spreaders on Twitter dataset (EN): * topic attribution - topics were assigned with the BERTopic library using embeddings from the `cardiffnlp/bertweet-base-hate` RoBERTa model (train and test sets from the PAN task) * hate speech identification (train set from the PAN task) To generate the tone of a comment, use the prefix **hater classification:**
{}
PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_ishatespeach
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
A T5ForConditionalGeneration trained on 2 tasks from PAN Profiling Hate Speech Spreaders on Twitter dataset (EN): * topic attribution - topics were assigned with BertTopic library using embeddings from 'cardiffnlp/bertweet-base-hate' Roberta model (train and test sets from the PAN task) * hate speech identification (train set from the PAN task) in order to generate tone of comment use prefix hater classification:
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
fill-mask
transformers
## XLM-R Longformer Model XLM-R Longformer is an XLM-R model that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus. The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respective language. The trained model came as a result of a master's thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual question-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r). Since both the XLM-R and Longformer models are large, it is recommended to run the models with NVIDIA Apex (16-bit precision), a large GPU, and several gradient accumulation steps. ## How to Use The model can be used as expected to fine-tune on a downstream task, for instance QA. ```python import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer MAX_SEQUENCE_LENGTH = 4096 MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096" tokenizer = AutoTokenizer.from_pretrained( MODEL_NAME_OR_PATH, max_length=MAX_SEQUENCE_LENGTH, padding="max_length", truncation=True, ) model = AutoModelForQuestionAnswering.from_pretrained( MODEL_NAME_OR_PATH, max_length=MAX_SEQUENCE_LENGTH, ) ``` ## Training Procedure The model has been trained on the WikiText-103 corpus, using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information. ```sh wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip unzip wikitext-103-raw-v1.zip export DATA_DIR=./wikitext-103-raw scripts/run_long_lm.py \ --model_name_or_path xlm-roberta-base \ --model_name xlm-roberta-to-longformer \ --output_dir ./output \ --logging_dir ./logs \ --val_file_path $DATA_DIR/wiki.valid.raw \ --train_file_path $DATA_DIR/wiki.train.raw \ --seed 42 \ --max_pos 4096 \ --adam_epsilon 1e-8 \ --warmup_steps 500 \ --learning_rate 3e-5 \ --weight_decay 0.01 \ --max_steps 6000 \ --evaluate_during_training \ --logging_steps 50 \ --eval_steps 50 \ --save_steps 6000 \ --max_grad_norm 1.0 \ --per_device_eval_batch_size 2 \ --per_device_train_batch_size 1 \ --gradient_accumulation_steps 64 \ --overwrite_output_dir \ --fp16 \ --do_train \ --do_eval ```
{"language": "multilingual", "license": "apache-2.0", "tags": ["longformer"], "datasets": ["wikitext"]}
Peltarion/xlm-roberta-longformer-base-4096
null
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "longformer", "multilingual", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "multilingual" ]
TAGS #transformers #pytorch #xlm-roberta #fill-mask #longformer #multilingual #dataset-wikitext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## XLM-R Longformer Model XLM-R Longformer is an XLM-R model that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer pre-training scheme on the English WikiText-103 corpus. The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respective language. The trained model came as a result of a master thesis project at Peltarion and was fine-tuned on multilingual question-answering tasks, with code available here. Since both the XLM-R and Longformer models are large, it is recommended to run the models with NVIDIA Apex (16-bit precision), a large GPU and several gradient accumulation steps. ## How to Use The model can be used as expected to fine-tune on a downstream task. For instance for QA. ## Training Procedure The model has been trained on the WikiText-103 corpus, using a 48GB GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full training script and Github repo for more information.
[ "## XLM-R Longformer Model \nXLM-R Longformer is a XLM-R model, that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer pre-training scheme on the English WikiText-103 corpus. \n \nThe reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respecitve language. The trained model came as a result of a master thesis project at Peltarion and was fine-tuned on multilingual quesion-answering tasks, with code available here. \n \nSince both XLM-R model and Longformer models are large models, it it recommended to run the models with NVIDIA Apex (16bit precision), large GPU and several gradient accumulation steps.", "## How to Use \nThe model can be used as expected to fine-tune on a downstream task. \nFor instance for QA.", "## Training Procedure \nThe model have been trained on the WikiText-103 corpus, using a 48GB GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full training script and Github repo for more information" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #longformer #multilingual #dataset-wikitext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## XLM-R Longformer Model \nXLM-R Longformer is a XLM-R model, that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer pre-training scheme on the English WikiText-103 corpus. \n \nThe reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respecitve language. The trained model came as a result of a master thesis project at Peltarion and was fine-tuned on multilingual quesion-answering tasks, with code available here. \n \nSince both XLM-R model and Longformer models are large models, it it recommended to run the models with NVIDIA Apex (16bit precision), large GPU and several gradient accumulation steps.", "## How to Use \nThe model can be used as expected to fine-tune on a downstream task. \nFor instance for QA.", "## Training Procedure \nThe model have been trained on the WikiText-103 corpus, using a 48GB GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full training script and Github repo for more information" ]
[ 55, 204, 27, 63 ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #longformer #multilingual #dataset-wikitext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n## XLM-R Longformer Model \nXLM-R Longformer is a XLM-R model, that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer pre-training scheme on the English WikiText-103 corpus. \n \nThe reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respecitve language. The trained model came as a result of a master thesis project at Peltarion and was fine-tuned on multilingual quesion-answering tasks, with code available here. \n \nSince both XLM-R model and Longformer models are large models, it it recommended to run the models with NVIDIA Apex (16bit precision), large GPU and several gradient accumulation steps.## How to Use \nThe model can be used as expected to fine-tune on a downstream task. \nFor instance for QA.## Training Procedure \nThe model have been trained on the WikiText-103 corpus, using a 48GB GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full training script and Github repo for more information" ]
text-generation
transformers
# Rick and Morty DialoGPT Model
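The card itself contains no usage snippet, so below is a hedged, minimal single-turn chat sketch following the standard DialoGPT recipe; the prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Pensador777critico/DialoGPT-small-RickandMorty"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Encode the user turn, append the EOS token, and let the model generate the reply.
user_input = "Morty, where are we going?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
with torch.no_grad():
    reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```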
{"tags": ["conversational"]}
Pensador777critico/DialoGPT-small-RickandMorty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick and Morty DialoGPT Model
[ "# Rick and Morty DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick and Morty DialoGPT Model" ]
[ 39, 9 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick and Morty DialoGPT Model" ]
automatic-speech-recognition
transformers
# Disclaimer This model was trained on Common Voice 6; if you need a catalan model for ASR, I recommend checking [wav2vec2-xls-r-1b-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-1b-ca-lm), which is a 1b model with a LM on top trained on CV8+ with much better performance, or [wav2vec2-xls-r-300m-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm), which has the same size (300m) as this model but was trained on CV8+ and with the same LM. # Wav2Vec2-Large-XLSR-53-ca Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ca", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the catalan test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ca", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets.
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) import jiwer # Chunk WER computation due to memory issues, taken from https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000))) ``` **Test Result**: 8.11 % ## Training The Common Voice `train` and `validation` datasets were used for training. At the second epoch training was halted due to a memory issue, and was continued with a lower batch size, but gradient accumulation steps were scaled to keep the effective batch size at 32 during all of training. Then the model was trained for an additional 10 epochs where half the male samples were pitched up. The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). Slight modifications were made in order to speed up the ordering by length during training, which can be found [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6). Another version trained for catalan can be found [here](https://huggingface.co/ccoreilly/wav2vec2-large-xlsr-catala), which may be better than this one since it was trained with extra data and for a longer time. However, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset.
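The pitch-shift augmentation mentioned above is not shown in the card; a rough sketch of what it could look like follows. The exact semitone shift and the selection of male samples are assumptions, not taken from the original training script.

```python
import librosa

def pitch_up(speech_array, sampling_rate=16_000, n_steps=2):
    # Shift the waveform up by `n_steps` semitones; during training this kind of
    # augmentation was applied to roughly half of the male samples.
    return librosa.effects.pitch_shift(speech_array, sr=sampling_rate, n_steps=n_steps)

# Example: augment one already-resampled utterance from the dataset.
# batch["speech"] = pitch_up(batch["speech"])
```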
{"language": "ca", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Catalan XLSR Wav2Vec Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ca", "type": "common_voice", "args": "ca"}, "metrics": [{"type": "wer", "value": 8.11, "name": "Test WER"}]}]}]}
PereLluis13/Wav2Vec2-Large-XLSR-53-catalan
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ca", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ca" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ca #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Disclaimer This model was trained on Common Voice 6; if you need a catalan model for ASR, I recommend checking wav2vec2-xls-r-1b-ca-lm, which is a 1b model with a LM on top trained on CV8+ with much better performance, or wav2vec2-xls-r-300m-ca-lm, which has the same size (300m) as this model but was trained on CV8+ and with the same LM. # Wav2Vec2-Large-XLSR-53-ca Fine-tuned facebook/wav2vec2-large-xlsr-53 on catalan using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the catalan test data of Common Voice. Test Result: 8.11 % ## Training The Common Voice 'train' and 'validation' datasets were used for training. At the second epoch training was halted due to a memory issue, and was continued with a lower batch size, but gradient accumulation steps were scaled to keep the effective batch size at 32 during all of training. Then the model was trained for an additional 10 epochs where half the male samples were pitched up. The script used for training can be found here. Slight modifications were made in order to speed up the ordering by length during training, which can be found here. Another version trained for catalan can be found here, which may be better than this one since it was trained with extra data and for a longer time. However, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset.
[ "# Disclaimer\n\nThis model was trained on Common Voice 6, if you need a catalan model for ASR, I recommend checking wav2vec2-xls-r-1b-ca-lm which is a 1b model with a LM on top trained on CV8+ with much better performance or wav2vec2-xls-r-300m-ca-lm which has the same size (300m) as this model but trained on CV8+ and the same LM.", "# Wav2Vec2-Large-XLSR-53-ca \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on catalan using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the catalan test data of Common Voice.\n\n\n\nTest Result: 8.11 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training. At the second epoch training was halted due to a memory issue, and was continued with lower batch size, but acc. gradient steps were scaled to keep it at 32 batch size during all training. Then the model was trained for an additional 10 epochs where half the male samples were pitched up.\n\nThe script used for training can be found here. Slight modifications were done in order to speed up the ordering by length during training, which can be found here. Another version trained for catalan can be found here, which may be better than this one since it was trained with extra data and for longer time. Whoever, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ca #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Disclaimer\n\nThis model was trained on Common Voice 6, if you need a catalan model for ASR, I recommend checking wav2vec2-xls-r-1b-ca-lm which is a 1b model with a LM on top trained on CV8+ with much better performance or wav2vec2-xls-r-300m-ca-lm which has the same size (300m) as this model but trained on CV8+ and the same LM.", "# Wav2Vec2-Large-XLSR-53-ca \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on catalan using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the catalan test data of Common Voice.\n\n\n\nTest Result: 8.11 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training. At the second epoch training was halted due to a memory issue, and was continued with lower batch size, but acc. gradient steps were scaled to keep it at 32 batch size during all training. Then the model was trained for an additional 10 epochs where half the male samples were pitched up.\n\nThe script used for training can be found here. Slight modifications were done in order to speed up the ordering by length during training, which can be found here. Another version trained for catalan can be found here, which may be better than this one since it was trained with extra data and for longer time. Whoever, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset." ]
[ 66, 108, 61, 18, 26, 174 ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ca #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# Disclaimer\n\nThis model was trained on Common Voice 6, if you need a catalan model for ASR, I recommend checking wav2vec2-xls-r-1b-ca-lm which is a 1b model with a LM on top trained on CV8+ with much better performance or wav2vec2-xls-r-300m-ca-lm which has the same size (300m) as this model but trained on CV8+ and the same LM.# Wav2Vec2-Large-XLSR-53-ca \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on catalan using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.## Usage\n\nThe model can be used directly (without a language model) as follows:## Evaluation\n\nThe model can be evaluated as follows on the catalan test data of Common Voice.\n\n\n\nTest Result: 8.11 %## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training. At the second epoch training was halted due to a memory issue, and was continued with lower batch size, but acc. gradient steps were scaled to keep it at 32 batch size during all training. Then the model was trained for an additional 10 epochs where half the male samples were pitched up.\n\nThe script used for training can be found here. Slight modifications were done in order to speed up the ordering by length during training, which can be found here. Another version trained for catalan can be found here, which may be better than this one since it was trained with extra data and for longer time. Whoever, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-greek Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10) datasets. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "el", split="test") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the greek test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "el", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 20.89 % ## Training The Common Voice `train`, `validation`, and CSS10 datasets were used for training, with CSS10 added as an `extra` split to the dataset.
The sampling rate and format of the CSS10 files are different, hence the function `speech_file_to_array_fn` was changed to: ``` import librosa import soundfile as sf def speech_file_to_array_fn(batch): try: speech_array, sampling_rate = sf.read(batch["path"] + ".wav") except: speech_array, sampling_rate = librosa.load(batch["path"], sr = 16000, res_type='zero_order_hold') sf.write(batch["path"] + ".wav", speech_array, sampling_rate, subtype='PCM_24') batch["speech"] = speech_array batch["sampling_rate"] = sampling_rate batch["target_text"] = batch["text"] return batch ``` As suggested by [Florian Zimmermeister](https://github.com/flozi00). The script used for training can be found in [run_common_voice.py](examples/research_projects/wav2vec2/run_common_voice.py), still pending a PR. The only changes are to `speech_file_to_array_fn`. Batch size was kept at 32 (using `gradient_accumulation_steps`) using one of the [OVH](https://www.ovh.com/) machines, with a V100 GPU (thank you very much [OVH](https://www.ovh.com/)). The model trained for 40 epochs, the first 20 with the `train+validation` splits, and then the `extra` split was added with the data from CSS10 at the 20th epoch.
{"language": "el", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "CSS10"], "metrics": ["wer"], "model-index": [{"name": "Greek XLSR Wav2Vec2 Large 53 - CV + CSS10", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice el", "type": "common_voice", "args": "el"}, "metrics": [{"type": "wer", "value": 20.89, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-large-xlsr-53-greek
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "el", "dataset:common_voice", "dataset:CSS10", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "el" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #el #dataset-common_voice #dataset-CSS10 #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-greek Fine-tuned facebook/wav2vec2-large-xlsr-53 on greek using the Common Voice and CSS10 datasets. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the greek test data of Common Voice. Test Result: 20.89 % ## Training The Common Voice 'train', 'validation', and CSS10 datasets were used for training, with CSS10 added as an 'extra' split to the dataset. The sampling rate and format of the CSS10 files are different, hence the function 'speech_file_to_array_fn' was changed to: As suggested by Florian Zimmermeister. The script used for training can be found in run_common_voice.py, still pending a PR. The only changes are to 'speech_file_to_array_fn'. Batch size was kept at 32 (using 'gradient_accumulation_steps') using one of the OVH machines, with a V100 GPU (thank you very much OVH). The model trained for 40 epochs, the first 20 with the 'train+validation' splits, and then the 'extra' split was added with the data from CSS10 at the 20th epoch.
[ "# Wav2Vec2-Large-XLSR-53-greek\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on greek using the Common Voice and CSS10 datasets.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the greek test data of Common Voice. \n\n\n\nTest Result: 20.89 %", "## Training\n\nThe Common Voice 'train', 'validation', and CSS10 datasets were used for training, added as 'extra' split to the dataset. The sampling rate and format of the CSS10 files is different, hence the function 'speech_file_to_array_fn' was changed to:\n \n\nAs suggested by Florian Zimmermeister.\n\nThe script used for training can be found in run_common_voice.py, still pending of PR. The only changes are to 'speech_file_to_array_fn'. Batch size was kept at 32 (using 'gradient_accumulation_steps') using one of the OVH machines, with a V100 GPU (thank you very much OVH). The model trained for 40 epochs, the first 20 with the 'train+validation' splits, and then 'extra' split was added with the data from CSS10 at the 20th epoch." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #el #dataset-common_voice #dataset-CSS10 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-greek\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on greek using the Common Voice and CSS10 datasets.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the greek test data of Common Voice. \n\n\n\nTest Result: 20.89 %", "## Training\n\nThe Common Voice 'train', 'validation', and CSS10 datasets were used for training, added as 'extra' split to the dataset. The sampling rate and format of the CSS10 files is different, hence the function 'speech_file_to_array_fn' was changed to:\n \n\nAs suggested by Florian Zimmermeister.\n\nThe script used for training can be found in run_common_voice.py, still pending of PR. The only changes are to 'speech_file_to_array_fn'. Batch size was kept at 32 (using 'gradient_accumulation_steps') using one of the OVH machines, with a V100 GPU (thank you very much OVH). The model trained for 40 epochs, the first 20 with the 'train+validation' splits, and then 'extra' split was added with the data from CSS10 at the 20th epoch." ]
[ 73, 66, 18, 26, 201 ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #el #dataset-common_voice #dataset-CSS10 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# Wav2Vec2-Large-XLSR-53-greek\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on greek using the Common Voice and CSS10 datasets.\nWhen using this model, make sure that your speech input is sampled at 16kHz.## Usage\n\nThe model can be used directly (without a language model) as follows:## Evaluation\n\nThe model can be evaluated as follows on the greek test data of Common Voice. \n\n\n\nTest Result: 20.89 %## Training\n\nThe Common Voice 'train', 'validation', and CSS10 datasets were used for training, added as 'extra' split to the dataset. The sampling rate and format of the CSS10 files is different, hence the function 'speech_file_to_array_fn' was changed to:\n \n\nAs suggested by Florian Zimmermeister.\n\nThe script used for training can be found in run_common_voice.py, still pending of PR. The only changes are to 'speech_file_to_array_fn'. Batch size was kept at 32 (using 'gradient_accumulation_steps') using one of the OVH machines, with a V100 GPU (thank you very much OVH). The model trained for 40 epochs, the first 20 with the 'train+validation' splits, and then 'extra' split was added with the data from CSS10 at the 20th epoch." ]
automatic-speech-recognition
transformers
# wav2vec2-xls-r-1b-ca-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As with any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not in the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training results Check the Tensorboard tab for the training profile and evaluation results during training. The model was evaluated on the test splits for each of the datasets used during training. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
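A minimal, hedged inference sketch with the transformers ASR pipeline (not taken from the original training setup): the audio path is a placeholder, input audio should be 16 kHz, and LM-boosted decoding additionally requires pyctcdecode and kenlm to be installed.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="PereLluis13/wav2vec2-xls-r-1b-ca-lm",
)
# Transcribe a local 16 kHz recording (placeholder path).
print(asr("some_catalan_clip_16khz.wav"))
```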
{"language": ["ca"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "collectivat/tv3_parla", "projecte-aina/parlament_parla"], "model-index": [{"name": "wav2vec2-xls-r-1b-ca-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_8_0 ca", "type": "mozilla-foundation/common_voice_8_0", "args": "ca"}, "metrics": [{"type": "wer", "value": 6.072266995813065, "name": "Test WER"}, {"type": "cer", "value": 1.9180697705166525, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "projecte-aina/parlament_parla ca", "type": "projecte-aina/parlament_parla", "args": "clean"}, "metrics": [{"type": "wer", "value": 5.139820371024042, "name": "Test WER"}, {"type": "cer", "value": 2.0163620128164723, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "collectivat/tv3_parla ca", "type": "collectivat/tv3_parla", "args": "ca"}, "metrics": [{"type": "wer", "value": 11.207991684952074, "name": "Test WER"}, {"type": "cer", "value": 7.32119307305963, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Catalan Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 22.870153690468662, "name": "Test WER"}, {"type": "cer", "value": 13.59039190897598, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 15.41, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-xls-r-1b-ca-lm
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ca" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-xls-r-1b-ca-lm This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the tv3_parla and parlament_parla datasets. ## Model description Please check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here. ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible.
[ "# wav2vec2-xls-r-1b-ca-lm\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the tv3_parla and parlament_parla datasets.", "## Model description\n\nPlease check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model.", "## Intended uses & limitations\n\nAs any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language.", "## Training and evaluation data", "## Training procedure\n\nThe data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here.", "### Training results\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0", "# Thanks\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-xls-r-1b-ca-lm\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the tv3_parla and parlament_parla datasets.", "## Model description\n\nPlease check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model.", "## Intended uses & limitations\n\nAs any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language.", "## Training and evaluation data", "## Training procedure\n\nThe data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here.", "### Training results\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0", "# Thanks\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
[ 151, 79, 38, 64, 6, 47, 40, 135, 47, 30 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# wav2vec2-xls-r-1b-ca-lm\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the tv3_parla and parlament_parla datasets.## Model description\n\nPlease check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model.## Intended uses & limitations\n\nAs any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language.## Training and evaluation data## Training procedure\n\nThe data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here.### Training results\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0# Thanks\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
automatic-speech-recognition
transformers
# wav2vec2-xls-r-1b-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As with any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not in the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training results Check the Tensorboard tab for the training profile and evaluation results during training. The model was evaluated on the test splits for each of the datasets used during training. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
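A hedged illustration of the character filtering described above; the exact character set and regex used during training are assumptions (the number-verbalization step relies on the linked external script and is not reproduced here).

```python
import re

# Assumed approximation of the catalan character set kept during preprocessing.
CATALAN_CHARS = "a-zàçèéíïòóúü·'\\- "

def clean_sentence(sentence: str) -> str:
    # Lowercase and drop anything outside the (assumed) catalan alphabet.
    return re.sub(f"[^{CATALAN_CHARS}]", "", sentence.lower())

print(clean_sentence("Bon dia, què tal?"))  # -> "bon dia què tal"
```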
{"language": ["ca"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "collectivat/tv3_parla", "projecte-aina/parlament_parla"], "model-index": [{"name": "wav2vec2-xls-r-1b-ca", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_8_0 ca", "type": "mozilla-foundation/common_voice_8_0", "args": "ca"}, "metrics": [{"type": "wer", "value": 11.030639657300515, "name": "Test WER"}, {"type": "cer", "value": 2.8405630530040633, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "projecte-aina/parlament_parla ca", "type": "projecte-aina/parlament_parla", "args": "clean"}, "metrics": [{"type": "wer", "value": 6.483115660665961, "name": "Test WER"}, {"type": "cer", "value": 2.0212863746191827, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "collectivat/tv3_parla ca", "type": "collectivat/tv3_parla", "args": "ca"}, "metrics": [{"type": "wer", "value": 17.917773414943987, "name": "Test WER"}, {"type": "cer", "value": 8.872589572206396, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Catalan Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 27.126683954209096, "name": "Test WER"}, {"type": "cer", "value": 14.213308815078726, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 18.7, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-xls-r-1b-ca
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ca" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-xls-r-1b-ca This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the tv3_parla and parlament_parla datasets. ## Model description Please check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here. ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible.
[ "# wav2vec2-xls-r-1b-ca\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the tv3_parla and parlament_parla datasets.", "## Model description\n\nPlease check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model.", "## Intended uses & limitations\n\nAs any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language.", "## Training and evaluation data", "## Training procedure\n\nThe data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here.", "### Training results\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0", "# Thanks\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-xls-r-1b-ca\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the tv3_parla and parlament_parla datasets.", "## Model description\n\nPlease check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model.", "## Intended uses & limitations\n\nAs any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language.", "## Training and evaluation data", "## Training procedure\n\nThe data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here.", "### Training results\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0", "# Thanks\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
[ 151, 76, 38, 64, 6, 47, 40, 135, 47, 30 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# wav2vec2-xls-r-1b-ca\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the tv3_parla and parlament_parla datasets.## Model description\n\nPlease check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model.## Intended uses & limitations\n\nAs any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language.## Training and evaluation data## Training procedure\n\nThe data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here.### Training results\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0# Thanks\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
automatic-speech-recognition
transformers
# wav2vec2-xls-r-300m-ca-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. It achieves the following results on the evaluation set (for the three datasets and without the LM): - Loss: 0.2472 - Wer: 0.1499 ## Model description Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data More information needed ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 18.0 - mixed_precision_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. 
| Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.2099 | 0.09 | 500 | 3.4125 | 1.0 | | 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 | | 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 | | 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 | | 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 | | 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 | | 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 | | 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 | | 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 | | 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 | | 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 | | 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 | | 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 | | 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 | | 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 | | 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 | | 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 | | 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 | | 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 | | 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 | | 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 | | 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 | | 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 | | 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 | | 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 | | 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 | | 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 | | 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 | | 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 | | 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 | | 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 | | 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 | | 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 | | 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 | | 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 | | 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 | | 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 | | 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 | | 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 | | 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 | | 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 | | 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 | | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 | | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 | | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 | | 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 | | 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 | | 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 | | 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 | | 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 | | 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 | | 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 | | 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 | | 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 | | 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 | | 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 | | 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 | | 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 | | 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 | | 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 | | 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 | | 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
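As a usage illustration (not part of the original card), here is a minimal transcription sketch with the standard `transformers` ASR pipeline. The audio path is a placeholder for any 16 kHz Catalan recording, and decoding a file path requires ffmpeg; the bundled language model is only used if `pyctcdecode`/`kenlm` are installed:

```
from transformers import pipeline

# Load the fine-tuned Catalan checkpoint; the pipeline picks up the LM decoder
# automatically when pyctcdecode and kenlm are available.
asr = pipeline("automatic-speech-recognition", model="PereLluis13/wav2vec2-xls-r-300m-ca-lm")

# "audio.wav" is a placeholder for a 16 kHz Catalan recording
print(asr("audio.wav")["text"])
```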
{"language": ["ca"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "collectivat/tv3_parla", "projecte-aina/parlament_parla"], "model-index": [{"name": "wav2vec2-xls-r-300m-ca-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_8_0 ca", "type": "mozilla-foundation/common_voice_8_0", "args": "ca"}, "metrics": [{"type": "wer", "value": 6.771703090587865, "name": "Test WER"}, {"type": "cer", "value": 2.100777784371229, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "projecte-aina/parlament_parla ca", "type": "projecte-aina/parlament_parla", "args": "clean"}, "metrics": [{"type": "wer", "value": 5.565360630662431, "name": "Test WER"}, {"type": "cer", "value": 1.8594390167034354, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "collectivat/tv3_parla ca", "type": "collectivat/tv3_parla", "args": "ca"}, "metrics": [{"type": "wer", "value": 13.53312545713516, "name": "Test WER"}, {"type": "cer", "value": 8.684635913340555, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Catalan Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 26.04515843400164, "name": "Test WER"}, {"type": "cer", "value": 15.056890012642224, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 17.68, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-xls-r-300m-ca-lm
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ca" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-xls-r-300m-ca-lm ========================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - CA, the tv3\_parla and parlament\_parla datasets. It achieves the following results on the evaluation set (for the three datasets and without the LM): * Loss: 0.2472 * Wer: 0.1499 Model description ----------------- Please check the original facebook/wav2vec2-xls-r-300m Model card. This is just a finetuned version of that model. Intended uses & limitations --------------------------- As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. Training and evaluation data ---------------------------- More information needed Training procedure ------------------ The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here. ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 18.0 * mixed\_precision\_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0 Thanks ====== Want to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible.
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 18.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0\n\n\nThanks\n======\n\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 18.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0\n\n\nThanks\n======\n\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
[ 151, 155, 40, 82 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 18.0\n* mixed\\_precision\\_training: Native AMP### Training results\n\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0\n\n\nThanks\n======\n\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
automatic-speech-recognition
transformers
# wav2vec2-xls-r-300m-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. It achieves the following results on the evaluation set (for the three datasets): - Loss: 0.2472 - Wer: 0.1499 ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data More information needed ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 18.0 - mixed_precision_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. 
| Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.2099 | 0.09 | 500 | 3.4125 | 1.0 | | 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 | | 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 | | 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 | | 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 | | 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 | | 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 | | 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 | | 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 | | 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 | | 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 | | 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 | | 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 | | 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 | | 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 | | 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 | | 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 | | 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 | | 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 | | 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 | | 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 | | 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 | | 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 | | 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 | | 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 | | 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 | | 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 | | 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 | | 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 | | 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 | | 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 | | 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 | | 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 | | 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 | | 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 | | 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 | | 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 | | 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 | | 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 | | 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 | | 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 | | 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 | | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 | | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 | | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 | | 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 | | 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 | | 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 | | 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 | | 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 | | 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 | | 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 | | 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 | | 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 | | 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 | | 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 | | 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 | | 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 | | 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 | | 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 | | 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 | | 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
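As an illustration (not in the original card), a minimal greedy-decoding sketch with the processor/model classes; it assumes `torchaudio` is available and uses a placeholder path for a local recording:

```
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-xls-r-300m-ca")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-xls-r-300m-ca")

# "audio.wav" is a placeholder; take the first channel and resample to the
# 16 kHz rate the model expects.
speech, sr = torchaudio.load("audio.wav")
speech = torchaudio.functional.resample(speech[0], sr, 16000)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```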
{"language": ["ca"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "collectivat/tv3_parla", "projecte-aina/parlament_parla"], "model-index": [{"name": "wav2vec2-xls-r-300m-ca", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_8_0 ca", "type": "mozilla-foundation/common_voice_8_0", "args": "ca"}, "metrics": [{"type": "wer", "value": 13.170091241317552, "name": "Test WER"}, {"type": "cer", "value": 3.356726205534543, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "projecte-aina/parlament_parla ca", "type": "projecte-aina/parlament_parla", "args": "clean"}, "metrics": [{"type": "wer", "value": 8.048005647723262, "name": "Test WER"}, {"type": "cer", "value": 2.240912911020065, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "collectivat/tv3_parla ca", "type": "collectivat/tv3_parla", "args": "ca"}, "metrics": [{"type": "wer", "value": 23.320629787889285, "name": "Test WER"}, {"type": "cer", "value": 10.43921620208999, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "speech-recognition-community-v2/dev_data ca", "type": "speech-recognition-community-v2/dev_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 31.99671115046487, "name": "Test WER"}, {"type": "cer", "value": 15.820020687277324, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 22.04, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-xls-r-300m-ca
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ca" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-xls-r-300m-ca ====================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - CA, the tv3\_parla and parlament\_parla datasets. It achieves the following results on the evaluation set (for the three datasets): * Loss: 0.2472 * Wer: 0.1499 Model description ----------------- Please check the original facebook/wav2vec2-xls-r-1b Model card. This is just a finetuned version of that model. Intended uses & limitations --------------------------- As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. Training and evaluation data ---------------------------- More information needed Training procedure ------------------ The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by @ccoreilly, which can be found on the text/ folder or here. ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 18.0 * mixed\_precision\_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0 Thanks ====== Want to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible.
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 18.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0\n\n\nThanks\n======\n\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 18.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0\n\n\nThanks\n======\n\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
[ 151, 155, 40, 82 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #collectivat/tv3_parla #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #projecte-aina/parlament_parla #robust-speech-event #ca #dataset-mozilla-foundation/common_voice_8_0 #dataset-collectivat/tv3_parla #dataset-projecte-aina/parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 18.0\n* mixed\\_precision\\_training: Native AMP### Training results\n\n\nCheck the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training.### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0\n\n\nThanks\n======\n\n\nWant to thank both @ccoreilly and @gullabi who have contributed with their own resources and knowledge into making this model possible." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium This model is a fine-tuned version of [prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6025 - Rouge1: 81.6007 - Rouge2: 75.1196 - Rougel: 81.4213 - Rougelsum: 81.4956 - Gen Len: 32.4286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 63 | 0.5775 | 65.0748 | 58.8985 | 64.5731 | 63.6249 | 19.0 | | No log | 2.0 | 126 | 0.5806 | 74.3055 | 69.2025 | 73.4922 | 73.0941 | 17.8571 | | No log | 3.0 | 189 | 0.6025 | 71.3808 | 66.0359 | 70.1235 | 69.4614 | 18.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
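As a usage illustration (not part of the auto-generated card), a minimal sketch that treats the checkpoint like its `prithivida/parrot_paraphraser_on_T5` base, i.e. as a plain text2text model. The `paraphrase:` prefix and the generation settings are assumptions, since the card does not document the prompt format:

```
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="Peter/medium")

# Illustrative input; the exact prompt format depends on how the training data was built.
out = paraphraser("paraphrase: The weather is lovely today.", max_length=64, num_beams=4)
print(out[0]["generated_text"])
```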
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "medium", "results": []}]}
Peter/medium
null
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
medium ====== This model is a fine-tuned version of prithivida/parrot\_paraphraser\_on\_T5 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.6025 * Rouge1: 81.6007 * Rouge2: 75.1196 * Rougel: 81.4213 * Rougelsum: 81.4956 * Gen Len: 32.4286 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.1+cu113 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ 43, 103, 5, 44 ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
How to use this classifier: ``` from transformers import pipeline pipe = pipeline("text-classification", model="Peterard/distilbert_bug_classifier") pipe("The app crashed when I opened it this morning. Can you fix this please?") # [{'label': 'bug', 'score': 0.9042391180992126}] pipe("Please add a like button!") # [{'label': 'no_bug', 'score': 0.9977496266365051}] ``` N.B. The label will change depending on which is the likelier class
{"language": ["en"], "tags": ["text-classification"], "widget": [{"text": "The app crashed when I opened it this morning. Can you fix this please?", "example_title": "Likely bug report"}, {"text": "Please add a like button!", "example_title": "Unlikely bug report"}]}
Peterard/distilbert_bug_classifier
null
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us
How to use this classifier: N.B. The label will change depending on which is the likelier class
[]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 32 ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
How to use this classifier: ``` from transformers import pipeline pipe = pipeline("text-classification", model="Peterard/distilbert_feature_classifier") pipe("Please add a like button!") # [{'label': 'feature_request', 'score': 0.8930749893188477}] pipe("The app crashed when I opened it this morning. Can you fix this please?") #[{'label': 'no_feature_request', 'score': 0.9971746206283569}] ``` N.B. The label will change depending on which is the likelier class
{"language": ["en"], "tags": ["text-classification"], "widget": [{"text": "Please add a like button!", "example_title": "Likely feature request"}, {"text": "The app crashed when I opened it this morning. Can you fix this please?", "example_title": "Unlikely feature request"}]}
Peterard/distilbert_feature_classifier
null
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us
How to use this classifier: N.B. The label will change depending on which is the likelier class
[]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 32 ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
An attempt at guided text generation to replace GPT-3 for [This SCP Does Not Exist](https://www.thisscpdoesnotexist.ml). Work in progress. Finetuned on a dataset of 1700 automatically generated samples from the [official SCP wiki](https://scp-wiki.wikidot.com/). Example input: ```Prompt: SCP-9741 is a pair of jeans that looks really cool ### Generation: Item #: SCP-9741\nObject Class: Safe\nSpecial Containment Procedures:``` # Acknowledgment This work was made possible thanks to the TPU Research Cloud program by Google
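As an illustration (not from the original card), a minimal generation sketch that follows the prompt format above; it assumes enough memory for the 6B checkpoint, and the sampling settings are illustrative only:

```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PhilSad/GPT-J6B-Guided-SCP")
model = AutoModelForCausalLM.from_pretrained("PhilSad/GPT-J6B-Guided-SCP")

# Prompt format taken from the example input in the card
prompt = "Prompt: SCP-9741 is a pair of jeans that looks really cool ### Generation:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```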
{}
PhilSad/GPT-J6B-Guided-SCP
null
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gptj #text-generation #autotrain_compatible #endpoints_compatible #region-us
Attempt of guided text generation to replace GPT-3 for :This SCP Does Not Exist Work in Porgress Finetuned on a dataset of 1700 automatically generated samples from the official SCP wiki Exemple input : # Acknowledgment This work was made possible thanks to the TPU Research Cloud program by Google
[ "# Acknowledgment\nThis work was made possible thanks to the TPU Research Cloud program by Google" ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #autotrain_compatible #endpoints_compatible #region-us \n", "# Acknowledgment\nThis work was made possible thanks to the TPU Research Cloud program by Google" ]
[ 30, 22 ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #autotrain_compatible #endpoints_compatible #region-us \n# Acknowledgment\nThis work was made possible thanks to the TPU Research Cloud program by Google" ]
text-generation
transformers
GPT-J 6B finetuned on SCP articles. Very experimental.
{}
PhilSad/GPTJ2B-SCP
null
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gptj #text-generation #autotrain_compatible #endpoints_compatible #region-us
GPT J 6B finetuned on SCP articles Very experimental
[]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 30 ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output_gptneo125-2 This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
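As a usage illustration (not part of the auto-generated card), a minimal text-generation sketch that uses the checkpoint like any causal LM; the SCP-style prompt and sampling settings are illustrative:

```
from transformers import pipeline

generator = pipeline("text-generation", model="PhilSad/gpt-scp-neo-125M")

# Illustrative SCP-style prompt
out = generator("Item #: SCP-", max_new_tokens=100, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```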
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "output_gptneo125-2", "results": []}]}
PhilSad/gpt-scp-neo-125M
null
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #gpt_neo #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# output_gptneo125-2 This model is a fine-tuned version of EleutherAI/gpt-neo-125M on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
[ "# output_gptneo125-2\n\nThis model is a fine-tuned version of EleutherAI/gpt-neo-125M on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- num_devices: 8\n- total_train_batch_size: 64\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt_neo #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# output_gptneo125-2\n\nThis model is a fine-tuned version of EleutherAI/gpt-neo-125M on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- num_devices: 8\n- total_train_batch_size: 64\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ 48, 37, 7, 9, 9, 4, 130, 5, 47 ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt_neo #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# output_gptneo125-2\n\nThis model is a fine-tuned version of EleutherAI/gpt-neo-125M on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- num_devices: 8\n- total_train_batch_size: 64\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
text-generation
transformers
# Traveller DiabloGPT Model
{"tags": ["conversational"]}
PhilipTheGreat/DiabloGPT-small-Traveller
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
#Traveller DiabloGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
[ 43 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
null
transformers
### **GPT-Macbeth** A custom finetune of GPT-2 trained on a custom dataset of Victorian literature ## Information The goal of this finetune is to output high-quality Victorian literature, while being customizable with Author's Note and being light to run (aka not being a GPT-Neo or GPT-Jax finetune, for now at least). ## Author's Note Author's Note was added manually, so please appreciate it. :) The format of it is [ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ] Some words will work well, some won't. Please make sure to have spaces before each ][. Most popular Victorian authors should work, but keep in mind that some authors (e.g. Mark Twain) will result in somewhat weird behavior due to a quirk in the dataset that will be addressed in the next version of the finetune. When it comes to the genres, "novel", "fiction", "horror" and "romance" work best, but from playing around with it, I've noticed that most other not-too-specific genres work pretty well too. The tags are a bit complicated. Adding "normal" will result in a story without anything special (like no magic or fantasy element) and tends to be pretty slow-paced. Using "real-life" will push the AI towards a historical/biographical path. Almost all tags should work. Using "man" or "woman" is supposed to semi-determine what gender the main character is, but it heavily depends on the chosen author. ## History Version 0 - This was the first test version of the finetune, trained on GPT-2-small and with a really small dataset. The name was GPT-Kelini before it was renamed to GPT-Macbeth in V1. Version 1 - The current version of the finetune. Trained on GPT-2-medium with a much, much bigger dataset compared to V0. Supports Author's Note. ### Notes Please use a very low temperature/randomness when using it, if you want to get anything out of it. Pumping the repetition penalty up helps a lot too. The model was specifically converted to PyTorch so that most front-end GUIs should run it. It has only been tested on KoboldAI, but should theoretically work on others too. For some odd reason, my finetune is capable of writing Victorian NSFW content, if used the right way. No NSFW was in the dataset and, considering the size of the model, it's really odd to see it do so. Perhaps the countless romantic novels in the dataset had something naughty in them, but I highly doubt it. You may sometimes get Roman numerals on random occasions; this shouldn't happen often, but if it does, it's again something that will be (manually, unfortunately) addressed in the next version of the finetune. If you are wondering why I renamed my finetune to Macbeth, there are a few reasons: first, it sounds much better and smoother than Kelini; second, it's a play by Shakespeare that closely matches the writing style of some of the authors in my dataset; and third, the most important reason, it was mentioned in Hamilton, so yes, my love of Hamilton is bleeding everywhere and yes, the next version of the dataset will try to have a Hamilton easter egg featuring the Author's Note. ### Credits I want to thank HuggingFace for their tokenizer and everything they've done to make everything easier. Then there's OpenAI for making GPT-2. I also want to thank the most active people on the AIM Discord server in the community-projects channel. Thanks to Bran for finding a way to convert checkpoints to a PyTorch model, thanks to Mr. Seeker and Aedial for helping me clean the dataset, and to *finetune* from the NovelAI team for perhaps making my finetune's output much better quality by telling me about the magic of the <\|endoftext\|> token. P.S. If you happen to use it in something commercial or in an online demo or in any other way that is not for personal use, a credit will be greatly appreciated (and if you do something exciting with it, make sure to let me know, I'd be more than happy to see it being used by someone!).
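As an illustration (not from the original card), a minimal sketch of prompting with the Author's Note format described above, using the low temperature and raised repetition penalty the card recommends; it assumes the repo loads through the standard GPT-2 causal-LM classes, and the exact values are illustrative:

```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Philipuss/GPT-Macbeth")
model = AutoModelForCausalLM.from_pretrained("Philipuss/GPT-Macbeth")

# Author's Note format from the card, followed by the start of a story
prompt = (
    "[ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]\n"
    "The fog rolled over the moor as"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.4,         # "very low temperature/randomness", per the card
    repetition_penalty=1.3,  # "pumping the repetition penalty up helps a lot"
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```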
{}
Philipuss/GPT-Macbeth
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #gpt2 #endpoints_compatible #text-generation-inference #region-us
### GPT-Macbeth A custom finetune of GPT-2 trained on a custom dataset of victorian literature ## Information The goal of this finetune is to output high-quality victorian literature, while being customizable with Author's Note and being light to run (aka not being a GPT-Neo or GPT-Jax finetune, for now at least). ## Authors Note Author's Note was added manually, so please appreciate it. :) The format of it is [ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ] Some words will work well, some won't. Please make sure to have spaces before each ][. Most popular victorian authors should work, but keep in mind that some authors (e.g. Mark Twain) will result in a somewhat weird behavior due to a quirk in the dataset that will be addressed in the next version of the finetune. When it comes to the genres, "novel", "fiction", "horror" and "romance" work best, but from playing around with it, I've noticed that most other not too specific genres work pretty well too. The tags are a bit complicated. Adding "normal" will result in a story without anything special (like no magic or fantasy element) and tends to be pretty low-pace. Using "real-life" will push the AI towards a historical/biographical path. Almost all tags should work. Using "man" or "woman" is supposed to semi-determine what gender the main character is, but it heavily depends on the chosen author. ## History Version 0 - This was the first test version of the finetune, trained on GPT-2-small and with a really small dataset. The name was GPT-Kelini before it was renamed to GPT-Macbeth in V1. Version 1 - The current version of the finetune. Trained on GPT-2-medium with a much, much bigger dataset compared to V0. Supports Author's Note ### Notes Please use a very low temperature/randomness when using it, if you want to get anything out of it. Pumping the repetition penalty up helps a lot too. The model was specifically converted to PyTorch so that most front-end GUI's should run it. It has been only tested on KoboldAI, but should theoretically work on others too. For some odd reason, my finetune is capable of writing victorian NSFW content, if used the right way. No NSFW was in the dataset and considering the size of the model, it's really odd to see it do so. Perhaps the countless romantic novels in the dataset had something naughty in them, but I highly doubt it. You may sometimes get roman numerals on random occasions, this shouldn't happen often, but if it does, it's again something that will be (manually, unfortunately) addressed in the next version of the finetune. If you are wondering why I renamed my finetune to Macbeth, there are a few reasons: First, it sounds much better and smoother than Kelini, second, it's a play by Shakespeare that closely matches the writing style of some of the authors in my dataset, and third, the most important reason, it's was mentioned in Hamilton, so yes, my love with Hamilton is bleeding everywhere and yes, the next version of the dataset will try to have a Hamilton easter egg featuring the Author's Note. ### Credits I want to thank HuggingFace for their tokenizer and everything they've done to make everything easier. Then is OpenAI for making GPT-2. I also want to thank most active people on the AIM Discord server in the community-projects channel. Thanks to Bran for finding a way to convert checkpoints to a PyTorch model, thanks to Mr. 
Seeker and Aedial for helping me in cleaning the dataset and to *finetune* from the NovelAI team for perhaps making my finetune output much better quality by telling me about the magic of the <\|endoftext\|> token. P.S. If you happen to use it in something commercial or in an online demo or in any other way that is not for personal use, a credit will be greatly appreciated (and if you do something exciting with it, make sure to let me know, I'd be more than happy to see it being used by someone!).
[ "### GPT-Macbeth\nA custom finetune of GPT-2 trained on a custom dataset of victorian literature", "## Information\nThe goal of this finetune is to output high-quality victorian literature, while being customizable with Author's Note and being light to run (aka not being a GPT-Neo or GPT-Jax finetune, for now at least).", "## Authors Note\nAuthor's Note was added manually, so please appreciate it. :)\n\nThe format of it is [ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]\nSome words will work well, some won't. Please make sure to have spaces before each ][.\n\nMost popular victorian authors should work, but keep in mind that some authors (e.g. Mark Twain) will result in a somewhat weird behavior due to a quirk in the dataset that will be addressed in the next version of the finetune.\n\nWhen it comes to the genres, \"novel\", \"fiction\", \"horror\" and \"romance\" work best, but from playing around with it, I've noticed that most other not too specific genres work pretty well too.\n\nThe tags are a bit complicated. Adding \"normal\" will result in a story without anything special (like no magic or fantasy element) and tends to be pretty low-pace. Using \"real-life\" will push the AI towards a historical/biographical path. Almost all tags should work. Using \"man\" or \"woman\" is supposed to semi-determine what gender the main character is, but it heavily depends on the chosen author.", "## History\nVersion 0 - This was the first test version of the finetune, trained on GPT-2-small and with a really small dataset. The name was GPT-Kelini before it was renamed to GPT-Macbeth in V1.\n\nVersion 1 - The current version of the finetune. Trained on GPT-2-medium with a much, much bigger dataset compared to V0. Supports Author's Note", "### Notes\nPlease use a very low temperature/randomness when using it, if you want to get anything out of it. Pumping the repetition penalty up helps a lot too.\n\nThe model was specifically converted to PyTorch so that most front-end GUI's should run it. It has been only tested on KoboldAI, but should theoretically work on others too.\n\nFor some odd reason, my finetune is capable of writing victorian NSFW content, if used the right way. No NSFW was in the dataset and considering the size of the model, it's really odd to see it do so. Perhaps the countless romantic novels in the dataset had something naughty in them, but I highly doubt it.\n\nYou may sometimes get roman numerals on random occasions, this shouldn't happen often, but if it does, it's again something that will be (manually, unfortunately) addressed in the next version of the finetune.\n\nIf you are wondering why I renamed my finetune to Macbeth, there are a few reasons: First, it sounds much better and smoother than Kelini, second, it's a play by Shakespeare that closely matches the writing style of some of the authors in my dataset, and third, the most important reason, it's was mentioned in Hamilton, so yes, my love with Hamilton is bleeding everywhere and yes, the next version of the dataset will try to have a Hamilton easter egg featuring the Author's Note.", "### Credits\nI want to thank HuggingFace for their tokenizer and everything they've done to make everything easier. Then is OpenAI for making GPT-2. I also want to thank most active people on the AIM Discord server in the community-projects channel. Thanks to Bran for finding a way to convert checkpoints to a PyTorch model, thanks to Mr. 
Seeker and Aedial for helping me in cleaning the dataset and to *finetune* from the NovelAI team for perhaps making my finetune output much better quality by telling me about the magic of the <\\|endoftext\\|> token.\n\n\n\n\nP.S. If you happen to use it in something commercial or in an online demo or in any other way that is not for personal use, a credit will be greatly appreciated (and if you do something exciting with it, make sure to let me know, I'd be more than happy to see it being used by someone!)." ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #endpoints_compatible #text-generation-inference #region-us \n", "### GPT-Macbeth\nA custom finetune of GPT-2 trained on a custom dataset of victorian literature", "## Information\nThe goal of this finetune is to output high-quality victorian literature, while being customizable with Author's Note and being light to run (aka not being a GPT-Neo or GPT-Jax finetune, for now at least).", "## Authors Note\nAuthor's Note was added manually, so please appreciate it. :)\n\nThe format of it is [ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]\nSome words will work well, some won't. Please make sure to have spaces before each ][.\n\nMost popular victorian authors should work, but keep in mind that some authors (e.g. Mark Twain) will result in a somewhat weird behavior due to a quirk in the dataset that will be addressed in the next version of the finetune.\n\nWhen it comes to the genres, \"novel\", \"fiction\", \"horror\" and \"romance\" work best, but from playing around with it, I've noticed that most other not too specific genres work pretty well too.\n\nThe tags are a bit complicated. Adding \"normal\" will result in a story without anything special (like no magic or fantasy element) and tends to be pretty low-pace. Using \"real-life\" will push the AI towards a historical/biographical path. Almost all tags should work. Using \"man\" or \"woman\" is supposed to semi-determine what gender the main character is, but it heavily depends on the chosen author.", "## History\nVersion 0 - This was the first test version of the finetune, trained on GPT-2-small and with a really small dataset. The name was GPT-Kelini before it was renamed to GPT-Macbeth in V1.\n\nVersion 1 - The current version of the finetune. Trained on GPT-2-medium with a much, much bigger dataset compared to V0. Supports Author's Note", "### Notes\nPlease use a very low temperature/randomness when using it, if you want to get anything out of it. Pumping the repetition penalty up helps a lot too.\n\nThe model was specifically converted to PyTorch so that most front-end GUI's should run it. It has been only tested on KoboldAI, but should theoretically work on others too.\n\nFor some odd reason, my finetune is capable of writing victorian NSFW content, if used the right way. No NSFW was in the dataset and considering the size of the model, it's really odd to see it do so. Perhaps the countless romantic novels in the dataset had something naughty in them, but I highly doubt it.\n\nYou may sometimes get roman numerals on random occasions, this shouldn't happen often, but if it does, it's again something that will be (manually, unfortunately) addressed in the next version of the finetune.\n\nIf you are wondering why I renamed my finetune to Macbeth, there are a few reasons: First, it sounds much better and smoother than Kelini, second, it's a play by Shakespeare that closely matches the writing style of some of the authors in my dataset, and third, the most important reason, it's was mentioned in Hamilton, so yes, my love with Hamilton is bleeding everywhere and yes, the next version of the dataset will try to have a Hamilton easter egg featuring the Author's Note.", "### Credits\nI want to thank HuggingFace for their tokenizer and everything they've done to make everything easier. Then is OpenAI for making GPT-2. I also want to thank most active people on the AIM Discord server in the community-projects channel. 
Thanks to Bran for finding a way to convert checkpoints to a PyTorch model, thanks to Mr. Seeker and Aedial for helping me in cleaning the dataset and to *finetune* from the NovelAI team for perhaps making my finetune output much better quality by telling me about the magic of the <\\|endoftext\\|> token.\n\n\n\n\nP.S. If you happen to use it in something commercial or in an online demo or in any other way that is not for personal use, a credit will be greatly appreciated (and if you do something exciting with it, make sure to let me know, I'd be more than happy to see it being used by someone!)." ]
[ 30, 26, 58, 257, 93, 309, 205 ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #endpoints_compatible #text-generation-inference #region-us \n### GPT-Macbeth\nA custom finetune of GPT-2 trained on a custom dataset of victorian literature## Information\nThe goal of this finetune is to output high-quality victorian literature, while being customizable with Author's Note and being light to run (aka not being a GPT-Neo or GPT-Jax finetune, for now at least).## Authors Note\nAuthor's Note was added manually, so please appreciate it. :)\n\nThe format of it is [ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]\nSome words will work well, some won't. Please make sure to have spaces before each ][.\n\nMost popular victorian authors should work, but keep in mind that some authors (e.g. Mark Twain) will result in a somewhat weird behavior due to a quirk in the dataset that will be addressed in the next version of the finetune.\n\nWhen it comes to the genres, \"novel\", \"fiction\", \"horror\" and \"romance\" work best, but from playing around with it, I've noticed that most other not too specific genres work pretty well too.\n\nThe tags are a bit complicated. Adding \"normal\" will result in a story without anything special (like no magic or fantasy element) and tends to be pretty low-pace. Using \"real-life\" will push the AI towards a historical/biographical path. Almost all tags should work. Using \"man\" or \"woman\" is supposed to semi-determine what gender the main character is, but it heavily depends on the chosen author.## History\nVersion 0 - This was the first test version of the finetune, trained on GPT-2-small and with a really small dataset. The name was GPT-Kelini before it was renamed to GPT-Macbeth in V1.\n\nVersion 1 - The current version of the finetune. Trained on GPT-2-medium with a much, much bigger dataset compared to V0. Supports Author's Note### Notes\nPlease use a very low temperature/randomness when using it, if you want to get anything out of it. Pumping the repetition penalty up helps a lot too.\n\nThe model was specifically converted to PyTorch so that most front-end GUI's should run it. It has been only tested on KoboldAI, but should theoretically work on others too.\n\nFor some odd reason, my finetune is capable of writing victorian NSFW content, if used the right way. No NSFW was in the dataset and considering the size of the model, it's really odd to see it do so. Perhaps the countless romantic novels in the dataset had something naughty in them, but I highly doubt it.\n\nYou may sometimes get roman numerals on random occasions, this shouldn't happen often, but if it does, it's again something that will be (manually, unfortunately) addressed in the next version of the finetune.\n\nIf you are wondering why I renamed my finetune to Macbeth, there are a few reasons: First, it sounds much better and smoother than Kelini, second, it's a play by Shakespeare that closely matches the writing style of some of the authors in my dataset, and third, the most important reason, it's was mentioned in Hamilton, so yes, my love with Hamilton is bleeding everywhere and yes, the next version of the dataset will try to have a Hamilton easter egg featuring the Author's Note.### Credits\nI want to thank HuggingFace for their tokenizer and everything they've done to make everything easier. Then is OpenAI for making GPT-2. I also want to thank most active people on the AIM Discord server in the community-projects channel. 
Thanks to Bran for finding a way to convert checkpoints to a PyTorch model, thanks to Mr. Seeker and Aedial for helping me in cleaning the dataset and to *finetune* from the NovelAI team for perhaps making my finetune output much better quality by telling me about the magic of the <\\|endoftext\\|> token.\n\n\n\n\nP.S. If you happen to use it in something commercial or in an online demo or in any other way that is not for personal use, a credit will be greatly appreciated (and if you do something exciting with it, make sure to let me know, I'd be more than happy to see it being used by someone!)." ]
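For readers who want to try the Author's Note format described in this card outside of KoboldAI, below is a minimal, hypothetical sketch using the `transformers` text-generation API. The model path, prompt text, and sampling settings are placeholders chosen to follow the card's advice (low temperature, higher repetition penalty); they are not part of the original release.

```python
# Hypothetical sketch of prompting the GPT-Macbeth finetune with the
# Author's Note format described in the card; the model path is a placeholder.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "path/to/gpt-macbeth"  # replace with the actual repository id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# The Author's Note goes in front of the story text, with spaces inside the brackets.
prompt = (
    "[ Author: George Eliot; Genre: Horror, fantasy, novel; "
    "Tags: scary, magical, victorian ]\n"
    "The fog rolled over the moor as the carriage drew near."
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,
    temperature=0.4,         # the card advises a low temperature
    repetition_penalty=1.2,  # and a higher repetition penalty
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```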
null
null
This is Brain Piano

---
inference:
  parameters:
    temperature: 0.7
---
{}
Pikachu/BrainPiano
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
This is Brain Piano --- inference: parameters: temperature: 0.7 ---
[]
[ "TAGS\n#region-us \n" ]
[ 5 ]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# Shrek DialoGPT Model
{"tags": ["conversational"]}
PinoCorgi/DialoGPT-small-Shrek1
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
@ Shrek DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Piumi/DialogGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
[ 39, 7 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model" ]
fill-mask
transformers
# RoBERTa base trained with Spanish Legal Domain Corpora ## Table of contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation Information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** roberta-base - **Language:** Spanish - **Task:** fill-mask - **Data:** Legal ## Model description The **RoBERTalex** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using a large [Spanish Legal Domain Corpora](https://zenodo.org/record/5495529), with a total of 8.9GB of text. ## Intended uses and limitations The **RoBERTalex** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. ## How to use Here is how to use this model: ```python >>> from transformers import pipeline >>> from pprint import pprint >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/RoBERTalex') >>> pprint(unmasker("La ley fue <mask> finalmente.")) [{'score': 0.21217258274555206, 'sequence': ' La ley fue modificada finalmente.', 'token': 5781, 'token_str': ' modificada'}, {'score': 0.20414969325065613, 'sequence': ' La ley fue derogada finalmente.', 'token': 15951, 'token_str': ' derogada'}, {'score': 0.19272951781749725, 'sequence': ' La ley fue aprobada finalmente.', 'token': 5534, 'token_str': ' aprobada'}, {'score': 0.061143241822719574, 'sequence': ' La ley fue revisada finalmente.', 'token': 14192, 'token_str': ' revisada'}, {'score': 0.041809432208538055, 'sequence': ' La ley fue aplicada finalmente.', 'token': 12208, 'token_str': ' aplicada'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import RobertaTokenizer, RobertaModel >>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/RoBERTalex') >>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/RoBERTalex') >>> text = "Gracias a los datos legales se ha podido desarrollar este modelo del lenguaje." >>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 16, 768]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. 
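Since the card recommends fine-tuning **RoBERTalex** on downstream tasks rather than using it generatively, here is a minimal sketch of such a run with the Hugging Face `Trainer`. It is not part of the original release: the CSV files, column names, and number of labels are placeholders to be replaced with your own legal-domain classification data.

```python
# Minimal fine-tuning sketch (not part of the original card); the data files,
# column names, and label count are placeholders.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model_name = "PlanTL-GOB-ES/RoBERTalex"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical dataset with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    # Truncate long legal documents to the model's maximum input length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="robertalex-finetuned",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding with the default collator
)
trainer.train()
```

The same pattern applies to token-level tasks such as Named Entity Recognition by swapping in `AutoModelForTokenClassification` and a token-labelled dataset.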
## Training data The [Spanish Legal Domain Corpora](https://zenodo.org/record/5495529) comprise multiple digital resources and amount to a total of 8.9GB of textual data. Part of it has been obtained from [previous work](https://aclanthology.org/2020.lt4gov-1.6/). To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. ### Training procedure The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model with a vocabulary size of 50,262 tokens. The **RoBERTalex** pre-training consists of masked language model training that follows the approach employed for the RoBERTa base model. The model was trained until convergence with 2 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM. ## Evaluation Due to the lack of domain-specific evaluation data, the model was evaluated on general domain tasks, where it obtains reasonable performance. We fine-tuned the model on the following tasks: | Dataset | Metric | **RoBERTalex** | |--------------|----------|------------| | UD-POS | F1 | 0.9871 | | CoNLL-NERC | F1 | 0.8323 | | CAPITEL-POS | F1 | 0.9788 | | CAPITEL-NERC | F1 | 0.8394 | | STS | Combined | 0.7374 | | MLDoc | Accuracy | 0.9417 | | PAWS-X | F1 | 0.7304 | | XNLI | Accuracy | 0.7337 | ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Citation Information ``` @misc{gutierrezfandino2021legal, title={Spanish Legalese Language Model and Corpora}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Aitor Gonzalez-Agirre and Marta Villegas}, year={2021}, eprint={2110.12201}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. 
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["legal", "spanish"], "datasets": ["legal_ES", "temu_legal"], "metrics": ["ppl"], "widget": [{"text": "La ley fue <mask> finalmente."}, {"text": "El Tribunal <mask> desestim\u00f3 el recurso de amparo."}, {"text": "Hay base legal dentro del marco <mask> actual."}]}
PlanTL-GOB-ES/RoBERTalex
null
[ "transformers", "pytorch", "roberta", "fill-mask", "legal", "spanish", "es", "dataset:legal_ES", "dataset:temu_legal", "arxiv:1907.11692", "arxiv:2110.12201", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692", "2110.12201" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #fill-mask #legal #spanish #es #dataset-legal_ES #dataset-temu_legal #arxiv-1907.11692 #arxiv-2110.12201 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
RoBERTa base trained with Spanish Legal Domain Corpora ====================================================== Table of contents ----------------- Click to expand * Overview * Model description * Intended uses and limitations * How to use * Limitations and bias * Training + Training data + Training procedure * Evaluation * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citation Information + Disclaimer Overview -------- * Architecture: roberta-base * Language: Spanish * Task: fill-mask * Data: Legal Model description ----------------- The RoBERTalex is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using a large Spanish Legal Domain Corpora, with a total of 8.9GB of text. Intended uses and limitations ----------------------------- The RoBERTalex model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. How to use ---------- Here is how to use this model: Here is how to use this model to get the features of a given text in PyTorch: Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training data ------------- The Spanish Legal Domain Corpora corpora comprise multiple digital resources and it has a total of 8.9GB of textual data. Part of it has been obtained from previous work. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. ### Training procedure The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens. The RoBERTalex pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The model was trained until convergence with 2 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM. Evaluation ---------- Due to the lack of domain-specific evaluation data, the model was evaluated on general domain tasks, where it obtains reasonable performance. 
We fine-tuned the model in the following task: Dataset: UD-POS, Metric: F1, RoBERtalex: 0.9871 Dataset: CoNLL-NERC, Metric: F1, RoBERtalex: 0.8323 Dataset: CAPITEL-POS, Metric: F1, RoBERtalex: 0.9788 Dataset: CAPITEL-NERC, Metric: F1, RoBERtalex: 0.8394 Dataset: STS, Metric: Combined, RoBERtalex: 0.7374 Dataset: MLDoc, Metric: Accuracy, RoBERtalex: 0.9417 Dataset: PAWS-X, Metric: F1, RoBERtalex: 0.7304 Dataset: XNLI, Metric: Accuracy, RoBERtalex: 0.7337 Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. Citing information ------------------ Disclaimer ---------- The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe RoBERTalex pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The model was trained until convergence with 2 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nDue to the lack of domain-specific evaluation data, the model was evaluated on general domain tasks, where it obtains reasonable performance. We fine-tuned the model in the following task:\n\n\nDataset: UD-POS, Metric: F1, RoBERtalex: 0.9871\nDataset: CoNLL-NERC, Metric: F1, RoBERtalex: 0.8323\nDataset: CAPITEL-POS, Metric: F1, RoBERtalex: 0.9788\nDataset: CAPITEL-NERC, Metric: F1, RoBERtalex: 0.8394\nDataset: STS, Metric: Combined, RoBERtalex: 0.7374\nDataset: MLDoc, Metric: Accuracy, RoBERtalex: 0.9417\nDataset: PAWS-X, Metric: F1, RoBERtalex: 0.7304\nDataset: XNLI, Metric: Accuracy, RoBERtalex: 0.7337\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\nCiting information\n------------------\n\n\nDisclaimer\n----------\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #legal #spanish #es #dataset-legal_ES #dataset-temu_legal #arxiv-1907.11692 #arxiv-2110.12201 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe RoBERTalex pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The model was trained until convergence with 2 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nDue to the lack of domain-specific evaluation data, the model was evaluated on general domain tasks, where it obtains reasonable performance. We fine-tuned the model in the following task:\n\n\nDataset: UD-POS, Metric: F1, RoBERtalex: 0.9871\nDataset: CoNLL-NERC, Metric: F1, RoBERtalex: 0.8323\nDataset: CAPITEL-POS, Metric: F1, RoBERtalex: 0.9788\nDataset: CAPITEL-NERC, Metric: F1, RoBERtalex: 0.8394\nDataset: STS, Metric: Combined, RoBERtalex: 0.7374\nDataset: MLDoc, Metric: Accuracy, RoBERtalex: 0.9417\nDataset: PAWS-X, Metric: F1, RoBERtalex: 0.7304\nDataset: XNLI, Metric: Accuracy, RoBERtalex: 0.7337\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\nCiting information\n------------------\n\n\nDisclaimer\n----------\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. 
Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 81, 329, 28, 40, 24, 12, 505 ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #legal #spanish #es #dataset-legal_ES #dataset-temu_legal #arxiv-1907.11692 #arxiv-2110.12201 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe RoBERTalex pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The model was trained until convergence with 2 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nDue to the lack of domain-specific evaluation data, the model was evaluated on general domain tasks, where it obtains reasonable performance. We fine-tuned the model in the following task:\n\n\nDataset: UD-POS, Metric: F1, RoBERtalex: 0.9871\nDataset: CoNLL-NERC, Metric: F1, RoBERtalex: 0.8323\nDataset: CAPITEL-POS, Metric: F1, RoBERtalex: 0.9788\nDataset: CAPITEL-NERC, Metric: F1, RoBERtalex: 0.8394\nDataset: STS, Metric: Combined, RoBERtalex: 0.7374\nDataset: MLDoc, Metric: Accuracy, RoBERtalex: 0.9417\nDataset: PAWS-X, Metric: F1, RoBERtalex: 0.7304\nDataset: XNLI, Metric: Accuracy, RoBERtalex: 0.7337\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\nCiting information\n------------------\n\n\nDisclaimer\n----------\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. 
Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
text-generation
transformers
# GPT2-base (gpt2-base-bne) trained with data from the National Library of Spain (BNE) ## Table of Contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to Use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation Information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** gpt2-base - **Language:** Spanish - **Task:** text-generation - **Data:** BNE ## Model description **GPT2-base-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations You can use the raw model for text generation or fine-tune it to a downstream task. ## How to Use Here is how to use this model: You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("La Biblioteca Nacional de España es una entidad pública y sus fines son", num_return_sequences=5) [{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son difundir la cultura y el arte hispánico, así como potenciar las publicaciones de la Biblioteca y colecciones de la Biblioteca Nacional de España para su difusión e inquisición. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son diversos. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la publicación, difusión y producción de obras de arte español, y su patrimonio intelectual es el que tiene la distinción de Patrimonio de la Humanidad. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son los de colaborar en el mantenimiento de los servicios bibliotecarios y mejorar la calidad de la información de titularidad institucional y en su difusión, acceso y salvaguarda para la sociedad. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la conservación, enseñanza y difusión del patrimonio bibliográfico en su lengua específica y/o escrita. 
'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import AutoTokenizer, GPT2Model >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> model = GPT2Model.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> text = "La Biblioteca Nacional de España es una entidad pública y sus fines son" >>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 14, 768]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("El hombre se dedica a", num_return_sequences=5) [{'generated_text': 'El hombre se dedica a comprar armas a sus amigos, pero les cuenta la historia de las ventajas de ser "buenos y regulares en la vida" e ir "bien" por los pueblos. '}, {'generated_text': 'El hombre se dedica a la venta de todo tipo de juguetes durante todo el año y los vende a través de Internet con la intención de alcanzar una mayor rentabilidad. '}, {'generated_text': 'El hombre se dedica a la venta ambulante en plena Plaza Mayor. '}, {'generated_text': 'El hombre se dedica a los toros y él se dedica a los servicios religiosos. '}, {'generated_text': 'El hombre se dedica a la caza y a la tala de pinos. '}] >>> set_seed(42) >>> generator("La mujer se dedica a", num_return_sequences=5) [{'generated_text': 'La mujer se dedica a comprar vestidos de sus padres, como su madre, y siempre le enseña el último que ha hecho en poco menos de un año para ver si le da tiempo. '}, {'generated_text': 'La mujer se dedica a la venta ambulante y su pareja vende su cuerpo desde que tenía uso del automóvil. '}, {'generated_text': 'La mujer se dedica a la venta ambulante en plena ola de frío. '}, {'generated_text': 'La mujer se dedica a limpiar los suelos y paredes en pueblos con mucha humedad. '}, {'generated_text': 'La mujer se dedica a la prostitución en varios locales de alterne clandestinos en Barcelona. '}] ``` ## Training ### Training Data The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. 
Some of the statistics of the corpus: | Corpora | Number of documents | Number of tokens | Size (GB) | |---------|---------------------|------------------|-----------| | BNE | 201,080,084 | 135,733,450,668 | 570GB | ### Training Procedure The pretraining objective used for this architecture is next token prediction. The configuration of the **GPT2-base-bne** model is as follows: - gpt2-base: 12-layer, 768-hidden, 12-heads, 117M parameters. The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model with a vocabulary size of 50,262 tokens. The GPT2-base-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2. The training lasted a total of 3 days with 16 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM. ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information This work is licensed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
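As a complement to the generation examples above, the short sketch below (not part of the original card) scores a Spanish sentence under the next-token-prediction objective described in the training procedure; the example sentence is arbitrary.

```python
# Minimal sketch: computing the causal LM loss and an approximate perplexity
# for a sentence with GPT2-base-bne. The sentence is an arbitrary example.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")
model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")
model.eval()

text = "La Biblioteca Nacional de España conserva el patrimonio bibliográfico español."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing the input ids as labels makes the model return the average
    # next-token cross-entropy over the sequence.
    outputs = model(**inputs, labels=inputs["input_ids"])

print(f"loss: {outputs.loss.item():.3f}")
print(f"perplexity: {torch.exp(outputs.loss).item():.1f}")
```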
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "gpt2-base-bne"], "datasets": ["bne"], "widget": [{"text": "El modelo del lenguaje GPT es capaz de"}, {"text": "La Biblioteca Nacional de Espa\u00f1a es una entidad p\u00fablica y sus fines son"}]}
PlanTL-GOB-ES/gpt2-base-bne
null
[ "transformers", "pytorch", "gpt2", "text-generation", "national library of spain", "spanish", "bne", "gpt2-base-bne", "es", "dataset:bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "es" ]
TAGS #transformers #pytorch #gpt2 #text-generation #national library of spain #spanish #bne #gpt2-base-bne #es #dataset-bne #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
GPT2-base (gpt2-base-bne) trained with data from the National Library of Spain (BNE) ==================================================================================== Table of Contents ----------------- Click to expand * Overview * Model description * Intended uses and limitations * How to Use * Limitations and bias * Training + Training data + Training procedure * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citation Information + Disclaimer Overview -------- * Architecture: gpt2-base * Language: Spanish * Task: text-generation * Data: BNE Model description ----------------- GPT2-base-bne is a transformer-based model for the Spanish language. It is based on the GPT-2 model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ----------------------------- You can use the raw model for text generation or fine-tune it to a downstream task. How to Use ---------- Here is how to use this model: You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: Training -------- ### Training Data The National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: ### Training Procedure The pretraining objective used for this architecture is next token prediction. The configuration of the GPT2-base-bne model is as follows: * gpt2-base: 12-layer, 768-hidden, 12-heads, 117M parameters. The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original GPT-2 model with a vocabulary size of 50,262 tokens. The GPT2-base-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2. The training lasted a total of 3 days with 16 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM. 
Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information This work is licensed under a Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. information If you use this model, please cite our paper: ### Disclaimer Click to expand The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training Data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:", "### Training Procedure\n\n\nThe pretraining objective used for this architecture is next token prediction.\nThe configuration of the GPT2-base-bne model is as follows:\n\n\n* gpt2-base: 12-layer, 768-hidden, 12-heads, 117M parameters.\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original GPT-2 model with a vocabulary size of 50,262 tokens.\n\n\nThe GPT2-base-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2.\n\n\nThe training lasted a total of 3 days with 16 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. 
Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #national library of spain #spanish #bne #gpt2-base-bne #es #dataset-bne #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "### Training Data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:", "### Training Procedure\n\n\nThe pretraining objective used for this architecture is next token prediction.\nThe configuration of the GPT2-base-bne model is as follows:\n\n\n* gpt2-base: 12-layer, 768-hidden, 12-heads, 117M parameters.\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original GPT-2 model with a vocabulary size of 50,262 tokens.\n\n\nThe GPT2-base-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2.\n\n\nThe training lasted a total of 3 days with 16 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. 
Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 75, 147, 187, 28, 40, 24, 18, 45, 448 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #national library of spain #spanish #bne #gpt2-base-bne #es #dataset-bne #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n### Training Data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:### Training Procedure\n\n\nThe pretraining objective used for this architecture is next token prediction.\nThe configuration of the GPT2-base-bne model is as follows:\n\n\n* gpt2-base: 12-layer, 768-hidden, 12-heads, 117M parameters.\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original GPT-2 model with a vocabulary size of 50,262 tokens.\n\n\nThe GPT2-base-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2.\n\n\nThe training lasted a total of 3 days with 16 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use this model, please cite our paper:### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. 
Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
text-generation
transformers
# GPT2-large trained with data from the National Library of Spain (BNE) ## Table of Contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Additional Information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** gpt2-large - **Language:** Spanish - **Task:** text-generation - **Data:** BNE ## Model description **GPT2-large-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations You can use the raw model for text generation or fine-tune it to a downstream task. ## How to use Here is how to use this model: You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("La Biblioteca Nacional de España es una entidad pública y sus fines son", num_return_sequences=5) [{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son servir como herramienta básica en la difusión de la cultura. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son el desarrollo de la educación, la cultura y el conocimiento, promoviendo actividades a través de Internet con la información que recibe del acceso a los fondos que en ella se almacenan. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la publicación y difusión cultural. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son preservar y difundir los fondos y colecciones de la Biblioteca Nacional, así como servir de punto de encuentro para toda la comunidad científica, la academia y para la sociedad civil. 
'}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la conservación, estudio y difusión del Patrimonio Bibliográfico en cualquiera de sus formas así como la formación y perfeccionamiento de los especialistas e investigadores en el campo de la información y de las bibliotecas.'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import AutoTokenizer, GPT2Model >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = GPT2Model.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> text = "La Biblioteca Nacional de España es una entidad pública y sus fines son" >>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 14, 1280]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("El hombre se dedica a", num_return_sequences=5) [{'generated_text': 'El hombre se dedica a comprar móviles a sus padres, pero les paga por ellos y luego les devuelve la pasta a ella. '}, {'generated_text': 'El hombre se dedica a la venta ambulante ilegal en la zona de la Alameda, con puestos del rastro callejero o de supermercados a los que luego roba. '}, {'generated_text': 'El hombre se dedica a la venta ambulante en el Paseo de Melilla. '}, {'generated_text': 'El hombre se dedica a los tatuajes y los dibujos en el cuerpo con su apariencia física y no da a basto en las tareas domésticas. '}, {'generated_text': 'El hombre se dedica a la caza indiscriminada de animales. '}] >>> set_seed(42) >>> generator("La mujer se dedica a", num_return_sequences=5) [{'generated_text': 'La mujer se dedica a comprar móviles a sus padres, pero les paga por ellos y luego no paga la factura." '}, {'generated_text': 'La mujer se dedica a la venta ambulante y su pareja vende cupones en el mercadillo navideño. '}, {'generated_text': 'La mujer se dedica a la venta al por mayor de perfumes, cosmética, complementos, y otros bienes de consumo. '}, {'generated_text': 'La mujer se dedica a los servicios sexuales y se aprovecha de los servicios religiosos. '}, {'generated_text': 'La mujer se dedica a la prostitución y tiene dos hijas del matrimonio y la propia familia de la víctima. '}] ``` ## Training ### Training data The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. 
To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of ill-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus is then applied, resulting in 570GB of text.

Some of the statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE     | 201,080,084         | 135,733,450,668  | 570GB     |

### Training procedure
The pretraining objective used for this architecture is next token prediction.
The configuration of the **GPT2-large-bne** model is as follows:

- gpt2-large: 36-layer, 1280-hidden, 20-heads, 774M parameters.

The training corpus has been tokenized using a byte version of the Byte-Pair Encoding (BPE) used in the original [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model, with a vocabulary size of 50,262 tokens.

The GPT2-large-bne pre-training consists of autoregressive language model training that follows the approach of GPT-2.

The training lasted a total of 10 days on 32 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM. A short sketch for checking the configuration above against the published checkpoint is included after this card.

## Additional information

### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>

### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)

### Licensing information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

### Citation information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
   abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.},
   author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
   doi = {10.26342/2022-68-3},
   issn = {1135-5948},
   journal = {Procesamiento del Lenguaje Natural},
   keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
   publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
   title = {MarIA: Spanish Language Models},
   volume = {68},
   url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
   year = {2022},
}
```

### Disclaimer
<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) or the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.

Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.

En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
</details>
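The architecture and tokenizer figures quoted in the Training procedure section above can be sanity-checked directly against the published checkpoint. The following is a minimal sketch (not part of the original card) that loads only the configuration and tokenizer with the standard `transformers` API; the attribute names (`n_layer`, `n_embd`, `n_head`, `vocab_size`) are those of the stock GPT-2 configuration class, and the checkpoint name is the repository id of this card.

```python
from transformers import AutoConfig, AutoTokenizer

ckpt = "PlanTL-GOB-ES/gpt2-large-bne"  # repository id of this card

# Only the config and tokenizer are fetched; the full gpt2-large weights are not needed for this check.
config = AutoConfig.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

# Values documented above: 36 layers, 1280 hidden size, 20 attention heads, 50,262-token vocabulary.
print("layers          :", config.n_layer)
print("hidden size     :", config.n_embd)
print("attention heads :", config.n_head)
print("vocabulary size :", config.vocab_size)
print("tokenizer vocab :", tokenizer.vocab_size)

# Byte-level BPE covers all byte sequences, so Spanish text maps to subword pieces without unknown tokens.
text = "La Biblioteca Nacional de España es una entidad pública."
ids = tokenizer(text)["input_ids"]
print(len(ids), "tokens:", tokenizer.convert_ids_to_tokens(ids))
```

Loading only the configuration and tokenizer keeps the check lightweight, since confirming the documented values does not require downloading the full model weights.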
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "gpt2-large-bne"], "datasets": ["bne"], "widget": [{"text": "El modelo del lenguaje GPT es capaz de"}, {"text": "La Biblioteca Nacional de Espa\u00f1a es una entidad p\u00fablica y sus fines son"}]}
PlanTL-GOB-ES/gpt2-large-bne
null
[ "transformers", "pytorch", "gpt2", "text-generation", "national library of spain", "spanish", "bne", "gpt2-large-bne", "es", "dataset:bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "es" ]
fill-mask
transformers
# Biomedical-clinical language model for Spanish ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Model description Biomedical pretrained language model for Spanish. This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical-clinical** corpus in Spanish collected from several sources. ## Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. ## How to use ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") from transformers import pipeline unmasker = pipeline('fill-mask', model="BSC-TeMU/roberta-base-biomedical-es") unmasker("El único antecedente personal a reseñar era la <mask> arterial.") ``` ``` # Output [ { "sequence": " El único antecedente personal a reseñar era la hipertensión arterial.", "score": 0.9855039715766907, "token": 3529, "token_str": " hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la diabetes arterial.", "score": 0.0039140828885138035, "token": 1945, "token_str": " diabetes" }, { "sequence": " El único antecedente personal a reseñar era la hipotensión arterial.", "score": 0.002484665485098958, "token": 11483, "token_str": " hipotensión" }, { "sequence": " El único antecedente personal a reseñar era la Hipertensión arterial.", "score": 0.0023484621196985245, "token": 12238, "token_str": " Hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la presión arterial.", "score": 0.0008009297889657319, "token": 2267, "token_str": " presión" } ] ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. 
The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are:

- data parsing in different formats
- sentence splitting
- language detection
- filtering of ill-formed sentences
- deduplication of repetitive contents
- preservation of the original document boundaries

Then, the biomedical corpora are concatenated, and further global deduplication among them is applied. Finally, the clinical corpus is concatenated to the cleaned biomedical corpus, resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora:

| Name | No. tokens | Description |
|------|------------|-------------|
| [Medical crawler](https://zenodo.org/record/4561970) | 745,705,946 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. |
| Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases, and it is different from a clinical note or document. |
| Clinical notes/documents | 91,250,080 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. |
| [Scielo](https://github.com/PlanTL-SANIDAD/SciELO-Spain-Crawler) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. |
| [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. |
| Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. |
| Patents | 13,463,387 | Google Patents in the medical domain for Spain (Spanish). The accepted medical-domain codes for the patent JSON files are: "A61B", "A61C", "A61F", "A61H", "A61K", "A61L", "A61M", "A61P". |
| [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. |
| [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpora consisting of biomedical scientific literature. The collection of parallel resources is aggregated from the MedlinePlus source. |
| PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. |

## Evaluation
The model has been evaluated on Named Entity Recognition (NER) using the following datasets:

- [PharmaCoNER](https://zenodo.org/record/4270158): a track on chemical and drug mention recognition from Spanish medical texts (for more information, see https://temu.bsc.es/pharmaconer/).
- [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): a shared task specifically focused on named entity recognition of tumor morphology in Spanish (for more information, see https://zenodo.org/record/3978041#.YTt5qH2xXbQ).
- ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.

The evaluation results are compared against the [mBERT](https://huggingface.co/bert-base-multilingual-cased) and [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) models (a minimal NER fine-tuning sketch is included after this card):

| F1 - Precision - Recall | roberta-base-biomedical-clinical-es | mBERT | BETO |
|---------------------------|----------------------------|-------------------------------|-------------------------|
| PharmaCoNER | **90.04** - **88.92** - **91.18** | 87.46 - 86.50 - 88.46 | 88.18 - 87.12 - 89.28 |
| CANTEMIST | **83.34** - **81.48** - **85.30** | 82.61 - 81.12 - 84.15 | 82.42 - 80.91 - 84.00 |
| ICTUSnet | **88.08** - **84.92** - **91.50** | 86.75 - 83.53 - 90.23 | 85.95 - 83.10 - 89.02 |

## Additional information

### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>

### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)

### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

### Citation information
If you use our models, please cite our latest preprint:

```bibtex
@misc{carrino2021biomedical,
      title={Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario},
      author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Asier Gutiérrez-Fandiño and Joan Llop-Palao and Marc Pàmies and Aitor Gonzalez-Agirre and Marta Villegas},
      year={2021},
      eprint={2109.03570},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

If you use our Medical Crawler corpus, please cite the preprint:

```bibtex
@misc{carrino2021spanish,
      title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models},
      author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas},
      year={2021},
      eprint={2109.07765},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Disclaimer
<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) or the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.

Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.

En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
</details>
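The card above positions this checkpoint as a base model to be fine-tuned for tasks such as the NER evaluations it reports (PharmaCoNER, CANTEMIST, ICTUSnet). The sketch below (not part of the original card) shows the general shape of such a fine-tuning run with the standard `transformers` token-classification API; the label set, dataset fields (`tokens`, `ner_tags`) and hyperparameters are illustrative placeholders, not the settings behind the reported F1 scores.

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer,
                          DataCollatorForTokenClassification)

ckpt = "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"  # repository id of this card

# Illustrative BIO label set; a real run would take the labels from the chosen corpus.
labels = ["O", "B-ENT", "I-ENT"]

tokenizer = AutoTokenizer.from_pretrained(ckpt)  # fast tokenizer, needed for word_ids()
model = AutoModelForTokenClassification.from_pretrained(
    ckpt,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

def tokenize_and_align(example):
    # `tokens` is a list of words and `ner_tags` a list of integer label ids (assumed dataset fields).
    enc = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)
    word_ids = enc.word_ids()
    # Every sub-token inherits its word's label; special tokens get -100, which the loss ignores.
    # (A common variant also masks non-first sub-tokens with -100.)
    enc["labels"] = [-100 if w is None else example["ner_tags"][w] for w in word_ids]
    return enc

# With `train_ds` / `eval_ds` mapped through tokenize_and_align, training would look like:
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments("ner-biomedical-es", learning_rate=5e-5,
#                            per_device_train_batch_size=16, num_train_epochs=3),
#     train_dataset=train_ds, eval_dataset=eval_ds,
#     data_collator=DataCollatorForTokenClassification(tokenizer),
# )
# trainer.train()
```

A real run would plug in one of the corpora listed in the Evaluation section and typically report entity-level F1 (for example with seqeval) so that the numbers are comparable with the table above.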
{"language": ["es"], "license": "apache-2.0", "tags": ["biomedical", "clinical", "spanish"], "metrics": ["ppl"], "widget": [{"text": "El \u00fanico antecedente personal a rese\u00f1ar era la <mask> arterial."}, {"text": "Las radiolog\u00edas \u00f3seas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales."}, {"text": "En el <mask> toraco-abd\u00f3mino-p\u00e9lvico no se encontraron hallazgos patol\u00f3gicos de inter\u00e9s."}]}
PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
null
[ "transformers", "pytorch", "roberta", "fill-mask", "biomedical", "clinical", "spanish", "es", "arxiv:2109.03570", "arxiv:2109.07765", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2109.03570", "2109.07765" ]
[ "es" ]
fill-mask
transformers
# Biomedical language model for Spanish ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Tokenization and model pretraining](#Tokenization-pretraining) - [Training corpora and preprocessing](#training-corpora-preprocessing) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Disclaimer](#disclaimer) </details> ## Model description Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-SANIDAD/lm-biomedical-clinical-es) and read our [preprint](https://arxiv.org/abs/2109.03570). ## Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. ## How to use ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") from transformers import pipeline unmasker = pipeline('fill-mask', model="BSC-TeMU/roberta-base-biomedical-es") unmasker("El único antecedente personal a reseñar era la <mask> arterial.") ``` ``` # Output [ { "sequence": " El único antecedente personal a reseñar era la hipertensión arterial.", "score": 0.9855039715766907, "token": 3529, "token_str": " hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la diabetes arterial.", "score": 0.0039140828885138035, "token": 1945, "token_str": " diabetes" }, { "sequence": " El único antecedente personal a reseñar era la hipotensión arterial.", "score": 0.002484665485098958, "token": 11483, "token_str": " hipotensión" }, { "sequence": " El único antecedente personal a reseñar era la Hipertensión arterial.", "score": 0.0023484621196985245, "token": 12238, "token_str": " Hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la presión arterial.", "score": 0.0008009297889657319, "token": 2267, "token_str": " presión" } ] ``` ## Training ### Tokenization and model pretraining This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical** corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. 
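To make the tokenization described above concrete, the following sketch loads the tokenizer and inspects how a biomedical sentence is split into subwords. It is illustrative only: the hub identifier (the record's own id, rather than the older `BSC-TeMU/roberta-base-biomedical-es` id used in the usage example above) and the example sentence are assumptions; adjust them to your environment.

```python
from transformers import AutoTokenizer

# Assumed hub id for this model card
tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-biomedical-es")

# The byte-level BPE vocabulary size should match the 52,000 tokens reported above
print(tokenizer.vocab_size)

# Inspect the subword segmentation of a made-up clinical sentence
print(tokenizer.tokenize("El paciente presenta hipertensión arterial y disnea."))
```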
### Training corpora and preprocessing

The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers. To obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied:

- data parsing in different formats
- sentence splitting
- language detection
- filtering of ill-formed sentences
- deduplication of repetitive contents
- keeping the original document boundaries

Finally, the corpora are concatenated and a further global deduplication among the corpora has been applied. The result is a medium-sized biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora:

| Name | No. tokens | Description |
|------|------------|-------------|
| [Medical crawler](https://zenodo.org/record/4561970) | 745,705,946 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. |
| Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. |
| [Scielo](https://github.com/PlanTL-SANIDAD/SciELO-Spain-Crawler) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. |
| [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. |
| Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. |
| Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for JSON files of patents are: "A61B", "A61C", "A61F", "A61H", "A61K", "A61L", "A61M", "A61B", "A61P". |
| [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. |
| [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpora consisting of biomedical scientific literature. The collection of parallel resources is aggregated from the MedlinePlus source. |
| PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. |

## Evaluation
The model has been evaluated on Named Entity Recognition (NER) using the following datasets:

- [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/).
- [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). - ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. The evaluation results are compared against the [mBERT](https://huggingface.co/bert-base-multilingual-cased) and [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) models: | F1 - Precision - Recall | roberta-base-biomedical-es | mBERT | BETO | |---------------------------|----------------------------|-------------------------------|-------------------------| | PharmaCoNER | **89.48** - **87.85** - **91.18** | 87.46 - 86.50 - 88.46 | 88.18 - 87.12 - 89.28 | | CANTEMIST | **83.87** - **81.70** - **86.17** | 82.61 - 81.12 - 84.15 | 82.42 - 80.91 - 84.00 | | ICTUSnet | **88.12** - **85.56** - **90.83** | 86.75 - 83.53 - 90.23 | 85.95 - 83.10 - 89.02 | ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Citation information If you use our models, please cite our latest preprint: ```bibtex @misc{carrino2021biomedical, title={Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Asier Gutiérrez-Fandiño and Joan Llop-Palao and Marc Pàmies and Aitor Gonzalez-Agirre and Marta Villegas}, year={2021}, eprint={2109.03570}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` If you use our Medical Crawler corpus, please cite the preprint: ```bibtex @misc{carrino2021spanish, title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas}, year={2021}, eprint={2109.07765}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. 
In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
{"language": ["es"], "license": "apache-2.0", "tags": ["biomedical", "spanish"], "metrics": ["ppl"], "widget": [{"text": "El \u00fanico antecedente personal a rese\u00f1ar era la <mask> arterial."}, {"text": "Las radiolog\u00edas \u00f3seas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales."}, {"text": "En el <mask> toraco-abd\u00f3mino-p\u00e9lvico no se encontraron hallazgos patol\u00f3gicos de inter\u00e9s."}]}
PlanTL-GOB-ES/roberta-base-biomedical-es
null
[ "transformers", "pytorch", "roberta", "fill-mask", "biomedical", "spanish", "es", "arxiv:2109.03570", "arxiv:2109.07765", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2109.03570", "2109.07765" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #fill-mask #biomedical #spanish #es #arxiv-2109.03570 #arxiv-2109.07765 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Biomedical language model for Spanish ===================================== Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training + Tokenization and model pretraining + Training corpora and preprocessing * Evaluation * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Disclaimer Model description ----------------- Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official repository and read our preprint. Intended uses and limitations ----------------------------- The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. How to use ---------- Training -------- ### Tokenization and model pretraining This model is a RoBERTa-based model trained on a biomedical corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. ### Training corpora and preprocessing The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers. To obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied: * data parsing in different formats + sentence splitting + language detection + filtering of ill-formed sentences + deduplication of repetitive contents + keep the original document boundaries Finally, the corpora are concatenated and further global deduplication among the corpora have been applied. The result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora: Name: Medical crawler, No. tokens: 745,705,946, Description: Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. Name: Clinical cases misc., No. tokens: 102,855,267, Description: A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. Name: Scielo, No. tokens: 60,007,289, Description: Publications written in Spanish crawled from the Spanish SciELO server in 2017. Name: BARR2\_background, No. tokens: 24,516,442, Description: Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. Name: Wikipedia\_life\_sciences, No. tokens: 13,890,501, Description: Wikipedia articles crawled 04/01/2021 with the Wikipedia API python library starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. 
Name: Patents, No. tokens: 13,463,387, Description: Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". Name: EMEA, No. tokens: 5,377,448, Description: Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. Name: mespen\_Medline, No. tokens: 4,166,077, Description: Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources are aggregated from the MedlinePlus source. Name: PubMed, No. tokens: 1,858,966, Description: Open-access articles from the PubMed repository crawled in 2017. Evaluation ---------- The model has been evaluated on the Named Entity Recognition (NER) using the following datasets: * PharmaCoNER: is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: URL * CANTEMIST: is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: URL * ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. The evaluation results are compared against the mBERT and BETO models: Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. information If you use our models, please cite our latest preprint: If you use our Medical Crawler corpus, please cite the preprint: ### Disclaimer Click to expand The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. 
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Tokenization and model pretraining\n\n\nThis model is a RoBERTa-based model trained on a\nbiomedical corpus in Spanish collected from several sources (see next section).\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE)\nused in the original RoBERTA model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences.", "### Training corpora and preprocessing\n\n\nThe training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers.\nTo obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied:\n\n\n* data parsing in different formats\n\t+ sentence splitting\n\t+ language detection\n\t+ filtering of ill-formed sentences\n\t+ deduplication of repetitive contents\n\t+ keep the original document boundaries\n\n\nFinally, the corpora are concatenated and further global deduplication among the corpora have been applied.\nThe result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora:\n\n\nName: Medical crawler, No. tokens: 745,705,946, Description: Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains.\nName: Clinical cases misc., No. tokens: 102,855,267, Description: A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document.\nName: Scielo, No. tokens: 60,007,289, Description: Publications written in Spanish crawled from the Spanish SciELO server in 2017.\nName: BARR2\\_background, No. tokens: 24,516,442, Description: Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines.\nName: Wikipedia\\_life\\_sciences, No. tokens: 13,890,501, Description: Wikipedia articles crawled 04/01/2021 with the Wikipedia API python library starting from the \"Ciencias\\_de\\_la\\_vida\" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content.\nName: Patents, No. tokens: 13,463,387, Description: Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: \"A61B\", \"A61C\",\"A61F\", \"A61H\", \"A61K\", \"A61L\",\"A61M\", \"A61B\", \"A61P\".\nName: EMEA, No. tokens: 5,377,448, Description: Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency.\nName: mespen\\_Medline, No. tokens: 4,166,077, Description: Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources are aggregated from the MedlinePlus source.\nName: PubMed, No. 
tokens: 1,858,966, Description: Open-access articles from the PubMed repository crawled in 2017.\n\n\nEvaluation\n----------\n\n\nThe model has been evaluated on the Named Entity Recognition (NER) using the following datasets:\n\n\n* PharmaCoNER: is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: URL\n* CANTEMIST: is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: URL\n* ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.\n\n\nThe evaluation results are compared against the mBERT and BETO models:\n\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use our models, please cite our latest preprint:\n\n\nIf you use our Medical Crawler corpus, please cite the preprint:", "### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #biomedical #spanish #es #arxiv-2109.03570 #arxiv-2109.07765 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Tokenization and model pretraining\n\n\nThis model is a RoBERTa-based model trained on a\nbiomedical corpus in Spanish collected from several sources (see next section).\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE)\nused in the original RoBERTA model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences.", "### Training corpora and preprocessing\n\n\nThe training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers.\nTo obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied:\n\n\n* data parsing in different formats\n\t+ sentence splitting\n\t+ language detection\n\t+ filtering of ill-formed sentences\n\t+ deduplication of repetitive contents\n\t+ keep the original document boundaries\n\n\nFinally, the corpora are concatenated and further global deduplication among the corpora have been applied.\nThe result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora:\n\n\nName: Medical crawler, No. tokens: 745,705,946, Description: Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains.\nName: Clinical cases misc., No. tokens: 102,855,267, Description: A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document.\nName: Scielo, No. tokens: 60,007,289, Description: Publications written in Spanish crawled from the Spanish SciELO server in 2017.\nName: BARR2\\_background, No. tokens: 24,516,442, Description: Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines.\nName: Wikipedia\\_life\\_sciences, No. tokens: 13,890,501, Description: Wikipedia articles crawled 04/01/2021 with the Wikipedia API python library starting from the \"Ciencias\\_de\\_la\\_vida\" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content.\nName: Patents, No. tokens: 13,463,387, Description: Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: \"A61B\", \"A61C\",\"A61F\", \"A61H\", \"A61K\", \"A61L\",\"A61M\", \"A61B\", \"A61P\".\nName: EMEA, No. tokens: 5,377,448, Description: Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency.\nName: mespen\\_Medline, No. tokens: 4,166,077, Description: Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources are aggregated from the MedlinePlus source.\nName: PubMed, No. 
tokens: 1,858,966, Description: Open-access articles from the PubMed repository crawled in 2017.\n\n\nEvaluation\n----------\n\n\nThe model has been evaluated on the Named Entity Recognition (NER) using the following datasets:\n\n\n* PharmaCoNER: is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: URL\n* CANTEMIST: is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: URL\n* ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.\n\n\nThe evaluation results are compared against the mBERT and BETO models:\n\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use our models, please cite our latest preprint:\n\n\nIf you use our Medical Crawler corpus, please cite the preprint:", "### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 64, 160, 814, 28, 40, 24, 12, 64, 448 ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #biomedical #spanish #es #arxiv-2109.03570 #arxiv-2109.07765 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Tokenization and model pretraining\n\n\nThis model is a RoBERTa-based model trained on a\nbiomedical corpus in Spanish collected from several sources (see next section).\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE)\nused in the original RoBERTA model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences.### Training corpora and preprocessing\n\n\nThe training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers.\nTo obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied:\n\n\n* data parsing in different formats\n\t+ sentence splitting\n\t+ language detection\n\t+ filtering of ill-formed sentences\n\t+ deduplication of repetitive contents\n\t+ keep the original document boundaries\n\n\nFinally, the corpora are concatenated and further global deduplication among the corpora have been applied.\nThe result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora:\n\n\nName: Medical crawler, No. tokens: 745,705,946, Description: Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains.\nName: Clinical cases misc., No. tokens: 102,855,267, Description: A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document.\nName: Scielo, No. tokens: 60,007,289, Description: Publications written in Spanish crawled from the Spanish SciELO server in 2017.\nName: BARR2\\_background, No. tokens: 24,516,442, Description: Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines.\nName: Wikipedia\\_life\\_sciences, No. tokens: 13,890,501, Description: Wikipedia articles crawled 04/01/2021 with the Wikipedia API python library starting from the \"Ciencias\\_de\\_la\\_vida\" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content.\nName: Patents, No. tokens: 13,463,387, Description: Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: \"A61B\", \"A61C\",\"A61F\", \"A61H\", \"A61K\", \"A61L\",\"A61M\", \"A61B\", \"A61P\".\nName: EMEA, No. tokens: 5,377,448, Description: Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency.\nName: mespen\\_Medline, No. tokens: 4,166,077, Description: Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources are aggregated from the MedlinePlus source.\nName: PubMed, No. 
tokens: 1,858,966, Description: Open-access articles from the PubMed repository crawled in 2017.\n\n\nEvaluation\n----------\n\n\nThe model has been evaluated on the Named Entity Recognition (NER) using the following datasets:\n\n\n* PharmaCoNER: is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: URL\n* CANTEMIST: is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: URL\n* ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.\n\n\nThe evaluation results are compared against the mBERT and BETO models:\n\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use our models, please cite our latest preprint:\n\n\nIf you use our Medical Crawler corpus, please cite the preprint:### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
token-classification
transformers
# Spanish RoBERTa-base trained on BNE, fine-tuned on the CAPITEL Named Entity Recognition (NER) dataset

## Table of contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Variable and metrics](#variable-and-metrics)
  - [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)

</details>

## Model description
The **roberta-base-bne-capitel-ner-plus** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. This model is a more robust version of the [roberta-base-bne-capitel-ner](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner) model that better recognizes lowercased Named Entities (NE).

## Intended uses and limitations
The **roberta-base-bne-capitel-ner-plus** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases.

## How to use
```python
from transformers import pipeline
from pprint import pprint

nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus")
example = "Me llamo francisco javier y vivo en madrid."

ner_results = nlp(example)
pprint(ner_results)
```

## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training
The dataset used for training and evaluation is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). We lowercased and uppercased the dataset and added these additional sentences to the training set.

### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.

## Evaluation

### Variable and metrics
This model was fine-tuned maximizing the F1 score.
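For reference, the fine-tuning recipe and metric described above can be approximated with a standard `transformers` `Trainer` configuration. The sketch below is illustrative, not the original training script: the output directory and evaluation schedule are assumptions, CAPITEL data loading is omitted, and the entity-level F1 is shown with `seqeval` on a toy example.

```python
from transformers import TrainingArguments
from seqeval.metrics import f1_score

# Sketch of the reported hyperparameters: batch size 16, learning rate 5e-5,
# 5 epochs, best checkpoint selected by F1 on the development set.
training_args = TrainingArguments(
    output_dir="capitel-ner-finetuning",  # assumed path
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=5,
    evaluation_strategy="epoch",          # assumed; the original schedule is not documented here
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",
)

# Entity-level F1 (the metric maximized above), computed here on a toy IOB-tagged example
gold = [["B-PER", "I-PER", "O", "O", "B-LOC"]]
pred = [["B-PER", "I-PER", "O", "O", "B-LOC"]]
print(f1_score(gold, pred))  # 1.0 when predictions match the gold labels
```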
## Evaluation results We evaluated the **roberta-base-bne-capitel-ner-plus** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: | Model | CAPITEL-NERC (F1) | | ------------|:----| | roberta-large-bne-capitel-ner | **90.51** | | roberta-base-bne-capitel-ner | 89.60| | roberta-base-bne-capitel-ner-plus | 89.60| | BETO | 87.72 | | mBERT | 88.10 | | BERTIN | 88.56 | | ELECTRA | 80.35 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. 
Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "ner"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": ["Me llamo francisco javier y vivo en madrid.", "Mi hermano ram\u00f3n y su mejor amigo luis trabajan en el bsc."], "model-index": [{"name": "roberta-base-bne-capiter-ner-plus", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-NERC", "type": "ner"}, "metrics": [{"type": "f1", "value": 0.896, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. ================================================================================================= Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training * Training + Training data + Training procedure * Evaluation * Evaluation + Variable and metrics + Evaluation results * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citing information + Disclaimer Model description ----------------- The roberta-base-bne-capitel-ner-plus is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the roberta-base-bne model, a RoBERTa base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. This model is a more robust version of the roberta-base-bne-capitel-ner model that recognizes better lowercased Named Entities (NE). Intended uses and limitations ----------------------------- roberta-base-bne-capitel-ner-plus model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases. How to use ---------- Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training -------- The dataset used for training and evaluation is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). We lowercased and uppercased the dataset, and added the additional sentences to the training. ### Training procedure The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. Evaluation ---------- ### Variable and metrics This model was finetuned maximizing F1 score. Evaluation results ------------------ We evaluated the roberta-base-bne-capitel-ner-plus on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: For more details, check the fine-tuning and evaluation scripts in the official GitHub repository. Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our paper: ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. 
These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
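The "How to use" section of the processed card above carries no snippet; a minimal usage sketch is given below, assuming the checkpoint is published as PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus (the repository id is inferred from the sibling cards in this dump, not stated in the text above).

```python
from transformers import pipeline
from pprint import pprint

# Assumed repository id; the processed card above does not spell it out.
nlp = pipeline(
    "ner",
    model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus",
    aggregation_strategy="first",  # merge sub-word pieces into whole entities
)

# A lowercased sentence, the case this "plus" variant is meant to handle better.
pprint(nlp("me llamo francisco javier y vivo en madrid."))
```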
[ "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-ner-plus on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-ner-plus on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 82, 65, 122, 28, 40, 24, 12, 33, 16, 445 ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-ner-plus on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.### Citing information\n\n\nIf you use this model, please cite our paper:### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
token-classification
transformers
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-bne-capitel-ner** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations **roberta-base-bne-capitel-ner** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases. ## How to use ```python from transformers import pipeline from pprint import pprint nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner") example = "Me llamo Francisco Javier y vivo en Madrid." ner_results = nlp(example) pprint(ner_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used for training and evaluation is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). ### Training procedure The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1 score. ## Evaluation results We evaluated the **roberta-base-bne-capitel-ner** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: | Model | CAPITEL-NERC (F1) | | ------------|:----| | roberta-large-bne-capitel-ner | **90.51** | | roberta-base-bne-capitel-ner | 89.60| | BETO | 87.72 | | mBERT | 88.10 | | BERTIN | 88.56 | | ELECTRA | 80.35 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). 
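As a complement to the pipeline call in the "How to use" section above, the following is a minimal sketch of pipeline-free inference with the same checkpoint, useful when the raw per-token predictions are needed; only the model id already shown in this card is used.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "PlanTL-GOB-ES/roberta-base-bne-capitel-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Me llamo Francisco Javier y vivo en Madrid."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each token to its highest-scoring label (the BIO tag set lives in the config).
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, pred in zip(tokens, predictions):
    print(f"{token:>15}  {model.config.id2label[pred.item()]}")
```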
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "ner"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": ["Me llamo Francisco Javier y vivo en Madrid.", "Mi hermano Ram\u00f3n y su mejor amigo Luis trabajan en el BSC."], "model-index": [{"name": "roberta-base-bne-capitel-ner", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-NERC", "type": "ner"}, "metrics": [{"type": "f1", "value": 0.896, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-base-bne-capitel-ner
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. ================================================================================================= Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training * Training + Training data + Training procedure * Evaluation * Evaluation + Variable and metrics + Evaluation results * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citing information + Disclaimer Model description ----------------- The roberta-base-bne-capitel-ner is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the roberta-base-bne model, a RoBERTa base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ----------------------------- roberta-base-bne-capitel-ner model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases. How to use ---------- Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training -------- The dataset used for training and evaluation is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). ### Training procedure The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. Evaluation ---------- ### Variable and metrics This model was finetuned maximizing F1 score. Evaluation results ------------------ We evaluated the roberta-base-bne-capitel-ner on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: For more details, check the fine-tuning and evaluation scripts in the official GitHub repository. Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our paper: ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-ner on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-ner on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 86, 65, 120, 28, 40, 24, 12, 33, 16, 445 ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-ner on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.### Citing information\n\n\nIf you use this model, please cite our paper:### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
token-classification
transformers
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-bne-capitel-pos** is a Part-of-speech-tagging (POS) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. # Intended uses and limitations **roberta-base-bne-capitel-pos** model can be used for Part-of-speech tagging (POS) of a text. The model is limited by its training dataset and may not generalize well for all use cases. ## How to use Here is how to use this model: ```python from transformers import pipeline from pprint import pprint nlp = pipeline("token-classification", model="PlanTL-GOB-ES/roberta-base-bne-capitel-pos") example = "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto." pos_results = nlp(example) pprint(pos_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2). ### Training procedure The model was trained with a batch size of 32 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1 score. ## Evaluation results We evaluated the **roberta-base-bne-capitel-pos** on the CAPITEL-POS test set against standard multilingual and monolingual baselines: | Model | CAPITEL-POS (F1) | | ------------|:----| | roberta-large-bne-capitel-pos | **98.56** | | roberta-base-bne-capitel-pos | 98.46 | | BETO | 98.36 | | mBERT | 98.39 | | BERTIN | 98.47 | | ELECTRA | 98.16 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
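As a small follow-up to the snippet above, the sketch below shows the effect of aggregation_strategy="first" (the same setting used in this card's inference metadata): sub-word pieces are merged so each word receives a single POS tag. This example is not part of the original card.

```python
from transformers import pipeline

pos = pipeline(
    "token-classification",
    model="PlanTL-GOB-ES/roberta-base-bne-capitel-pos",
    aggregation_strategy="first",  # one tag per word instead of one per sub-word piece
)

example = "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."
for word in pos(example):
    print(f"{word['word']:>12}  {word['entity_group']}")
```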
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "pos"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": [{"text": "Festival de San Sebasti\u00e1n: Johnny Depp recibir\u00e1 el premio Donostia en pleno rifirrafe judicial con Amber Heard"}, {"text": "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."}, {"text": "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."}], "model-index": [{"name": "roberta-base-bne-capitel-pos", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-POS", "type": "pos"}, "metrics": [{"type": "f1", "value": 0.9846, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-base-bne-capitel-pos
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "pos", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #pos #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset ====================================================================================== Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training * Training + Training data + Training procedure * Evaluation * Evaluation + Variable and metrics + Evaluation results * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citing information + Disclaimer Model description ----------------- The roberta-base-bne-capitel-pos is a Part-of-speech-tagging (POS) model for the Spanish language fine-tuned from the roberta-base-bne model, a RoBERTa base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ============================= roberta-base-bne-capitel-pos model can be used for Part-of-speech tagging (POS) of a text. The model is limited by its training dataset and may not generalize well for all use cases. How to use ---------- Here is how to use this model: Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training -------- The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 2). ### Training procedure The model was trained with a batch size of 32 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. Evaluation ---------- ### Variable and metrics This model was finetuned maximizing F1 score. Evaluation results ------------------ We evaluated the roberta-base-bne-capitel-pos on the CAPITEL-POS test set against standard multilingual and monolingual baselines: For more details, check the fine-tuning and evaluation scripts in the official GitHub repository. Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our paper: ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training procedure\n\n\nThe model was trained with a batch size of 32 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-pos on the CAPITEL-POS test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #pos #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training procedure\n\n\nThe model was trained with a batch size of 32 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-pos on the CAPITEL-POS test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 82, 65, 120, 28, 40, 24, 12, 33, 16, 445 ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #pos #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training procedure\n\n\nThe model was trained with a batch size of 32 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-capitel-pos on the CAPITEL-POS test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.### Citing information\n\n\nIf you use this model, please cite our paper:### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
question-answering
transformers
# Spanish RoBERTa-base trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset.

## Table of contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Variable and metrics](#variable-and-metrics)
  - [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)

</details>

## Model description
The **roberta-base-bne-sqac** is a Question Answering (QA) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## Intended uses and limitations
The **roberta-base-bne-sqac** model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases.

## How to use
```python
from transformers import pipeline

nlp = pipeline("question-answering", model="PlanTL-GOB-ES/roberta-base-bne-sqac")
text = "¿Dónde vivo?"
context = "Me llamo Wolfgang y vivo en Berlin"

qa_results = nlp(text, context)
print(qa_results)
```

## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training data
We used the QA dataset in Spanish called [SQAC corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC) for training and evaluation.

### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.

## Evaluation results
We evaluated the **roberta-base-bne-sqac** on the SQAC test set against standard multilingual and monolingual baselines:

| Model                  | SQAC (F1) |
| ---------------------- |:---------:|
| roberta-large-bne-sqac | **82.02** |
| roberta-base-bne-sqac  | 79.23     |
| BETO                   | 79.23     |
| mBERT                  | 75.62     |
| BERTIN                 | 76.78     |
| ELECTRA                | 73.83     |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
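As a rough, unofficial illustration of this evaluation (the official scripts live in the repository linked above), the sketch below scores the model on a slice of the SQAC test split with the SQuAD-style exact-match/F1 metric. It assumes the hub copy of SQAC exposes SQuAD-like columns (`id`, `question`, `context`, `answers`) and that the `datasets` and `evaluate` packages are installed; adjust the column names if the dataset card says otherwise.

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

# Load the fine-tuned QA model and a slice of the SQAC test split.
# Depending on your `datasets` version, the dataset's loading script may
# require passing trust_remote_code=True to load_dataset.
qa = pipeline("question-answering", model="PlanTL-GOB-ES/roberta-base-bne-sqac")
test = load_dataset("PlanTL-GOB-ES/SQAC", split="test").select(range(100))  # subsample for a quick demo

# SQuAD-style exact-match / F1 metric.
metric = evaluate.load("squad")

predictions, references = [], []
for example in test:
    result = qa(question=example["question"], context=example["context"])
    predictions.append({"id": example["id"], "prediction_text": result["answer"]})
    references.append({"id": example["id"], "answers": example["answers"]})

print(metric.compute(predictions=predictions, references=references))
```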
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "qa", "question answering"], "datasets": ["PlanTL-GOB-ES/SQAC"], "metrics": ["f1", "exact match"], "model-index": [{"name": "roberta-base-bne-sqac", "results": [{"task": {"type": "question-answering"}, "dataset": {"name": "SQAC", "type": "PlanTL-GOB-ES/SQAC"}, "metrics": [{"type": "f1", "value": 0.7923, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-base-bne-sqac
null
[ "transformers", "pytorch", "roberta", "question-answering", "national library of spain", "spanish", "bne", "qa", "question answering", "es", "dataset:PlanTL-GOB-ES/SQAC", "arxiv:1907.11692", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #question-answering #national library of spain #spanish #bne #qa #question answering #es #dataset-PlanTL-GOB-ES/SQAC #arxiv-1907.11692 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
Spanish RoBERTa-base trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset. =================================================================================================== Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training * Training + Training data + Training procedure * Evaluation * Evaluation + Variable and metrics + Evaluation results * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citing information + Disclaimer Model description ----------------- The roberta-base-bne-sqac is a Question Answering (QA) model for the Spanish language fine-tuned from the roberta-base-bne model, a RoBERTa base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ----------------------------- roberta-base-bne-sqac model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases. How to use ---------- Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training -------- ### Training data We used the QA dataset in Spanish called SQAC corpus for training and evaluation. ### Training procedure The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. Evaluation results ------------------ We evaluated the roberta-base-bne-sqac on the SQAC test set against standard multilingual and monolingual baselines: For more details, check the fine-tuning and evaluation scripts in the official GitHub repository. Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our paper: ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training data\n\n\nWe used the QA dataset in Spanish called SQAC corpus for training and evaluation.", "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-sqac on the SQAC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #question-answering #national library of spain #spanish #bne #qa #question answering #es #dataset-PlanTL-GOB-ES/SQAC #arxiv-1907.11692 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "### Training data\n\n\nWe used the QA dataset in Spanish called SQAC corpus for training and evaluation.", "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-sqac on the SQAC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 81, 23, 148, 28, 40, 24, 12, 33, 16, 445 ]
[ "TAGS\n#transformers #pytorch #roberta #question-answering #national library of spain #spanish #bne #qa #question answering #es #dataset-PlanTL-GOB-ES/SQAC #arxiv-1907.11692 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n### Training data\n\n\nWe used the QA dataset in Spanish called SQAC corpus for training and evaluation.### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-base-bne-sqac on the SQAC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.### Citing information\n\n\nIf you use this model, please cite our paper:### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
fill-mask
transformers
# RoBERTa base trained with data from the National Library of Spain (BNE) ## Table of Contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation Information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** roberta-base - **Language:** Spanish - **Task:** fill-mask - **Data:** BNE ## Model description The **roberta-base-bne** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations The **roberta-base-bne** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. ## How to use Here is how to use this model: ```python >>> from transformers import pipeline >>> from pprint import pprint >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne') >>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje.")) [{'score': 0.08422081917524338, 'token': 3832, 'token_str': ' desarrollar', 'sequence': 'Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.'}, {'score': 0.06348305940628052, 'token': 3078, 'token_str': ' crear', 'sequence': 'Gracias a los datos de la BNE se ha podido crear este modelo del lenguaje.'}, {'score': 0.06148449331521988, 'token': 2171, 'token_str': ' realizar', 'sequence': 'Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.'}, {'score': 0.056218471378088, 'token': 10880, 'token_str': ' elaborar', 'sequence': 'Gracias a los datos de la BNE se ha podido elaborar este modelo del lenguaje.'}, {'score': 0.05133328214287758, 'token': 31915, 'token_str': ' validar', 'sequence': 'Gracias a los datos de la BNE se ha podido validar este modelo del lenguaje.'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import RobertaTokenizer, RobertaModel >>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne') >>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/roberta-base-bne') >>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje." 
>>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 19, 768]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> from pprint import pprint >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne') >>> set_seed(42) >>> pprint(unmasker("Antonio está pensando en <mask>.")) [{'score': 0.07950365543365479, 'sequence': 'Antonio está pensando en ti.', 'token': 486, 'token_str': ' ti'}, {'score': 0.03375273942947388, 'sequence': 'Antonio está pensando en irse.', 'token': 13134, 'token_str': ' irse'}, {'score': 0.031026942655444145, 'sequence': 'Antonio está pensando en casarse.', 'token': 24852, 'token_str': ' casarse'}, {'score': 0.030703715980052948, 'sequence': 'Antonio está pensando en todo.', 'token': 665, 'token_str': ' todo'}, {'score': 0.02838558703660965, 'sequence': 'Antonio está pensando en ello.', 'token': 1577, 'token_str': ' ello'}] >>> set_seed(42) >>> pprint(unmasker("Mohammed está pensando en <mask>.")) [{'score': 0.05433618649840355, 'sequence': 'Mohammed está pensando en morir.', 'token': 9459, 'token_str': ' morir'}, {'score': 0.0400255024433136, 'sequence': 'Mohammed está pensando en irse.', 'token': 13134, 'token_str': ' irse'}, {'score': 0.03705748915672302, 'sequence': 'Mohammed está pensando en todo.', 'token': 665, 'token_str': ' todo'}, {'score': 0.03658654913306236, 'sequence': 'Mohammed está pensando en quedarse.', 'token': 9331, 'token_str': ' quedarse'}, {'score': 0.03329474478960037, 'sequence': 'Mohammed está pensando en ello.', 'token': 1577, 'token_str': ' ello'}] ``` ## Training ### Training data The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: | Corpora | Number of documents | Number of tokens | Size (GB) | |---------|---------------------|------------------|-----------| | BNE | 201,080,084 | 135,733,450,668 | 570GB | ### Training procedure The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [RoBERTA](https://arxiv.org/abs/1907.11692) model with a vocabulary size of 50,262 tokens. The **roberta-base-bne** pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The training lasted a total of 48 hours with 16 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM. 
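To make the tokenization step above concrete, a quick check of the byte-level BPE tokenizer shipped with the model can be run as follows (a small sketch; the 50,262 figure is the vocabulary size reported in this card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-bne")

# Size of the byte-level BPE vocabulary (reported as 50,262 tokens in this card).
print(len(tokenizer))

# How a Spanish sentence is split into subword units.
print(tokenizer.tokenize("Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje."))
```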
## Evaluation When fine-tuned on downstream tasks, this model achieves the following results: | Dataset | Metric | [**RoBERTa-base**](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) | |--------------|----------|------------| | MLDoc | F1 | 0.9664 | | CoNLL-NERC | F1 | 0.8851 | | CAPITEL-NERC | F1 | 0.8960 | | PAWS-X | F1 | 0.9020 | | UD-POS | F1 | 0.9907 | | CAPITEL-POS | F1 | 0.9846 | | SQAC | F1 | 0.7923 | | STS | Combined | 0.8533 | | XNLI | Accuracy | 0.8016 | For more evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish) or [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405). ## Additional information ### Author Text Mining Unit (TeMU) from Barcelona Supercomputing Center (<bsc-temu@bsc.es>). ### Contact information For further information, send an email to <plantl-gob-es@bsc.es>. ### Copyright Copyright by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx). ### Licensing information This work is licensed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) within the framework of the Plan-TL. ### Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, title = {MarIA: Spanish Language Models}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, volume = {68}, year = {2022}, } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA) nor the creator (BSC) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. 
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de Inteligencia Artificial.

En ningún caso el propietario de los modelos (SEDIA) ni el creador (BSC) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
</details>
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "roberta-base-bne"], "datasets": ["bne"], "metrics": ["ppl"], "widget": [{"text": "Por la ventanilla del coche vi la Giralda y pens\u00e9 que bonita que es la ciudad de <mask>."}, {"text": "M\u00e1s vale <mask> que lamentar."}, {"text": "Caminante no hay camino, se hace camino al <mask>."}, {"text": "Tengo una pelota roja y otra amarilla. Si le doy la roja a Jose, s\u00f3lo me queda la <mask>."}, {"text": "Tengo una pelota roja y otra amarilla. Si le doy la amarilla a Jose, s\u00f3lo me queda la <mask>."}, {"text": "El <mask> es el pico m\u00e1s alto de Espa\u00f1a."}]}
PlanTL-GOB-ES/roberta-base-bne
null
[ "transformers", "pytorch", "roberta", "fill-mask", "national library of spain", "spanish", "bne", "roberta-base-bne", "es", "dataset:bne", "arxiv:1907.11692", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #fill-mask #national library of spain #spanish #bne #roberta-base-bne #es #dataset-bne #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
RoBERTa base trained with data from the National Library of Spain (BNE) ======================================================================= Table of Contents ----------------- Click to expand * Overview * Model description * Intended uses and limitations * How to use * Limitations and bias * Training + Training data + Training procedure * Evaluation * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citation Information + Disclaimer Overview -------- * Architecture: roberta-base * Language: Spanish * Task: fill-mask * Data: BNE Model description ----------------- The roberta-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ----------------------------- The roberta-base-bne model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. How to use ---------- Here is how to use this model: Here is how to use this model to get the features of a given text in PyTorch: Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: Training -------- ### Training data The National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: ### Training procedure The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens. The roberta-base-bne pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The training lasted a total of 48 hours with 16 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM. 
Evaluation ---------- When fine-tuned on downstream tasks, this model achieves the following results: Dataset: MLDoc, Metric: F1, RoBERTa-base: 0.9664 Dataset: CoNLL-NERC, Metric: F1, RoBERTa-base: 0.8851 Dataset: CAPITEL-NERC, Metric: F1, RoBERTa-base: 0.8960 Dataset: PAWS-X, Metric: F1, RoBERTa-base: 0.9020 Dataset: UD-POS, Metric: F1, RoBERTa-base: 0.9907 Dataset: CAPITEL-POS, Metric: F1, RoBERTa-base: 0.9846 Dataset: SQAC, Metric: F1, RoBERTa-base: 0.7923 Dataset: STS, Metric: Combined, RoBERTa-base: 0.8533 Dataset: XNLI, Metric: Accuracy, RoBERTa-base: 0.8016 For more evaluation details visit our GitHub repository or paper. Additional information ---------------------- ### Author Text Mining Unit (TeMU) from Barcelona Supercomputing Center ([bsc-temu@URL](mailto:bsc-temu@URL)). ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL). ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA). ### Licensing information This work is licensed under a Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. information If you use this model, please cite our paper: ### Disclaimer Click to expand The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA) nor the creator (BSC) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de Inteligencia Artificial. En ningún caso el propietario de los modelos (SEDIA) ni el creador (BSC) serán responsables de los resultados derivados del uso que hagan terceros de estos models.
[ "### Training data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:", "### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe roberta-base-bne pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The training lasted a total of 48 hours with 16 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nDataset: MLDoc, Metric: F1, RoBERTa-base: 0.9664\nDataset: CoNLL-NERC, Metric: F1, RoBERTa-base: 0.8851\nDataset: CAPITEL-NERC, Metric: F1, RoBERTa-base: 0.8960\nDataset: PAWS-X, Metric: F1, RoBERTa-base: 0.9020\nDataset: UD-POS, Metric: F1, RoBERTa-base: 0.9907\nDataset: CAPITEL-POS, Metric: F1, RoBERTa-base: 0.9846\nDataset: SQAC, Metric: F1, RoBERTa-base: 0.7923\nDataset: STS, Metric: Combined, RoBERTa-base: 0.8533\nDataset: XNLI, Metric: Accuracy, RoBERTa-base: 0.8016\n\n\nFor more evaluation details visit our GitHub repository or paper.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) from Barcelona Supercomputing Center ([bsc-temu@URL](mailto:bsc-temu@URL)).", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL).", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA).", "### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA) nor the creator (BSC) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. 
Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de Inteligencia Artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA) ni el creador (BSC) serán responsables de los resultados derivados del uso que hagan terceros de estos models." ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #national library of spain #spanish #bne #roberta-base-bne #es #dataset-bne #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:", "### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe roberta-base-bne pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The training lasted a total of 48 hours with 16 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nDataset: MLDoc, Metric: F1, RoBERTa-base: 0.9664\nDataset: CoNLL-NERC, Metric: F1, RoBERTa-base: 0.8851\nDataset: CAPITEL-NERC, Metric: F1, RoBERTa-base: 0.8960\nDataset: PAWS-X, Metric: F1, RoBERTa-base: 0.9020\nDataset: UD-POS, Metric: F1, RoBERTa-base: 0.9907\nDataset: CAPITEL-POS, Metric: F1, RoBERTa-base: 0.9846\nDataset: SQAC, Metric: F1, RoBERTa-base: 0.7923\nDataset: STS, Metric: Combined, RoBERTa-base: 0.8533\nDataset: XNLI, Metric: Accuracy, RoBERTa-base: 0.8016\n\n\nFor more evaluation details visit our GitHub repository or paper.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) from Barcelona Supercomputing Center ([bsc-temu@URL](mailto:bsc-temu@URL)).", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL).", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA).", "### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. 
These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA) nor the creator (BSC) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de Inteligencia Artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA) ni el creador (BSC) serán responsables de los resultados derivados del uso que hagan terceros de estos models." ]
[ 75, 147, 342, 42, 41, 21, 18, 45, 408 ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #national library of spain #spanish #bne #roberta-base-bne #es #dataset-bne #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### Training data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe roberta-base-bne pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The training lasted a total of 48 hours with 16 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nDataset: MLDoc, Metric: F1, RoBERTa-base: 0.9664\nDataset: CoNLL-NERC, Metric: F1, RoBERTa-base: 0.8851\nDataset: CAPITEL-NERC, Metric: F1, RoBERTa-base: 0.8960\nDataset: PAWS-X, Metric: F1, RoBERTa-base: 0.9020\nDataset: UD-POS, Metric: F1, RoBERTa-base: 0.9907\nDataset: CAPITEL-POS, Metric: F1, RoBERTa-base: 0.9846\nDataset: SQAC, Metric: F1, RoBERTa-base: 0.7923\nDataset: STS, Metric: Combined, RoBERTa-base: 0.8533\nDataset: XNLI, Metric: Accuracy, RoBERTa-base: 0.8016\n\n\nFor more evaluation details visit our GitHub repository or paper.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) from Barcelona Supercomputing Center ([bsc-temu@URL](mailto:bsc-temu@URL)).### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL).### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA).### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\nIf you use this model, please cite our paper:### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. 
These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA) nor the creator (BSC) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de Inteligencia Artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA) ni el creador (BSC) serán responsables de los resultados derivados del uso que hagan terceros de estos models." ]
fill-mask
transformers
# BERTa: RoBERTa-based Catalan language model ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description BERTa is a transformer-based masked language model for the Catalan language. It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model and has been trained on a medium-size corpus collected from publicly available corpora and crawlers. This model was originally published as [bsc/roberta-base-ca-cased](https://huggingface.co/bsc/roberta-base-ca-cased). ## Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition. ## How to use ### Load model and tokenizer ``` python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-ca-cased") model = AutoModelForMaskedLM.from_pretrained("PlanTL-GOB-ES/roberta-base-ca-cased") ``` ### Fill Mask task Below, an example of how to use the masked language modelling task with a pipeline. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-ca-cased') >>> unmasker("Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.") [ { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.4177263379096985, "token": 734, "token_str": " Barcelona" }, { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.10696165263652802, "token": 3849, "token_str": " Badalona" }, { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, 
" "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.08135009557008743, "token": 19349, "token_str": " Collserola" }, { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.07330769300460815, "token": 4974, "token_str": " Terrassa" }, { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.03317456692457199, "token": 14333, "token_str": " Gavà" } ] ``` ## Limitations and bias ## Training ### Training corpora and preprocessing The training corpus consists of several corpora gathered from web crawling and public corpora. The publicly available corpora are: 1. the Catalan part of the [DOGC](http://opus.nlpl.eu/DOGC-v2.php) corpus, a set of documents from the Official Gazette of the Catalan Government 2. the [Catalan Open Subtitles](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2018/mono/OpenSubtitles.raw.ca.gz), a collection of translated movie subtitles 3. the non-shuffled version of the Catalan part of the [OSCAR](https://traces1.inria.fr/oscar/) corpus \\\\cite{suarez2019asynchronous}, a collection of monolingual corpora, filtered from [Common Crawl](https://commoncrawl.org/about/) 4. The [CaWac](http://nlp.ffzg.hr/resources/corpora/cawac/) corpus, a web corpus of Catalan built from the .cat top-level-domain in late 2013 the non-deduplicated version 5. the [Catalan Wikipedia articles](https://ftp.acc.umu.se/mirror/wikimedia.org/dumps/cawiki/20200801/) downloaded on 18-08-2020. The crawled corpora are: 6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains 7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government 8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the [Catalan News Agency](https://www.acn.cat/) To obtain a high-quality training corpus, each corpus have preprocessed with a pipeline of operations, including among the others, sentence splitting, language detection, filtering of bad-formed sentences and deduplication of repetitive contents. During the process, we keep document boundaries are kept. Finally, the corpora are concatenated and further global deduplication among the corpora is applied. The final training corpus consists of about 1,8B tokens. 
### Tokenization and pretraining The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM. ## Evaluation ### CLUB benchmark The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), that has been created along with the model. It contains the following tasks and their related datasets: 1. Part-of-Speech Tagging (POS) Catalan-Ancora: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known Ancora corpus 2. Named Entity Recognition (NER) **[AnCora Catalan 2.0.0](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [Ancora](https://doi.org/10.5281/zenodo.4762030) version, filtering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format 3. Text Classification (TC) **[TeCla](https://doi.org/10.5281/zenodo.4627197)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus 4. Semantic Textual Similarity (STS) **[Catalan semantic textual similarity](https://doi.org/10.5281/zenodo.4529183)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them, scraped from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) 5. Question Answering (QA): **[ViquiQuAD](https://doi.org/10.5281/zenodo.4562344)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan. 
**[XQuAD](https://doi.org/10.5281/zenodo.4526223)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia, used only as a _test set_. Here are the train/dev/test splits of the datasets: | Task (Dataset) | Total | Train | Dev | Test | |:--|:--|:--|:--|:--| | NER (Ancora) |13,581 | 10,628 | 1,427 | 1,526 | | POS (Ancora)| 16,678 | 13,123 | 1,709 | 1,846 | | STS | 3,073 | 2,073 | 500 | 500 | | TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786| | QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 | _The fine-tuning on downstream tasks has been performed with the HuggingFace [**Transformers**](https://github.com/huggingface/transformers) library._ ### Results Below are the evaluation results on the CLUB tasks, compared with the multilingual mBERT and XLM-RoBERTa models and the Catalan WikiBERT-ca model: | Task | NER (F1) | POS (F1) | STS (Pearson) | TC (accuracy) | QA (ViquiQuAD) (F1/EM) | QA (XQuAD) (F1/EM) | | ------------|:-------------:| -----:|:------|:-------|:------|:----| | BERTa | **88.13** | **98.97** | **79.73** | **74.16** | **86.97/72.29** | **68.89/48.87** | | mBERT | 86.38 | 98.82 | 76.34 | 70.56 | 86.97/72.22 | 67.15/46.51 | | XLM-RoBERTa | 87.66 | 98.89 | 75.40 | 71.68 | 85.50/70.47 | 67.10/46.42 | | WikiBERT-ca | 77.66 | 97.60 | 77.18 | 73.22 | 85.45/70.75 | 65.21/36.60 | ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our latest paper: ```bibtex @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": "ca", "license": "apache-2.0", "tags": ["masked-lm", "BERTa", "catalan"], "widget": [{"text": "El Catal\u00e0 \u00e9s una llengua molt <mask>."}, {"text": "Salvador Dal\u00ed va viure a <mask>."}, {"text": "La Costa Brava t\u00e9 les millors <mask> d'Espanya."}, {"text": "El cacaolat \u00e9s un batut de <mask>."}, {"text": "<mask> \u00e9s la capital de la Garrotxa."}, {"text": "Vaig al <mask> a buscar bolets."}, {"text": "Antoni Gaud\u00ed vas ser un <mask> molt important per la ciutat."}, {"text": "Catalunya \u00e9s una refer\u00e8ncia en <mask> a nivell europeu."}]}
PlanTL-GOB-ES/roberta-base-ca
null
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "BERTa", "catalan", "ca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ca" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #BERTa #catalan #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
BERTa: RoBERTa-based Catalan language model =========================================== Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training * Evaluation * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citing information + Disclaimer Model description ----------------- BERTa is a transformer-based masked language model for the Catalan language. It is based on the RoBERTA base model and has been trained on a medium-size corpus collected from publicly available corpora and crawlers. This model was originally published as bsc/roberta-base-ca-cased. Intended uses and limitations ----------------------------- The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition. How to use ---------- ### Load model and tokenizer ### Fill Mask task Below, an example of how to use the masked language modelling task with a pipeline. Limitations and bias -------------------- Training -------- ### Training corpora and preprocessing The training corpus consists of several corpora gathered from web crawling and public corpora. The publicly available corpora are: 1. the Catalan part of the DOGC corpus, a set of documents from the Official Gazette of the Catalan Government 2. the Catalan Open Subtitles, a collection of translated movie subtitles 3. the non-shuffled version of the Catalan part of the OSCAR corpus \\cite{suarez2019asynchronous}, a collection of monolingual corpora, filtered from Common Crawl 4. The CaWac corpus, a web corpus of Catalan built from the .cat top-level-domain in late 2013 the non-deduplicated version 5. the Catalan Wikipedia articles downloaded on 18-08-2020. The crawled corpora are: 6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains 7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government 8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency To obtain a high-quality training corpus, each corpus have preprocessed with a pipeline of operations, including among the others, sentence splitting, language detection, filtering of bad-formed sentences and deduplication of repetitive contents. During the process, we keep document boundaries are kept. Finally, the corpora are concatenated and further global deduplication among the corpora is applied. The final training corpus consists of about 1,8B tokens. ### Tokenization and pretraining The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 52,000 tokens. The BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM. Evaluation ---------- ### CLUB benchmark The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), that has been created along with the model. It contains the following tasks and their related datasets: 1. 
Part-of-Speech Tagging (POS) Catalan-Ancora: from the Universal Dependencies treebank of the well-known Ancora corpus 2. Named Entity Recognition (NER) AnCora Catalan 2.0.0: extracted named entities from the original Ancora version, filtering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format 3. Text Classification (TC) TeCla: consisting of 137k news pieces from the Catalan News Agency (ACN) corpus 4. Semantic Textual Similarity (STS) Catalan semantic textual similarity: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them, scraped from the Catalan Textual Corpus 5. Question Answering (QA): ViquiQuAD: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan. XQuAD: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a *test set* Here are the train/dev/test splits of the datasets: *The fine-tuning on downstream tasks have been performed with the HuggingFace Transformers library* ### Results Below the evaluation results on the CLUB tasks compared with the multilingual mBERT, XLM-RoBERTa models and the Catalan WikiBERT-ca model Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our latest paper: ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Load model and tokenizer", "### Fill Mask task\n\n\nBelow, an example of how to use the masked language modelling task with a pipeline.\n\n\nLimitations and bias\n--------------------\n\n\nTraining\n--------", "### Training corpora and preprocessing\n\n\nThe training corpus consists of several corpora gathered from web crawling and public corpora.\n\n\nThe publicly available corpora are:\n\n\n1. the Catalan part of the DOGC corpus, a set of documents from the Official Gazette of the Catalan Government\n2. the Catalan Open Subtitles, a collection of translated movie subtitles\n3. the non-shuffled version of the Catalan part of the OSCAR corpus \\\\cite{suarez2019asynchronous},\na collection of monolingual corpora, filtered from Common Crawl\n4. The CaWac corpus, a web corpus of Catalan built from the .cat top-level-domain in late 2013\nthe non-deduplicated version\n5. the Catalan Wikipedia articles downloaded on 18-08-2020.\n\n\nThe crawled corpora are:\n\n\n6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains\n7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government\n8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency\n\n\nTo obtain a high-quality training corpus, each corpus have preprocessed with a pipeline of operations, including among the others,\nsentence splitting, language detection, filtering of bad-formed sentences and deduplication of repetitive contents.\nDuring the process, we keep document boundaries are kept.\nFinally, the corpora are concatenated and further global deduplication among the corpora is applied.\nThe final training corpus consists of about 1,8B tokens.", "### Tokenization and pretraining\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE)\nused in the original RoBERTA model with a vocabulary size of 52,000 tokens.\n\n\nThe BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model\nwith the same hyperparameters as in the original work.\n\n\nThe training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM.\n\n\nEvaluation\n----------", "### CLUB benchmark\n\n\nThe BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB),\nthat has been created along with the model.\n\n\nIt contains the following tasks and their related datasets:\n\n\n1. Part-of-Speech Tagging (POS)\n\n\nCatalan-Ancora: from the Universal Dependencies treebank of the well-known Ancora corpus\n2. Named Entity Recognition (NER)\n\n\nAnCora Catalan 2.0.0: extracted named entities from the original Ancora version,\nfiltering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format\n3. Text Classification (TC)\n\n\nTeCla: consisting of 137k news pieces from the Catalan News Agency (ACN) corpus\n4. Semantic Textual Similarity (STS)\n\n\nCatalan semantic textual similarity: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them,\nscraped from the Catalan Textual Corpus\n5. 
Question Answering (QA):\n\n\nViquiQuAD: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan.\n\n\nXQuAD: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a *test set*\n\n\nHere are the train/dev/test splits of the datasets:\n\n\n\n*The fine-tuning on downstream tasks have been performed with the HuggingFace Transformers library*", "### Results\n\n\nBelow the evaluation results on the CLUB tasks compared with the multilingual mBERT, XLM-RoBERTa models and\nthe Catalan WikiBERT-ca model\n\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our latest paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #BERTa #catalan #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Load model and tokenizer", "### Fill Mask task\n\n\nBelow, an example of how to use the masked language modelling task with a pipeline.\n\n\nLimitations and bias\n--------------------\n\n\nTraining\n--------", "### Training corpora and preprocessing\n\n\nThe training corpus consists of several corpora gathered from web crawling and public corpora.\n\n\nThe publicly available corpora are:\n\n\n1. the Catalan part of the DOGC corpus, a set of documents from the Official Gazette of the Catalan Government\n2. the Catalan Open Subtitles, a collection of translated movie subtitles\n3. the non-shuffled version of the Catalan part of the OSCAR corpus \\\\cite{suarez2019asynchronous},\na collection of monolingual corpora, filtered from Common Crawl\n4. The CaWac corpus, a web corpus of Catalan built from the .cat top-level-domain in late 2013\nthe non-deduplicated version\n5. the Catalan Wikipedia articles downloaded on 18-08-2020.\n\n\nThe crawled corpora are:\n\n\n6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains\n7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government\n8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency\n\n\nTo obtain a high-quality training corpus, each corpus have preprocessed with a pipeline of operations, including among the others,\nsentence splitting, language detection, filtering of bad-formed sentences and deduplication of repetitive contents.\nDuring the process, we keep document boundaries are kept.\nFinally, the corpora are concatenated and further global deduplication among the corpora is applied.\nThe final training corpus consists of about 1,8B tokens.", "### Tokenization and pretraining\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE)\nused in the original RoBERTA model with a vocabulary size of 52,000 tokens.\n\n\nThe BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model\nwith the same hyperparameters as in the original work.\n\n\nThe training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM.\n\n\nEvaluation\n----------", "### CLUB benchmark\n\n\nThe BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB),\nthat has been created along with the model.\n\n\nIt contains the following tasks and their related datasets:\n\n\n1. Part-of-Speech Tagging (POS)\n\n\nCatalan-Ancora: from the Universal Dependencies treebank of the well-known Ancora corpus\n2. Named Entity Recognition (NER)\n\n\nAnCora Catalan 2.0.0: extracted named entities from the original Ancora version,\nfiltering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format\n3. Text Classification (TC)\n\n\nTeCla: consisting of 137k news pieces from the Catalan News Agency (ACN) corpus\n4. Semantic Textual Similarity (STS)\n\n\nCatalan semantic textual similarity: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them,\nscraped from the Catalan Textual Corpus\n5. 
Question Answering (QA):\n\n\nViquiQuAD: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan.\n\n\nXQuAD: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a *test set*\n\n\nHere are the train/dev/test splits of the datasets:\n\n\n\n*The fine-tuning on downstream tasks have been performed with the HuggingFace Transformers library*", "### Results\n\n\nBelow the evaluation results on the CLUB tasks compared with the multilingual mBERT, XLM-RoBERTa models and\nthe Catalan WikiBERT-ca model\n\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our latest paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 48, 8, 55, 332, 116, 319, 59, 28, 40, 24, 12, 33, 17, 445 ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #BERTa #catalan #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Load model and tokenizer### Fill Mask task\n\n\nBelow, an example of how to use the masked language modelling task with a pipeline.\n\n\nLimitations and bias\n--------------------\n\n\nTraining\n--------### Training corpora and preprocessing\n\n\nThe training corpus consists of several corpora gathered from web crawling and public corpora.\n\n\nThe publicly available corpora are:\n\n\n1. the Catalan part of the DOGC corpus, a set of documents from the Official Gazette of the Catalan Government\n2. the Catalan Open Subtitles, a collection of translated movie subtitles\n3. the non-shuffled version of the Catalan part of the OSCAR corpus \\\\cite{suarez2019asynchronous},\na collection of monolingual corpora, filtered from Common Crawl\n4. The CaWac corpus, a web corpus of Catalan built from the .cat top-level-domain in late 2013\nthe non-deduplicated version\n5. the Catalan Wikipedia articles downloaded on 18-08-2020.\n\n\nThe crawled corpora are:\n\n\n6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains\n7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government\n8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency\n\n\nTo obtain a high-quality training corpus, each corpus have preprocessed with a pipeline of operations, including among the others,\nsentence splitting, language detection, filtering of bad-formed sentences and deduplication of repetitive contents.\nDuring the process, we keep document boundaries are kept.\nFinally, the corpora are concatenated and further global deduplication among the corpora is applied.\nThe final training corpus consists of about 1,8B tokens.### Tokenization and pretraining\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE)\nused in the original RoBERTA model with a vocabulary size of 52,000 tokens.\n\n\nThe BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model\nwith the same hyperparameters as in the original work.\n\n\nThe training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM.\n\n\nEvaluation\n----------### CLUB benchmark\n\n\nThe BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB),\nthat has been created along with the model.\n\n\nIt contains the following tasks and their related datasets:\n\n\n1. Part-of-Speech Tagging (POS)\n\n\nCatalan-Ancora: from the Universal Dependencies treebank of the well-known Ancora corpus\n2. Named Entity Recognition (NER)\n\n\nAnCora Catalan 2.0.0: extracted named entities from the original Ancora version,\nfiltering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format\n3. Text Classification (TC)\n\n\nTeCla: consisting of 137k news pieces from the Catalan News Agency (ACN) corpus\n4. Semantic Textual Similarity (STS)\n\n\nCatalan semantic textual similarity: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them,\nscraped from the Catalan Textual Corpus\n5. 
Question Answering (QA):\n\n\nViquiQuAD: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan.\n\n\nXQuAD: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a *test set*\n\n\nHere are the train/dev/test splits of the datasets:\n\n\n\n*The fine-tuning on downstream tasks have been performed with the HuggingFace Transformers library*### Results\n\n\nBelow the evaluation results on the CLUB tasks compared with the multilingual mBERT, XLM-RoBERTa models and\nthe Catalan WikiBERT-ca model\n\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.### Citing information\n\n\nIf you use this model, please cite our latest paper:### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
token-classification
transformers
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-large-bne-capitel-ner** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations **roberta-large-bne-capitel-ner** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases. ## How to use ```python from transformers import pipeline from pprint import pprint nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-large-bne-capitel-ner") example = "Me llamo Francisco Javier y vivo en Madrid." ner_results = nlp(example) pprint(ner_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). ### Training procedure The model was trained with a batch size of 32 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1 score. ## Evaluation results We evaluated the **roberta-large-bne-capitel-ner** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: | Model | CAPITEL-NERC (F1) | | ------------|:----| | roberta-large-bne-capitel-ner | **90.51** | | roberta-base-bne-capitel-ner | 89.60| | BETO | 87.72 | | mBERT | 88.10 | | BERTIN | 88.56 | | ELECTRA | 80.35 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). 
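The actual fine-tuning and evaluation scripts are the ones in the GitHub repository above; as a rough, hedged sketch of the recipe described in this card (batch size 32, learning rate 3e-5, 5 epochs, best checkpoint selected by F1 on the development set), a Transformers `Trainer` setup could look like the following. The tokenized CAPITEL splits, the `compute_f1` metric function and the `num_labels=9` default (B/I tags for four entity types plus O) are assumptions of the sketch, not artifacts shipped with the model.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)


def finetune_capitel_ner(train_dataset, eval_dataset, compute_f1, num_labels=9):
    """Hedged sketch of the fine-tuning procedure described above; the
    datasets and the seqeval-style `compute_f1` callable are supplied by the caller."""
    tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-large-bne")
    model = AutoModelForTokenClassification.from_pretrained(
        "PlanTL-GOB-ES/roberta-large-bne", num_labels=num_labels)

    args = TrainingArguments(
        output_dir="roberta-large-bne-capitel-ner",
        per_device_train_batch_size=32,   # batch size reported in this card
        learning_rate=3e-5,               # learning rate reported in this card
        num_train_epochs=5,               # epochs reported in this card
        evaluation_strategy="epoch",
        save_strategy="epoch",
        load_best_model_at_end=True,      # keep the best checkpoint on the dev set
        metric_for_best_model="f1",
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,      # tokenized CAPITEL-NERC train split (assumed)
        eval_dataset=eval_dataset,        # tokenized CAPITEL-NERC dev split (assumed)
        tokenizer=tokenizer,
        compute_metrics=compute_f1,       # must return a dict containing "f1"
    )
    trainer.train()
    return trainer
```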
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "ner"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": ["Me llamo Francisco Javier y vivo en Madrid.", "Mi hermano Ram\u00f3n y su mejor amigo Luis trabajan en el BSC."], "model-index": [{"name": "roberta-large-bne-capiter-ner", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-NERC", "type": "ner"}, "metrics": [{"type": "f1", "value": 0.9051, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-large-bne-capitel-ner
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. ================================================================================================== Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training * Training + Training data + Training procedure * Evaluation * Evaluation + Variable and metrics + Evaluation results * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citing information + Disclaimer Model description ----------------- The roberta-large-bne-capitel-ner is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the roberta-large-bne model, a RoBERTa large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ----------------------------- roberta-large-bne-capitel-ner model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases. How to use ---------- Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training -------- The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). ### Training procedure The model was trained with a batch size of 32 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. Evaluation ---------- ### Variable and metrics This model was finetuned maximizing F1 score. Evaluation results ------------------ We evaluated the roberta-large-bne-capitel-ner on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: For more details, check the fine-tuning and evaluation scripts in the official GitHub repository. Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. Citing information ------------------ If you use this model, please cite our paper: ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training procedure\n\n\nThe model was trained with a batch size of 32 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-capitel-ner on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\nCiting information\n------------------\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training procedure\n\n\nThe model was trained with a batch size of 32 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-capitel-ner on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\nCiting information\n------------------\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 82, 65, 120, 28, 40, 24, 12, 64, 445 ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #ner #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training procedure\n\n\nThe model was trained with a batch size of 32 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-capitel-ner on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\nCiting information\n------------------\n\n\nIf you use this model, please cite our paper:### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
token-classification
transformers
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset

## Table of contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Variable and metrics](#variable-and-metrics)
  - [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)

</details>

## Model description
The **roberta-large-bne-capitel-pos** is a Part-of-Speech (POS) tagging model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## Intended uses and limitations
The **roberta-large-bne-capitel-pos** model can be used for Part-of-Speech (POS) tagging of Spanish text. The model is limited by its training dataset and may not generalize well for all use cases.

## How to use
Here is how to use this model:

```python
from transformers import pipeline
from pprint import pprint

nlp = pipeline("token-classification", model="PlanTL-GOB-ES/roberta-large-bne-capitel-pos")
example = "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."

pos_results = nlp(example)
pprint(pos_results)
```

## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training data
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).

### Training procedure
The model was trained with a batch size of 16 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric on the corresponding development set and evaluated it on the test set.

## Evaluation

### Variable and metrics
This model was fine-tuned maximizing the F1 score.

### Evaluation results
We evaluated the **roberta-large-bne-capitel-pos** on the CAPITEL-POS test set against standard multilingual and monolingual baselines:

| Model | CAPITEL-POS (F1) |
| ------------------------------|:-----|
| roberta-large-bne-capitel-pos | **98.56** |
| roberta-base-bne-capitel-pos | 98.46 |
| BETO | 98.36 |
| mBERT | 98.39 |
| BERTIN | 98.47 |
| ELECTRA | 98.16 |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
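As an illustration of the training procedure described above, the following sketch shows how a comparable token-classification fine-tuning run can be set up with the `transformers` `Trainer`, reusing the reported hyperparameters (batch size 16, learning rate 3e-5, 5 epochs). It is a minimal example rather than the official recipe: the one-sentence in-memory dataset, the reduced tag set and the output directory name are placeholders invented for illustration, the authoritative fine-tuning scripts are the ones in the repository linked above, and the original run additionally selected the best checkpoint on the CAPITEL development set.

```python
from datasets import Dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Reduced, illustrative tag set; the real CAPITEL data uses a richer UD-style tag inventory.
label_list = ["ADJ", "ADP", "ADV", "DET", "NOUN", "PROPN", "PUNCT", "VERB"]
label2id = {label: i for i, label in enumerate(label_list)}
id2label = {i: label for label, i in label2id.items()}

# Toy in-memory dataset standing in for the CAPITEL sub-task 2 training split.
train_ds = Dataset.from_dict({
    "tokens": [["El", "alcalde", "de", "Vigo", "coloca", "las", "luces", "."]],
    "pos_tags": [[label2id[t] for t in ["DET", "NOUN", "ADP", "PROPN", "VERB", "DET", "NOUN", "PUNCT"]]],
})

base_model = "PlanTL-GOB-ES/roberta-large-bne"
tokenizer = AutoTokenizer.from_pretrained(base_model, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    base_model, num_labels=len(label_list), id2label=id2label, label2id=label2id
)

def tokenize_and_align(batch):
    # Tokenize pre-split words and copy each word's tag to its first sub-token only;
    # remaining sub-tokens and special tokens get -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    aligned = []
    for i, tags in enumerate(batch["pos_tags"]):
        labels, previous = [], None
        for word_id in enc.word_ids(batch_index=i):
            if word_id is None or word_id == previous:
                labels.append(-100)
            else:
                labels.append(tags[word_id])
            previous = word_id
        aligned.append(labels)
    enc["labels"] = aligned
    return enc

train_tok = train_ds.map(tokenize_and_align, batched=True, remove_columns=train_ds.column_names)

args = TrainingArguments(
    output_dir="roberta-large-bne-capitel-pos-sketch",  # placeholder name
    per_device_train_batch_size=16,                     # batch size reported above
    learning_rate=3e-5,                                 # learning rate reported above
    num_train_epochs=5,                                 # epochs reported above
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_tok,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```

In practice, checkpoint selection would be added by passing an `eval_dataset` and a token-level F1 metric, mirroring the evaluation reported in the table above.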
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "pos"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": [{"text": "Festival de San Sebasti\u00e1n: Johnny Depp recibir\u00e1 el premio Donostia en pleno rifirrafe judicial con Amber Heard"}, {"text": "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."}, {"text": "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."}], "model-index": [{"name": "roberta-large-bne-capitel-pos", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-POS", "type": "pos"}, "metrics": [{"type": "f1", "value": 0.986, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-large-bne-capitel-pos
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "pos", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #pos #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset ======================================================================================= Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training * Training + Training data + Training procedure * Evaluation * Evaluation + Variable and metrics + Evaluation results * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citing information + Disclaimer Model description ----------------- The roberta-large-bne-capitel-pos is a Part-of-speech-tagging (POS) model for the Spanish language fine-tuned from the roberta-large-bne model, a RoBERTa large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ============================= roberta-large-bne-capitel-pos model can be used to Part-of-speech-tagging (POS) a text. The model is limited by its training dataset and may not generalize well for all use cases. How to use ---------- Here is how to use this model: Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training -------- The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 2). ### Training procedure The model was trained with a batch size of 16 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. Evaluation ---------- ### Variable and metrics This model was finetuned maximizing F1 score. Evaluation results ------------------ We evaluated the roberta-large-bne-capitel-pos on the CAPITEL-POS test set against standard multilingual and monolingual baselines: For more details, check the fine-tuning and evaluation scripts in the official GitHub repository. Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our paper: ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-capitel-pos on the CAPITEL-POS test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #pos #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------", "### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-capitel-pos on the CAPITEL-POS test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 82, 65, 120, 28, 40, 24, 12, 33, 16, 445 ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #national library of spain #spanish #bne #capitel #pos #es #dataset-bne #dataset-capitel #arxiv-1907.11692 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation\n----------### Variable and metrics\n\n\nThis model was finetuned maximizing F1 score.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-capitel-pos on the CAPITEL-POS test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.### Citing information\n\n\nIf you use this model, please cite our paper:### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
question-answering
transformers
# Spanish RoBERTa-large trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset

## Table of contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)

</details>

## Model description
The **roberta-large-bne-sqac** is a Question Answering (QA) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## Intended uses and limitations
The **roberta-large-bne-sqac** model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases.

## How to use
```python
from transformers import pipeline
nlp = pipeline("question-answering", model="PlanTL-GOB-ES/roberta-large-bne-sqac")
text = "¿Dónde vivo?"
context = "Me llamo Wolfgang y vivo en Berlin"
qa_results = nlp(text, context)
print(qa_results)
```

## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training data
We used the QA dataset in Spanish called [SQAC corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC) for training and evaluation.

### Training procedure
The model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric on the corresponding development set and evaluated it on the test set.

## Evaluation results
We evaluated the **roberta-large-bne-sqac** on the SQAC test set against standard multilingual and monolingual baselines:

| Model | SQAC (F1) |
| ------------------------|:------|
| roberta-large-bne-sqac | **82.02** |
| roberta-base-bne-sqac | 79.23 |
| BETO | 79.23 |
| mBERT | 75.62 |
| BERTIN | 76.78 |
| ELECTRA | 73.83 |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
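To make the "extractive" part of the task concrete, the following minimal sketch reproduces the pipeline prediction above directly from the model's start- and end-logit heads. The greedy argmax span selection is a simplification of the pipeline's actual post-processing and is shown here only for illustration.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "PlanTL-GOB-ES/roberta-large-bne-sqac"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "¿Dónde vivo?"
context = "Me llamo Wolfgang y vivo en Berlin"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Every token gets a score as a potential start and end of the answer span;
# the simplest decoding picks the highest-scoring start and end independently.
start_index = int(torch.argmax(outputs.start_logits))
end_index = int(torch.argmax(outputs.end_logits))

answer_ids = inputs["input_ids"][0, start_index : end_index + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```

The pipeline additionally restricts candidate spans to the context, handles long contexts with a sliding window and returns a confidence score, so it remains the recommended entry point in practice.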
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to <plantl-gob-es@bsc.es> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "qa", "question answering"], "datasets": ["PlanTL-GOB-ES/SQAC"], "metrics": ["f1", "exact match"], "model-index": [{"name": "roberta-large-bne-sqac", "results": [{"task": {"type": "question-answering"}, "dataset": {"name": "SQAC", "type": "PlanTL-GOB-ES/SQAC"}, "metrics": [{"type": "f1", "value": 0.8202, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-large-bne-sqac
null
[ "transformers", "pytorch", "roberta", "question-answering", "national library of spain", "spanish", "bne", "qa", "question answering", "es", "dataset:PlanTL-GOB-ES/SQAC", "arxiv:1907.11692", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #question-answering #national library of spain #spanish #bne #qa #question answering #es #dataset-PlanTL-GOB-ES/SQAC #arxiv-1907.11692 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
Spanish RoBERTa-large trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset. ==================================================================================================== Table of contents ----------------- Click to expand * Model description * Intended uses and limitations * How to use * Limitations and bias * Training * Training + Training data + Training procedure * Evaluation * Evaluation + Variable and metrics + Evaluation results * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citing information + Disclaimer Model description ----------------- The roberta-large-bne-sqac is a Question Answering (QA) model for the Spanish language fine-tuned from the roberta-large-bne model, a RoBERTa large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ----------------------------- roberta-large-bne-sqac model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases. How to use ---------- Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training -------- ### Training data We used the QA dataset in Spanish called SQAC corpus for training and evaluation. ### Training procedure The model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. Evaluation results ------------------ We evaluated the roberta-large-bne-sqac on the SQAC test set against standard multilingual and monolingual baselines: For more details, check the fine-tuning and evaluation scripts in the official GitHub repository. Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our paper: ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training data\n\n\nWe used the QA dataset in Spanish called SQAC corpus for training and evaluation.", "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-sqac on the SQAC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #question-answering #national library of spain #spanish #bne #qa #question answering #es #dataset-PlanTL-GOB-ES/SQAC #arxiv-1907.11692 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "### Training data\n\n\nWe used the QA dataset in Spanish called SQAC corpus for training and evaluation.", "### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-sqac on the SQAC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nApache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.", "### Citing information\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 81, 23, 148, 28, 40, 24, 12, 33, 16, 445 ]
[ "TAGS\n#transformers #pytorch #roberta #question-answering #national library of spain #spanish #bne #qa #question answering #es #dataset-PlanTL-GOB-ES/SQAC #arxiv-1907.11692 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n### Training data\n\n\nWe used the QA dataset in Spanish called SQAC corpus for training and evaluation.### Training procedure\n\n\nThe model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.\n\n\nEvaluation results\n------------------\n\n\nWe evaluated the roberta-large-bne-sqac on the SQAC test set against standard multilingual and monolingual baselines:\n\n\n\nFor more details, check the fine-tuning and evaluation scripts in the official GitHub repository.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nApache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.### Citing information\n\n\nIf you use this model, please cite our paper:### Disclaimer\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
fill-mask
transformers
# RoBERTa large trained with data from the National Library of Spain (BNE) ## Table of Contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation Information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** roberta-large - **Language:** Spanish - **Task:** fill-mask - **Data:** BNE ## Model description The **roberta-large-bne** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations The **roberta-large-bne** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. ## How to use Here is how to use this model: ```python >>> from transformers import pipeline >>> from pprint import pprint >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-large-bne') >>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje.")) [{'score': 0.0664491355419159, 'sequence': ' Gracias a los datos de la BNE se ha podido conocer este modelo del lenguaje.', 'token': 1910, 'token_str': ' conocer'}, {'score': 0.0492338091135025, 'sequence': ' Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.', 'token': 2178, 'token_str': ' realizar'}, {'score': 0.03890657424926758, 'sequence': ' Gracias a los datos de la BNE se ha podido reconstruir este modelo del lenguaje.', 'token': 23368, 'token_str': ' reconstruir'}, {'score': 0.03662774711847305, 'sequence': ' Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.', 'token': 3815, 'token_str': ' desarrollar'}, {'score': 0.030557377263903618, 'sequence': ' Gracias a los datos de la BNE se ha podido estudiar este modelo del lenguaje.', 'token': 6361, 'token_str': ' estudiar'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import RobertaTokenizer, RobertaModel >>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-large-bne') >>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/roberta-large-bne') >>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje." 
>>> encoded_input = tokenizer(text, return_tensors='pt')
>>> output = model(**encoded_input)
>>> print(output.last_hidden_state.shape)
torch.Size([1, 19, 1024])
```

## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training data
The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the raw data was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of malformed sentences, and deduplication of repetitive content. Document boundaries were kept during the process. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus was then applied, resulting in 570GB of text.

Some of the statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE | 201,080,084 | 135,733,450,668 | 570GB |

### Training procedure
The training corpus was tokenized using the byte-level version of Byte-Pair Encoding (BPE) used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens.

The **roberta-large-bne** pre-training consists of masked language model training following the approach employed for RoBERTa large. The training lasted a total of 96 hours on 32 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM.

## Evaluation
When fine-tuned on downstream tasks, this model achieves the following results:

| Dataset | Metric | [**RoBERTa-large**](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) |
|--------------|----------|------------|
| MLDoc | F1 | 0.9702 |
| CoNLL-NERC | F1 | 0.8823 |
| CAPITEL-NERC | F1 | 0.9051 |
| PAWS-X | F1 | 0.9150 |
| UD-POS | F1 | 0.9904 |
| CAPITEL-POS | F1 | 0.9856 |
| SQAC | F1 | 0.8202 |
| STS | Combined | 0.8411 |
| XNLI | Accuracy | 0.8263 |

For more evaluation details, visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish) or [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405).

## Additional information

### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>

### Copyright
Copyright by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) (2022)

### Licensing information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) within the framework of the Plan-TL.
### Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
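As an illustrative complement to the training-procedure and evaluation sections above, the short sketch below inspects the released tokenizer and reproduces a fill-mask prediction without the pipeline wrapper. It is a minimal example that only assumes the public model name used throughout this card.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "PlanTL-GOB-ES/roberta-large-bne"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

# The training-procedure section reports a byte-level BPE vocabulary of 50,262 tokens.
print(len(tokenizer))

# Score candidate fillers for one masked token, as the fill-mask pipeline does.
text = f"Gracias a los datos de la BNE se ha podido {tokenizer.mask_token} este modelo del lenguaje."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
probabilities = logits[0, mask_positions].softmax(dim=-1)
top5 = torch.topk(probabilities, k=5)
for score, token_id in zip(top5.values[0], top5.indices[0]):
    print(f"{tokenizer.decode(int(token_id)).strip():<15} {score.item():.4f}")
```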
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "roberta-large-bne"], "datasets": ["bne"], "metrics": ["ppl"], "widget": [{"text": "Por la ventanilla del coche vi la Giralda y pens\u00e9 que bonita que es la ciudad de <mask>."}, {"text": "M\u00e1s vale <mask> que lamentar."}, {"text": "Caminante no hay camino, se hace camino al <mask>."}, {"text": "Tengo una pelota roja y otra amarilla. Si le doy la roja a Jose, s\u00f3lo me queda la <mask>."}, {"text": "Tengo una pelota roja y otra amarilla. Si le doy la amarilla a Jose, s\u00f3lo me queda la <mask>."}, {"text": "El <mask> es el pico m\u00e1s alto de Espa\u00f1a."}]}
PlanTL-GOB-ES/roberta-large-bne
null
[ "transformers", "pytorch", "roberta", "fill-mask", "national library of spain", "spanish", "bne", "roberta-large-bne", "es", "dataset:bne", "arxiv:1907.11692", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "es" ]
TAGS #transformers #pytorch #roberta #fill-mask #national library of spain #spanish #bne #roberta-large-bne #es #dataset-bne #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
RoBERTa large trained with data from the National Library of Spain (BNE) ======================================================================== Table of Contents ----------------- Click to expand * Overview * Model description * Intended uses and limitations * How to use * Limitations and bias * Training + Training data + Training procedure * Evaluation * Additional information + Author + Contact information + Copyright + Licensing information + Funding + Citation Information + Disclaimer Overview -------- * Architecture: roberta-large * Language: Spanish * Task: fill-mask * Data: BNE Model description ----------------- The roberta-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Intended uses and limitations ----------------------------- The roberta-large-bne model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. How to use ---------- Here is how to use this model: Here is how to use this model to get the features of a given text in PyTorch: Limitations and bias -------------------- At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Training -------- ### Training data The National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: ### Training procedure The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens. The roberta-large-bne pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa large. The training lasted a total of 96 hours with 32 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM. 
Evaluation ---------- When fine-tuned on downstream tasks, this model achieves the following results: Dataset: MLDoc, Metric: F1, RoBERTa-large: 0.9702 Dataset: CoNLL-NERC, Metric: F1, RoBERTa-large: 0.8823 Dataset: CAPITEL-NERC, Metric: F1, RoBERTa-large: 0.9051 Dataset: PAWS-X, Metric: F1, RoBERTa-large: 0.9150 Dataset: UD-POS, Metric: F1, RoBERTa-large: 0.9904 Dataset: CAPITEL-POS, Metric: F1, RoBERTa-large: 0.9856 Dataset: SQAC, Metric: F1, RoBERTa-large: 0.8202 Dataset: STS, Metric: Combined, RoBERTa-large: 0.8411 Dataset: XNLI, Metric: Accuracy, RoBERTa-large: 0.8263 For more evaluation details visit our GitHub repository or paper. Additional information ---------------------- ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) ### Contact information For further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL) ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information This work is licensed under a Apache License, Version 2.0 ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. information If you use this model, please cite our paper: ### Disclaimer Click to expand The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "### Training data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:", "### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe roberta-large-bne pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa large. The training lasted a total of 96 hours with 32 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nDataset: MLDoc, Metric: F1, RoBERTa-large: 0.9702\nDataset: CoNLL-NERC, Metric: F1, RoBERTa-large: 0.8823\nDataset: CAPITEL-NERC, Metric: F1, RoBERTa-large: 0.9051\nDataset: PAWS-X, Metric: F1, RoBERTa-large: 0.9150\nDataset: UD-POS, Metric: F1, RoBERTa-large: 0.9904\nDataset: CAPITEL-POS, Metric: F1, RoBERTa-large: 0.9856\nDataset: SQAC, Metric: F1, RoBERTa-large: 0.8202\nDataset: STS, Metric: Combined, RoBERTa-large: 0.8411\nDataset: XNLI, Metric: Accuracy, RoBERTa-large: 0.8263\n\n\nFor more evaluation details visit our GitHub repository or paper.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. 
These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #national library of spain #spanish #bne #roberta-large-bne #es #dataset-bne #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:", "### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe roberta-large-bne pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa large. The training lasted a total of 96 hours with 32 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nDataset: MLDoc, Metric: F1, RoBERTa-large: 0.9702\nDataset: CoNLL-NERC, Metric: F1, RoBERTa-large: 0.8823\nDataset: CAPITEL-NERC, Metric: F1, RoBERTa-large: 0.9051\nDataset: PAWS-X, Metric: F1, RoBERTa-large: 0.9150\nDataset: UD-POS, Metric: F1, RoBERTa-large: 0.9904\nDataset: CAPITEL-POS, Metric: F1, RoBERTa-large: 0.9856\nDataset: SQAC, Metric: F1, RoBERTa-large: 0.8202\nDataset: STS, Metric: Combined, RoBERTa-large: 0.8411\nDataset: XNLI, Metric: Accuracy, RoBERTa-large: 0.8263\n\n\nFor more evaluation details visit our GitHub repository or paper.\n\n\nAdditional information\n----------------------", "### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)", "### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)", "### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)", "### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0", "### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\n\n\nIf you use this model, please cite our paper:", "### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. 
These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
[ 71, 147, 343, 28, 40, 24, 18, 45, 448 ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #national library of spain #spanish #bne #roberta-large-bne #es #dataset-bne #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training data\n\n\nThe National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.\n\n\nTo obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.\n\n\nSome of the statistics of the corpus:### Training procedure\n\n\nThe training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTA model with a vocabulary size of 50,262 tokens.\n\n\nThe roberta-large-bne pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa large. The training lasted a total of 96 hours with 32 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM.\n\n\nEvaluation\n----------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nDataset: MLDoc, Metric: F1, RoBERTa-large: 0.9702\nDataset: CoNLL-NERC, Metric: F1, RoBERTa-large: 0.8823\nDataset: CAPITEL-NERC, Metric: F1, RoBERTa-large: 0.9051\nDataset: PAWS-X, Metric: F1, RoBERTa-large: 0.9150\nDataset: UD-POS, Metric: F1, RoBERTa-large: 0.9904\nDataset: CAPITEL-POS, Metric: F1, RoBERTa-large: 0.9856\nDataset: SQAC, Metric: F1, RoBERTa-large: 0.8202\nDataset: STS, Metric: Combined, RoBERTa-large: 0.8411\nDataset: XNLI, Metric: Accuracy, RoBERTa-large: 0.8263\n\n\nFor more evaluation details visit our GitHub repository or paper.\n\n\nAdditional information\n----------------------### Author\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)### Contact information\n\n\nFor further information, send an email to [plantl-gob-es@URL](mailto:plantl-gob-es@URL)### Copyright\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)### Licensing information\n\n\nThis work is licensed under a Apache License, Version 2.0### Funding\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.\n\n\ninformation\n\n\nIf you use this model, please cite our paper:### Disclaimer\n\n\n\nClick to expand\nThe models published in this repository are intended for a generalist purpose and are available to third parties. 
These models may have bias and/or any other undesirable distortions.\n\n\nWhen third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.\n\n\nIn no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.\n\n\nLos modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.\n\n\nCuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.\n\n\nEn ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos." ]
text-generation
transformers
# Homer DialoGPT Model
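The card above only names the checkpoint. As an illustration, here is the standard DialoGPT-style chat loop, which we assume applies to this model because of its `gpt2` and `conversational` tags; the prompts are made up and the generated replies will vary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed DialoGPT-style usage; the prompts below are arbitrary examples.
tokenizer = AutoTokenizer.from_pretrained("Plencers/DialoGPT-small-homer")
model = AutoModelForCausalLM.from_pretrained("Plencers/DialoGPT-small-homer")

chat_history_ids = None
for user_input in ["Hi Homer!", "What is your favourite food?"]:
    # Append the end-of-sequence token so the model knows the user turn is over.
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("User:", user_input)
    print("Bot :", reply)
```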
{"tags": ["conversational"]}
Plencers/DialoGPT-small-homer
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#Homer DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
automatic-speech-recognition
transformers
## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 4.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.9827 | 0.29 | 1000 | inf | 0.2937 | | 1.0203 | 0.57 | 2000 | inf | 0.2711 | | 1.0048 | 0.86 | 3000 | inf | 0.2620 | | 0.9858 | 1.15 | 4000 | inf | 0.2522 | | 0.9709 | 1.43 | 5000 | inf | 0.2365 | | 0.9347 | 1.72 | 6000 | inf | 0.2332 | | 0.9256 | 2.01 | 7000 | inf | 0.2261 | | 0.8936 | 2.29 | 8000 | inf | 0.2203 | | 0.877 | 2.58 | 9000 | inf | 0.2096 | | 0.8393 | 2.87 | 10000 | inf | 0.2017 | | 0.8156 | 3.15 | 11000 | inf | 0.1936 | | 0.8015 | 3.44 | 12000 | inf | 0.1880 | | 0.774 | 3.73 | 13000 | inf | 0.1834 | It achieves its best result on the validation set at step 13000: - Wer: 0.1834 A problem occurred when computing the validation loss, which is why it appears as `inf` in the table above. ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8` with split `test` ```bash python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
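The second evaluation command above decodes long recordings in 5-second chunks with a 1-second stride. A minimal sketch of the same idea with the `transformers` ASR pipeline is shown below; `example.wav` is a placeholder path, and the chunk and stride values simply mirror the flags used above.

```python
from transformers import pipeline

# Chunked inference sketch for this fine-tuned wav2vec2 checkpoint.
# Reading audio files through the pipeline typically requires ffmpeg.
asr = pipeline("automatic-speech-recognition", model="Plim/test_lm")

# "example.wav" is a placeholder; chunk/stride mirror the evaluation flags above.
result = asr("example.wav", chunk_length_s=5.0, stride_length_s=1.0)
print(result["text"])
```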
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "model-index": [{"name": "XLS-R-1B - French", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 18.33, "name": "Test WER"}, {"type": "cer", "value": 5.6, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 60.25, "name": "Test WER"}, {"type": "cer", "value": 15.68, "name": "Test CER"}]}]}]}
Plim/test_lm
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "fr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #fr #license-apache-2.0 #model-index #endpoints_compatible #region-us
Model description ----------------- This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - FR dataset. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 4.0 * mixed\_precision\_training: Native AMP ### Training results It achieves the best result on the validation set on STEP 13000: * Wer: 0.1834 Some problem occurs when calculating the validation loss. ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3.dev0 * Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8' with split 'test' 2. To evaluate on 'speech-recognition-community-v2/dev\_data'
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 4.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nIt achieves the best result on the validation set on STEP 13000:\n\n\n* Wer: 0.1834\n\n\nSome problem occurs when calculating the validation loss.", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0", "### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #fr #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 4.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nIt achieves the best result on the validation set on STEP 13000:\n\n\n* Wer: 0.1834\n\n\nSome problem occurs when calculating the validation loss.", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0", "### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
[ 64, 155, 36, 50, 50 ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #fr #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 4.0\n* mixed\\_precision\\_training: Native AMP### Training results\n\n\n\nIt achieves the best result on the validation set on STEP 13000:\n\n\n* Wer: 0.1834\n\n\nSome problem occurs when calculating the validation loss.### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
automatic-speech-recognition
transformers
## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 6.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.9827 | 0.29 | 1000 | inf | 0.2937 | | 1.0203 | 0.57 | 2000 | inf | 0.2711 | | 1.0048 | 0.86 | 3000 | inf | 0.2620 | | 0.9858 | 1.15 | 4000 | inf | 0.2522 | | 0.9709 | 1.43 | 5000 | inf | 0.2365 | | 0.9347 | 1.72 | 6000 | inf | 0.2332 | | 0.9256 | 2.01 | 7000 | inf | 0.2261 | | 0.8936 | 2.29 | 8000 | inf | 0.2203 | | 0.877 | 2.58 | 9000 | inf | 0.2096 | | 0.8393 | 2.87 | 10000 | inf | 0.2017 | | 0.8156 | 3.15 | 11000 | inf | 0.1936 | | 0.8015 | 3.44 | 12000 | inf | 0.1880 | | 0.774 | 3.73 | 13000 | inf | 0.1834 | | 0.8372 | 4.01 | 14000 | inf | 0.1934 | | 0.8075 | 4.3 | 15000 | inf | 0.1923 | | 0.8069 | 4.59 | 16000 | inf | 0.1877 | | 0.8064 | 4.87 | 17000 | inf | 0.1955 | | 0.801 | 5.16 | 18000 | inf | 0.1891 | | 0.8022 | 5.45 | 19000 | inf | 0.1895 | | 0.792 | 5.73 | 20000 | inf | 0.1854 | It achieves its best result on the validation set at step 13000: - Wer: 0.1834 A problem occurred when computing the validation loss, which is why it appears as `inf` in the table above. ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8` with split `test` ```bash python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ### Evaluation Results Without LM: | Dataset | WER | CER | |:----------:|:-----:|:-----:| | TEST CV | 18.33 | 5.60 | | DEV audio | 31.33 | 13.20 | | TEST audio | / | / | With LM: | Dataset | WER | CER | |:----------:|:-----:|:-----:| | TEST CV | 15.40 | 5.36 | | DEV audio | 25.05 | 12.45 | | TEST audio | / | / |
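The results above are reported both with and without an external language model. As an illustration of the plain, no-LM path, here is a minimal greedy CTC decoding sketch; `clip.wav` is a placeholder for a 16 kHz mono recording (resample first if your audio uses a different rate).

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Greedy (no-LM) decoding sketch; "clip.wav" is a placeholder for 16 kHz mono audio.
processor = Wav2Vec2Processor.from_pretrained("Plim/xls-r-1b-cv_8-fr")
model = Wav2Vec2ForCTC.from_pretrained("Plim/xls-r-1b-cv_8-fr")

speech, sampling_rate = torchaudio.load("clip.wav")
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(input_values=inputs.input_values).logits

# Pick the most likely token at each frame and collapse repeats/blanks via the tokenizer.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```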
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-1B - French", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 15.4, "name": "Test WER (with LM)"}, {"type": "cer", "value": 5.36, "name": "Test CER (with LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 25.05, "name": "Test WER (with LM)"}, {"type": "cer", "value": 12.45, "name": "Test CER (with LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 27.1, "name": "Test WER"}]}]}]}
Plim/xls-r-1b-cv_8-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "fr", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
Model description ----------------- This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - FR dataset. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 6.0 * mixed\_precision\_training: Native AMP ### Training results It achieves the best result on the validation set on STEP 13000: * Wer: 0.1834 Some problem occurs when calculating the validation loss. ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3.dev0 * Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8' with split 'test' 2. To evaluate on 'speech-recognition-community-v2/dev\_data' ### Evaluation Results Without LM: With LM:
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 6.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nIt achieves the best result on the validation set on STEP 13000:\n\n\n* Wer: 0.1834\n\n\nSome problem occurs when calculating the validation loss.", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0", "### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'", "### Evaluation Results\n\n\nWithout LM:\n\n\n\nWith LM:" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 6.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nIt achieves the best result on the validation set on STEP 13000:\n\n\n* Wer: 0.1834\n\n\nSome problem occurs when calculating the validation loss.", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0", "### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'", "### Evaluation Results\n\n\nWithout LM:\n\n\n\nWith LM:" ]
[ 96, 155, 36, 50, 50, 13 ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 6.0\n* mixed\\_precision\\_training: Native AMP### Training results\n\n\n\nIt achieves the best result on the validation set on STEP 13000:\n\n\n* Wer: 0.1834\n\n\nSome problem occurs when calculating the validation loss.### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'### Evaluation Results\n\n\nWithout LM:\n\n\n\nWith LM:" ]
automatic-speech-recognition
transformers
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset. It achieves the following results on the evaluation set: - Loss: 0.2464 - Wer: 0.2220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0326 | 0.32 | 1000 | 0.3092 | 0.2718 | | 1.0828 | 0.65 | 2000 | 0.2843 | 0.2606 | | 1.0771 | 0.97 | 3000 | 0.2774 | 0.2488 | | 1.0306 | 1.3 | 4000 | 0.2588 | 0.2351 | | 1.0052 | 1.62 | 5000 | 0.2483 | 0.2284 | | 0.9865 | 1.94 | 6000 | 0.2464 | 0.2220 | | 0.978 | 2.27 | 7000 | 0.2514 | 0.2172 | | 1.7438 | 2.59 | 8000 | 0.7983 | 0.5072 | | 2.3309 | 2.92 | 9000 | 1.8917 | 0.9416 | | 2.1834 | 3.24 | 10000 | 1.7496 | 0.9030 | | 2.3047 | 3.56 | 11000 | 1.5377 | 0.8747 | | 2.1378 | 3.89 | 12000 | 1.3501 | 0.7923 | | 1.9812 | 4.21 | 13000 | 1.2662 | 0.7697 | | 2.6855 | 4.54 | 14000 | 2.4120 | 0.9902 | | 2.7482 | 4.86 | 15000 | 2.5341 | 0.9874 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
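For readers who want to reproduce a similar run, the hyperparameters listed above map fairly directly onto `transformers.TrainingArguments`. The sketch below only shows that mapping: the output directory is a placeholder, the model, datasets, and data collator are omitted, and a CUDA device is assumed for mixed-precision training.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above expressed as TrainingArguments.
# "./xls-r-1b-fr" is a placeholder output directory; Adam betas/epsilon are the defaults.
training_args = TrainingArguments(
    output_dir="./xls-r-1b-fr",
    learning_rate=7.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,  # 16 * 8 = 128 effective train batch size on one GPU
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=5.0,
    seed=42,
    fp16=True,  # mixed precision ("Native AMP"); requires a CUDA device
)
```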
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "model-index": [{"name": "", "results": []}]}
Plim/xls-r-1b-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "fr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #fr #license-apache-2.0 #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - FR dataset. It achieves the following results on the evaluation set: * Loss: 0.2464 * Wer: 0.2220 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #fr #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
[ 60, 155, 5, 50 ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #fr #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 (extended to 7.0 by resuming training from a checkpoint) - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.9114 | 0.29 | 1000 | inf | 0.9997 | | 1.2436 | 0.57 | 2000 | inf | 0.4310 | | 1.0552 | 0.86 | 3000 | inf | 0.3144 | | 1.0044 | 1.15 | 4000 | inf | 0.2814 | | 0.9718 | 1.43 | 5000 | inf | 0.2658 | | 0.9502 | 1.72 | 6000 | inf | 0.2566 | | 0.9418 | 2.01 | 7000 | inf | 0.2476 | | 0.9215 | 2.29 | 8000 | inf | 0.2420 | | 0.9236 | 2.58 | 9000 | inf | 0.2388 | | 0.9014 | 2.87 | 10000 | inf | 0.2354 | | 0.8814 | 3.15 | 11000 | inf | 0.2312 | | 0.8809 | 3.44 | 12000 | inf | 0.2285 | | 0.8717 | 3.73 | 13000 | inf | 0.2263 | | 0.8787 | 4.01 | 14000 | inf | 0.2218 | | 0.8567 | 4.3 | 15000 | inf | 0.2193 | | 0.8488 | 4.59 | 16000 | inf | 0.2187 | | 0.8359 | 4.87 | 17000 | inf | 0.2172 | Training continued from the checkpoint at step 17000: | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | / | 5.16 | 18000 | inf | 0.2176 | | / | 5.45 | 19000 | inf | 0.2181 | | / | 5.73 | 20000 | inf | 0.2155 | | / | 6.02 | 21000 | inf | 0.2140 | | / | 6.31 | 22000 | inf | 0.2124 | | / | 6.59 | 23000 | inf | 0.2117 | | / | 6.88 | 24000 | inf | 0.2116 | It achieves its best result on the validation set at step 24000: - Wer: 0.2116 A problem occurred when computing the validation loss, which is why it appears as `inf` in the tables above. ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8` with split `test` ```bash python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
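Checkpoint selection above is driven by word error rate rather than the broken validation loss. For reference, here is a small sketch of computing WER with the `evaluate` library; the prediction/reference pair is a toy example, not an output of this model.

```python
import evaluate

# WER sketch with the `evaluate` library (needs `pip install evaluate jiwer`).
wer_metric = evaluate.load("wer")

# Toy prediction/reference pair, just to show the call; in a real evaluation these
# would be the decoded transcripts and the Common Voice reference sentences.
wer = wer_metric.compute(
    predictions=["bonjour tout le monde"],
    references=["bonjour à tout le monde"],
)
print(f"WER: {wer:.2%}")
```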
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "model-index": [{"name": "XLS-R-300m - French", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": "to recompute with STEP 24000", "name": "Test WER"}, {"type": "cer", "value": "to recompute with STEP 24000", "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 35.29, "name": "Test WER"}, {"type": "cer", "value": 13.94, "name": "Test CER"}]}]}]}
Plim/xls-r-300m-cv_8-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "fr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #fr #license-apache-2.0 #model-index #endpoints_compatible #region-us
Model description ----------------- This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - FR dataset. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 5.0 (extended to 7.0 with training with checkpoint) * mixed\_precision\_training: Native AMP ### Training results Training continued with checkpoint from STEP 17000: It achieves the best result on the validation set on Step 24000: * Wer: 0.2116 Got some issue with validation loss calculation. ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3.dev0 * Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8' with split 'test' 2. To evaluate on 'speech-recognition-community-v2/dev\_data'
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0 (extended to 7.0 with training with checkpoint)\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nTraining continued with checkpoint from STEP 17000:\n\n\n\nIt achieves the best result on the validation set on Step 24000:\n\n\n* Wer: 0.2116\n\n\nGot some issue with validation loss calculation.", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0", "### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #fr #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0 (extended to 7.0 with training with checkpoint)\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nTraining continued with checkpoint from STEP 17000:\n\n\n\nIt achieves the best result on the validation set on Step 24000:\n\n\n* Wer: 0.2116\n\n\nGot some issue with validation loss calculation.", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0", "### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
[ 64, 166, 45, 50, 50 ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #fr #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0 (extended to 7.0 with training with checkpoint)\n* mixed\\_precision\\_training: Native AMP### Training results\n\n\n\nTraining continued with checkpoint from STEP 17000:\n\n\n\nIt achieves the best result on the validation set on Step 24000:\n\n\n* Wer: 0.2116\n\n\nGot some issue with validation loss calculation.### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
automatic-speech-recognition
transformers
## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.495 | 0.16 | 500 | 3.3883 | 1.0 | | 2.9095 | 0.32 | 1000 | 2.9152 | 1.0000 | | 1.8434 | 0.49 | 1500 | 1.0473 | 0.7446 | | 1.4298 | 0.65 | 2000 | 0.5729 | 0.5130 | | 1.1937 | 0.81 | 2500 | 0.3795 | 0.3450 | | 1.1248 | 0.97 | 3000 | 0.3321 | 0.3052 | | 1.0835 | 1.13 | 3500 | 0.3038 | 0.2805 | | 1.0479 | 1.3 | 4000 | 0.2910 | 0.2689 | | 1.0413 | 1.46 | 4500 | 0.2798 | 0.2593 | | 1.014 | 1.62 | 5000 | 0.2727 | 0.2512 | | 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 | | 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 | It achieves its best result on the validation set at step 6000: - Loss: 0.2619 - Wer: 0.2457 ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7` with split `test` ```bash python eval.py --model_id Plim/xls-r-300m-fr --dataset mozilla-foundation/common_voice_7_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-300m-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
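To try the model on actual Common Voice audio rather than a local file, something like the sketch below can be used. Note that the Common Voice datasets on the Hub are gated, so this assumes you have accepted their terms and are logged in, and the exact loading behaviour may differ across `datasets` versions.

```python
from datasets import Audio, load_dataset
from transformers import pipeline

# Hedged sketch: transcribe one French Common Voice 7.0 test clip with this model.
asr = pipeline("automatic-speech-recognition", model="Plim/xls-r-300m-fr")

# Streaming avoids downloading the full split; access to the dataset is gated.
cv = load_dataset("mozilla-foundation/common_voice_7_0", "fr", split="test", streaming=True)
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))  # the model expects 16 kHz input
sample = next(iter(cv))

print("prediction:", asr(sample["audio"]["array"])["text"])
print("reference :", sample["sentence"])
```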
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - French", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 24.56, "name": "Test WER"}, {"type": "cer", "value": 7.3, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 63.62, "name": "Test WER"}, {"type": "cer", "value": 17.2, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 66.45, "name": "Test WER"}]}]}]}
Plim/xls-r-300m-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "fr", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
--- Model description ----------------- This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - FR dataset. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 2.0 * mixed\_precision\_training: Native AMP ### Training results It achieves the best result on STEP 6000 on the validation set: * Loss: 0.2619 * Wer: 0.2457 ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_7' with split 'test' 2. To evaluate on 'speech-recognition-community-v2/dev\_data'
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 2.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nIt achieves the best result on STEP 6000 on the validation set:\n\n\n* Loss: 0.2619\n* Wer: 0.2457", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0", "### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 2.0\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nIt achieves the best result on STEP 6000 on the validation set:\n\n\n* Loss: 0.2619\n* Wer: 0.2457", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0", "### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
[ 96, 155, 34, 50, 50 ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 2.0\n* mixed\\_precision\\_training: Native AMP### Training results\n\n\n\nIt achieves the best result on STEP 6000 on the validation set:\n\n\n* Loss: 0.2619\n* Wer: 0.2457### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'" ]
automatic-speech-recognition
transformers
# xls-r-300m-lm-fr

This model is a fine-tuned version of a local checkpoint, `./checkpoint-6000`, on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2619
- Wer: 0.2457

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 2.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.495         | 0.16  | 500  | 3.3883          | 1.0    |
| 2.9095        | 0.32  | 1000 | 2.9152          | 1.0000 |
| 1.8434        | 0.49  | 1500 | 1.0473          | 0.7446 |
| 1.4298        | 0.65  | 2000 | 0.5729          | 0.5130 |
| 1.1937        | 0.81  | 2500 | 0.3795          | 0.3450 |
| 1.1248        | 0.97  | 3000 | 0.3321          | 0.3052 |
| 1.0835        | 1.13  | 3500 | 0.3038          | 0.2805 |
| 1.0479        | 1.3   | 4000 | 0.2910          | 0.2689 |
| 1.0413        | 1.46  | 4500 | 0.2798          | 0.2593 |
| 1.014         | 1.62  | 5000 | 0.2727          | 0.2512 |
| 1.004         | 1.78  | 5500 | 0.2646          | 0.2471 |
| 0.9949        | 1.94  | 6000 | 0.2619          | 0.2457 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
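The hyperparameter list above maps fairly directly onto `transformers` `TrainingArguments`. The sketch below is an illustration only, not the author's training script: `output_dir` is a placeholder, the data and model setup are omitted, and single-GPU training is assumed for the effective batch size of 128.

```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above (illustrative only).
# 16 per-device train batch x 8 gradient-accumulation steps = total batch size 128 on one GPU.
training_args = TrainingArguments(
    output_dir="./xls-r-300m-lm-fr",   # placeholder
    learning_rate=7.5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=2.0,
    fp16=True,                          # "Native AMP" mixed precision
)
```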
{"language": ["fr"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "model-index": [{"name": "", "results": []}]}
Plim/xls-r-300m-lm-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "fr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #fr #endpoints_compatible #region-us
This model is a fine-tuned version of ./checkpoint-6000 on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - FR dataset. It achieves the following results on the evaluation set: * Loss: 0.2619 * Wer: 0.2457 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 2.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 2.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #fr #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 2.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
[ 52, 155, 5, 50 ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #fr #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 2.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
question-answering
transformers
# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4285

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5169        | 1.0   | 1642 | 1.6958          |
| 1.1326        | 2.0   | 3284 | 2.0009          |
| 0.8638        | 3.0   | 4926 | 2.4285          |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
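Since the usage sections above are empty, here is a minimal usage sketch with the `transformers` question-answering pipeline. The question and context are invented for illustration, and `handle_impossible_answer` is shown only because the model was fine-tuned on squad_v2, which contains unanswerable questions.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an extractive question-answering pipeline
qa = pipeline("question-answering", model="Plimpton/distilbert-base-uncased-finetuned-squad")

# Example question/context (invented for illustration)
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
    handle_impossible_answer=True,  # squad_v2 includes unanswerable questions
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```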
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
Plimpton/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-squad ======================================= This model is a fine-tuned version of distilbert-base-uncased on the squad\_v2 dataset. It achieves the following results on the evaluation set: * Loss: 2.4285 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ 50, 101, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
question-answering
transformers
This model, based on [Google's mT5](https://github.com/google-research/multilingual-t5), generates questions from Thai texts; as the example below shows, the output also contains a candidate answer after the `<ANS>` token. It was fine-tuned on the NSC2018 corpus.

```python
from transformers import MT5Tokenizer, MT5ForConditionalGeneration

# Load the fine-tuned tokenizer and model
tokenizer = MT5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qa-qg")
model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qa-qg")

text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน"

# Encode the passage and generate a question-answer pair with beam search
input_ids = tokenizer.encode(text, return_tensors='pt')

beam_output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    early_stopping=True
)

print(tokenizer.decode(beam_output[0]))
# >> <pad> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด <ANS> ฝั่งพระนครและฝั่งธนบุรี</s>

print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
# >> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด ฝั่งพระนครและฝั่งธนบุรี
```
{"language": ["thai", "th"], "license": "mit", "tags": ["question-generation", "question-answering"], "datasets": ["NSC2018", "iapp-wiki-qa-dataset", "XQuAD"]}
Pollawat/mt5-small-thai-qa-qg
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "question-generation", "question-answering", "dataset:NSC2018", "dataset:iapp-wiki-qa-dataset", "dataset:XQuAD", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "thai", "th" ]
TAGS #transformers #pytorch #mt5 #text2text-generation #question-generation #question-answering #dataset-NSC2018 #dataset-iapp-wiki-qa-dataset #dataset-XQuAD #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Google's mT5 This is a model for generating questions from Thai texts. It was fine-tuned on NSC2018 corpus
[]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #question-generation #question-answering #dataset-NSC2018 #dataset-iapp-wiki-qa-dataset #dataset-XQuAD #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 79 ]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #question-generation #question-answering #dataset-NSC2018 #dataset-iapp-wiki-qa-dataset #dataset-XQuAD #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]