| Column | Type | Values / length range |
|:---|:---|:---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | listlengths | 0 to 201 |
| languages | listlengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | listlengths | 0 to 722 |
| processed_texts | listlengths | 1 to 723 |
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0610 - Precision: 0.9275 - Recall: 0.9370 - F1: 0.9322 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2507 | 1.0 | 878 | 0.0714 | 0.9181 | 0.9243 | 0.9212 | 0.9813 | | 0.0516 | 2.0 | 1756 | 0.0617 | 0.9208 | 0.9325 | 0.9266 | 0.9828 | | 0.0306 | 3.0 | 2634 | 0.0610 | 0.9275 | 0.9370 | 0.9322 | 0.9836 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
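The card above leaves its usage section empty; as a minimal sketch (not part of the original card), the fine-tuned checkpoint can be loaded through the `transformers` token-classification pipeline:

```python
from transformers import pipeline

# Minimal usage sketch (assumed, not from the original card): load the fine-tuned
# checkpoint for NER inference; aggregation_strategy="simple" merges word pieces
# back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="indridinn/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```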
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9274720407485328, "name": "Precision"}, {"type": "recall", "value": 0.9370175634858485, "name": "Recall"}, {"type": "f1", "value": 0.932220367278798, "name": "F1"}, {"type": "accuracy", "value": 0.9836370279759162, "name": "Accuracy"}]}]}]}
indridinn/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0610 * Precision: 0.9275 * Recall: 0.9370 * F1: 0.9322 * Accuracy: 0.9836 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.11.2 * Pytorch 1.9.0+cu102 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
Dummy Model New
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesian by cahya", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 25.86, "name": "Test WER"}]}]}]}
inergi/wav2vec2-from-scratch-finetune-dummy
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "id", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
Dummy Model New
[]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Assamese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "as", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese") model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Assamese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "as", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese") model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub('’ ',' ',batch["sentence"]) batch["sentence"] = re.sub(' ‘',' ',batch["sentence"]) batch["sentence"] = re.sub('’|‘','\'',batch["sentence"]) batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 69.63 % ## Training The Common Voice `train` and `validation` datasets were used for training.
{"language": "as", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Assamese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice as", "type": "common_voice", "args": "as"}, "metrics": [{"type": "wer", "value": 69.63, "name": "Test WER"}]}]}]}
infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "as", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "as" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #as #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Assamese Fine-tuned facebook/wav2vec2-large-xlsr-53 on Assamese using the Common Voice. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Assamese test data of Common Voice. Test Result: 69.63 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Assamese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Assamese using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\nThe model can be used directly (without a language model) as follows:", "## Evaluation\nThe model can be evaluated as follows on the Assamese test data of Common Voice.\n\nTest Result: 69.63 %", "## Training\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #as #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Assamese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Assamese using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\nThe model can be used directly (without a language model) as follows:", "## Evaluation\nThe model can be evaluated as follows on the Assamese test data of Common Voice.\n\nTest Result: 69.63 %", "## Training\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Odia Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "or", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia") model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Odia test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "or", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia") model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।\–]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub('’ ',' ',batch["sentence"]) batch["sentence"] = re.sub(' ‘',' ',batch["sentence"]) batch["sentence"] = re.sub('’|‘','\'',batch["sentence"]) batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 55.07 % ## Training The Common Voice `train` and `validation` datasets were used for training.
{"language": "or", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Odia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice as", "type": "common_voice", "args": "or"}, "metrics": [{"type": "wer", "value": 55.07, "name": "Test WER"}]}]}]}
infinitejoy/Wav2Vec2-Large-XLSR-53-Odia
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "or", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "or" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #or #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Odia Fine-tuned facebook/wav2vec2-large-xlsr-53 on Odia using the Common Voice. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Odia test data of Common Voice. Test Result: 55.07 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Odia\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Odia using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\nThe model can be used directly (without a language model) as follows:", "## Evaluation\nThe model can be evaluated as follows on the Odia test data of Common Voice.\n\nTest Result: 55.07 %", "## Training\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #or #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Odia\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Odia using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\nThe model can be used directly (without a language model) as follows:", "## Evaluation\nThe model can be evaluated as follows on the Odia test data of Common Voice.\n\nTest Result: 55.07 %", "## Training\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Tamil Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ta", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil") model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Tamil test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ta", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil") model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub('’ ',' ',batch["sentence"]) batch["sentence"] = re.sub(' ‘',' ',batch["sentence"]) batch["sentence"] = re.sub('’|‘','\'',batch["sentence"]) batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 71.29 % ## Training The Common Voice `train` and `validation` datasets were used for training.
{"language": "ta", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Tamil", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 71.29, "name": "Test WER"}]}]}]}
infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ta", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ta" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Tamil Fine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Tamil test data of Common Voice. Test Result: 71.29 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Tamil\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\nThe model can be used directly (without a language model) as follows:", "## Evaluation\nThe model can be evaluated as follows on the Tamil test data of Common Voice.\n\nTest Result: 71.29 %", "## Training\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Tamil\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\nThe model can be used directly (without a language model) as follows:", "## Evaluation\nThe model can be evaluated as follows on the Tamil test data of Common Voice.\n\nTest Result: 71.29 %", "## Training\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-abkhaz-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 0.1614 - Wer: 0.2907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.2881 | 4.26 | 4000 | 0.3764 | 0.6461 | | 1.0767 | 8.53 | 8000 | 0.2657 | 0.5164 | | 0.9841 | 12.79 | 12000 | 0.2330 | 0.4445 | | 0.9274 | 17.06 | 16000 | 0.2134 | 0.3929 | | 0.8781 | 21.32 | 20000 | 0.1945 | 0.3886 | | 0.8381 | 25.59 | 24000 | 0.1840 | 0.3737 | | 0.8054 | 29.85 | 28000 | 0.1756 | 0.3523 | | 0.7763 | 34.12 | 32000 | 0.1745 | 0.3299 | | 0.7474 | 38.38 | 36000 | 0.1677 | 0.3074 | | 0.7298 | 42.64 | 40000 | 0.1649 | 0.2963 | | 0.7125 | 46.91 | 44000 | 0.1617 | 0.2931 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
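This card does not include an inference example. A minimal sketch, not part of the original card, assuming a local 48 kHz clip at the placeholder path `sample.wav`:

```python
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

# Sketch only: the checkpoint name comes from the card; the audio path is a placeholder.
model_id = "infinitejoy/wav2vec2-large-xls-r-300m-abkhaz-cv8"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("sample.wav")                   # placeholder file
speech = torchaudio.functional.resample(speech, sr, 16_000)  # the model expects 16 kHz input
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```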
{"language": ["ab"], "license": "apache-2.0", "tags": ["ab", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Abkhaz", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ab"}, "metrics": [{"type": "wer", "value": 27.6, "name": "Test WER"}, {"type": "cer", "value": 4.577, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-abkhaz-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ab", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ab" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #ab #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-abkhaz-cv8 ==================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - AB dataset. It achieves the following results on the evaluation set: * Loss: 0.1614 * Wer: 0.2907 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 4000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 4000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #ab #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 4000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-abkhaz This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 0.5359 - Wer: 0.6192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8617 | 22.73 | 500 | 2.6264 | 1.0013 | | 1.2716 | 45.45 | 1000 | 0.6218 | 0.6942 | | 1.049 | 68.18 | 1500 | 0.5442 | 0.6368 | | 0.9632 | 90.91 | 2000 | 0.5364 | 0.6242 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
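For a quick check, this kind of checkpoint can also be driven through the high-level ASR pipeline; a sketch, not part of the original card, with a placeholder file path:

```python
from transformers import pipeline

# Sketch: the pipeline handles decoding and resampling of the input file internally.
asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-abkhaz",
)
print(asr("clip.wav"))  # placeholder path; prints a dict like {"text": "..."}
```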
{"language": ["ab"], "license": "apache-2.0", "tags": ["ab", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Abkhaz", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ab"}, "metrics": [{"type": "wer", "value": 60.07, "name": "Test WER"}, {"type": "cer", "value": 12.5, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-abkhaz
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ab", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ab" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #ab #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-abkhaz ================================ This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - AB dataset. It achieves the following results on the evaluation set: * Loss: 0.5359 * Wer: 0.6192 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 200 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #ab #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-300m-SV This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset. It achieves the following results on the evaluation set: - Loss: NA - Wer: NA ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test` ```bash python eval.py \ --model_id infinitejoy/wav2vec2-large-xls-r-300m-arabic \ --dataset mozilla-foundation/common_voice_7_0 --config ar --split test --log_outputs ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py \ --model_id infinitejoy/wav2vec2-large-xls-r-300m-arabic --dataset speech-recognition-community-v2/dev_data \ --config ar --split validation --chunk_length_s 10 --stride_length_s 1 ``` ### Inference With LM ```python import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F model_id = "infinitejoy/wav2vec2-large-xls-r-300m-arabic" sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "ar", split="test", streaming=True, use_auth_token=True)) sample = next(sample_iter) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy() model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits transcription = processor.batch_decode(logits.numpy()).text ``` ### Eval results on Common Voice 7 "test" (WER): | Without LM | With LM (run `./eval.py`) | |---|---| | NA | NA |
{"language": ["ar"], "license": "apache-2.0", "tags": ["ar", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Arabic", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ar"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ar"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-arabic
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ar", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #ar #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
XLS-R-300m-SV ============= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - AR dataset. It achieves the following results on the evaluation set: * Loss: NA * Wer: NA Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.0+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test' 2. To evaluate on 'speech-recognition-community-v2/dev\_data' ### Inference With LM ### Eval results on Common Voice 7 "test" (WER):
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.10.3", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'", "### Inference With LM", "### Eval results on Common Voice 7 \"test\" (WER):" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #ar #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.10.3", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'", "### Inference With LM", "### Eval results on Common Voice 7 \"test\" (WER):" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-armenian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HY-AM dataset. It achieves the following results on the evaluation set: - Loss: 0.9669 - Wer: 0.6942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 1.7294 | 27.78 | 500 | 0.8540 | 0.9944 | | 0.8863 | 55.56 | 1000 | 0.7282 | 0.7312 | | 0.5789 | 83.33 | 1500 | 0.8178 | 0.8102 | | 0.3899 | 111.11 | 2000 | 0.8034 | 0.7701 | | 0.2869 | 138.89 | 2500 | 0.9061 | 0.6999 | | 0.1934 | 166.67 | 3000 | 0.9400 | 0.7105 | | 0.1551 | 194.44 | 3500 | 0.9667 | 0.6955 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
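The reported figures are word error rates; once predictions have been decoded, they can be computed with the `evaluate` library. A minimal sketch with placeholder strings:

```python
import evaluate

# Sketch: predictions/references are placeholders standing in for the decoded
# test-set transcriptions and the ground-truth sentences.
wer_metric = evaluate.load("wer")
predictions = ["a decoded transcription"]
references = ["the reference transcription"]
print(wer_metric.compute(predictions=predictions, references=references))
```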
{"language": ["hy-AM"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Armenian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "hy-AM"}, "metrics": [{"type": "wer", "value": 101.627, "name": "Test WER"}, {"type": "cer", "value": 158.767, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-armenian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hy-AM" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-armenian ================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HY-AM dataset. It achieves the following results on the evaluation set: * Loss: 0.9669 * Wer: 0.6942 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 200.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-assamese-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AS dataset. It achieves the following results on the evaluation set: - Loss: 0.9814 - Wer: 0.7402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 20.0 | 400 | 3.1447 | 1.0 | | No log | 40.0 | 800 | 1.0074 | 0.8556 | | 3.1278 | 60.0 | 1200 | 0.9507 | 0.7711 | | 3.1278 | 80.0 | 1600 | 0.9730 | 0.7630 | | 0.8247 | 100.0 | 2000 | 0.9814 | 0.7402 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
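The hyperparameters listed above map directly onto `transformers.TrainingArguments`; a sketch in which the output directory and the `fp16` flag (inferred from "Native AMP") are assumptions:

```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters in the card; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-assamese-cv8",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=400,
    num_train_epochs=100.0,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```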
{"language": ["as"], "license": "apache-2.0", "tags": ["as", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Assamese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "as"}, "metrics": [{"type": "wer", "value": 65.966, "name": "Test WER"}, {"type": "cer", "value": 22.188, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-assamese-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "as", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "as" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #as #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-assamese-cv8 ====================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - AS dataset. It achieves the following results on the evaluation set: * Loss: 0.9814 * Wer: 0.7402 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 400 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #as #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
# wav2vec2-large-xls-r-300m-assamese This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_7_0 dataset. It achieves the following results on the evaluation set: - WER: 0.7954545454545454 - CER: 0.32341269841269843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data To compute the evaluation parameters ```bash cd wav2vec2-large-xls-r-300m-assamese; python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config as --split test --log_outputs ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-4 - train_batch_size: 16 - eval_batch_size: 8 - seed: not given - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 400 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------: | | 1.584065 | NA | 400 | 1.584065 | 0.915512 | | 1.658865 | Na | 800 | 1.658865 | 0.805096 | | 1.882352 | NA | 1200 | 1.882352 | 0.820742 | | 1.881240 | NA | 1600 | 1.881240 | 0.810907 | | 2.159748 | NA | 2000 | 2.159748 | 0.804202 | | 1.992871 | NA | 2400 | 1.992871 | 0.803308 | | 2.201436 | NA | 2800 | 2.201436 | 0.802861 | | 2.165218 | NA | 3200 | 2.165218 | 0.793920 | | 2.253643 | NA | 3600 | 2.253643 | 0.796603 | | 2.265880 | NA | 4000 | 2.265880 | 0.790344 | | 2.293935 | NA | 4400 | 2.293935 | 0.797050 | | 2.288851 | NA | 4800 | 2.288851 | 0.784086 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.13.3 - Tokenizers 0.10.3
{"language": "as", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning", "as", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Assamese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "as"}, "metrics": [{"type": "wer", "value": 72.64, "name": "Test WER"}, {"type": "cer", "value": 27.35, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-assamese
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning", "as", "robust-speech-event", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "as" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning #as #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-assamese ================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice\_7\_0 dataset. It achieves the following results on the evaluation set: * WER: 0.7954545454545454 * CER: 0.32341269841269843 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- To compute the evaluation parameters Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-4 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: not given * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 400 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu113 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-4\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: not given\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 400\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu113\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning #as #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-4\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: not given\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 400\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu113\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-basaa-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset. It achieves the following results on the evaluation set: - Loss: 0.4648 - Wer: 0.5472 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9421 | 12.82 | 500 | 2.8894 | 1.0 | | 1.1872 | 25.64 | 1000 | 0.6688 | 0.7460 | | 0.8894 | 38.46 | 1500 | 0.4868 | 0.6516 | | 0.769 | 51.28 | 2000 | 0.4960 | 0.6507 | | 0.6936 | 64.1 | 2500 | 0.4781 | 0.5384 | | 0.624 | 76.92 | 3000 | 0.4643 | 0.5430 | | 0.5966 | 89.74 | 3500 | 0.4530 | 0.5591 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": ["bas"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "bas", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Basaa", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "bas"}, "metrics": [{"type": "wer", "value": 38.057, "name": "Test WER"}, {"type": "cer", "value": 11.233, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-basaa-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "bas", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "bas" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #bas #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-basaa-cv8 =================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - BAS dataset. It achieves the following results on the evaluation set: * Loss: 0.4648 * Wer: 0.5472 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #bas #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-basaa This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BAS dataset. It achieves the following results on the evaluation set: - Loss: 0.5975 - Wer: 0.4981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 2.9287 | 15.62 | 500 | 2.8774 | 1.0 | | 1.1182 | 31.25 | 1000 | 0.6248 | 0.7131 | | 0.8329 | 46.88 | 1500 | 0.5573 | 0.5792 | | 0.7109 | 62.5 | 2000 | 0.5420 | 0.5683 | | 0.6295 | 78.12 | 2500 | 0.5166 | 0.5395 | | 0.5715 | 93.75 | 3000 | 0.5487 | 0.5629 | | 0.5016 | 109.38 | 3500 | 0.5370 | 0.5471 | | 0.4661 | 125.0 | 4000 | 0.5621 | 0.5395 | | 0.423 | 140.62 | 4500 | 0.5658 | 0.5248 | | 0.3793 | 156.25 | 5000 | 0.5921 | 0.4981 | | 0.3651 | 171.88 | 5500 | 0.5987 | 0.4888 | | 0.3351 | 187.5 | 6000 | 0.6017 | 0.4948 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["bas"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Basaa", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "bas"}, "metrics": [{"type": "wer", "value": 104.08, "name": "Test WER"}, {"type": "cer", "value": 228.48, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-basaa
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "bas", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "bas" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #bas #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-basaa =============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - BAS dataset. It achieves the following results on the evaluation set: * Loss: 0.5975 * Wer: 0.4981 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 200.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #bas #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-bashkir This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset. It achieves the following results on the evaluation set: - Loss: 0.1892 - Wer: 0.2421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.4792 | 0.5 | 2000 | 0.4598 | 0.5404 | | 1.449 | 1.0 | 4000 | 0.4650 | 0.5610 | | 1.3742 | 1.49 | 6000 | 0.4001 | 0.4977 | | 1.3375 | 1.99 | 8000 | 0.3916 | 0.4894 | | 1.2961 | 2.49 | 10000 | 0.3641 | 0.4569 | | 1.2714 | 2.99 | 12000 | 0.3491 | 0.4488 | | 1.2399 | 3.48 | 14000 | 0.3151 | 0.3986 | | 1.2067 | 3.98 | 16000 | 0.3081 | 0.3923 | | 1.1842 | 4.48 | 18000 | 0.2875 | 0.3703 | | 1.1644 | 4.98 | 20000 | 0.2840 | 0.3670 | | 1.161 | 5.48 | 22000 | 0.2790 | 0.3597 | | 1.1303 | 5.97 | 24000 | 0.2552 | 0.3272 | | 1.0874 | 6.47 | 26000 | 0.2405 | 0.3142 | | 1.0613 | 6.97 | 28000 | 0.2352 | 0.3055 | | 1.0498 | 7.47 | 30000 | 0.2249 | 0.2910 | | 1.021 | 7.96 | 32000 | 0.2118 | 0.2752 | | 1.0002 | 8.46 | 34000 | 0.2046 | 0.2662 | | 0.9762 | 8.96 | 36000 | 0.1969 | 0.2530 | | 0.9568 | 9.46 | 38000 | 0.1917 | 0.2449 | | 0.953 | 9.96 | 40000 | 0.1893 | 0.2425 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
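The hyperparameter list in the card above maps one-to-one onto `transformers` `TrainingArguments`. Purely as an illustration (not the author's actual training script), a sketch of that configuration might look like the block below; `output_dir` is a placeholder, other arguments used by the CTC example scripts are omitted, and the Adam betas/epsilon quoted in the card are simply the optimizer defaults.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; "output" is a placeholder directory.
training_args = TrainingArguments(
    output_dir="output",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",   # linear decay after the warmup phase
    warmup_steps=2000,
    num_train_epochs=10.0,
    fp16=True,                    # "Native AMP" mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults,
    # matching the optimizer line in the card.
)
```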
{"language": ["ba"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Bashkir", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ba"}, "metrics": [{"type": "wer", "value": 24.2, "name": "Test WER"}, {"type": "cer", "value": 5.08, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-bashkir
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "ba", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ba" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #ba #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-bashkir ================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - BA dataset. It achieves the following results on the evaluation set: * Loss: 0.1892 * Wer: 0.2421 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 10.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #ba #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-300M - Breton This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset. It achieves the following results on the evaluation set: - Loss: NA - Wer: NA ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: ### Training results NA ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 --dataset mozilla-foundation/common_voice_8_0 --config br --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 --dataset speech-recognition-community-v2/dev_data --config br --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ### Inference With LM ```python import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F model_id = "infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8" sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "br", split="test", streaming=True, use_auth_token=True)) sample = next(sample_iter) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy() model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits transcription = processor.batch_decode(logits.numpy()).text ``` ### Eval results on Common Voice 7 "test" (WER): NA
{"language": ["br"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "br", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Breton", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "br"}, "metrics": [{"type": "wer", "value": 54.855, "name": "Test WER"}, {"type": "cer", "value": 17.865, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "br", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "br" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #br #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
# XLS-R-300M - Breton This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset. It achieves the following results on the evaluation set: - Loss: NA - Wer: NA ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: ### Training results NA ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test' 2. To evaluate on 'speech-recognition-community-v2/dev_data' ### Inference With LM ### Eval results on Common Voice 7 "test" (WER): NA
[ "# XLS-R-300M - Breton\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.\nIt achieves the following results on the evaluation set:\n- Loss: NA\n- Wer: NA", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:", "### Training results\n\nNA", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.10.3", "#### Evaluation Commands\n\n1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'\n\n\n\n2. To evaluate on 'speech-recognition-community-v2/dev_data'", "### Inference With LM", "### Eval results on Common Voice 7 \"test\" (WER):\n\nNA" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #br #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# XLS-R-300M - Breton\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.\nIt achieves the following results on the evaluation set:\n- Loss: NA\n- Wer: NA", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:", "### Training results\n\nNA", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.10.3", "#### Evaluation Commands\n\n1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'\n\n\n\n2. To evaluate on 'speech-recognition-community-v2/dev_data'", "### Inference With LM", "### Eval results on Common Voice 7 \"test\" (WER):\n\nNA" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-breton This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BR dataset. It achieves the following results on the evaluation set: - Loss: 0.6102 - Wer: 0.4455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9205 | 3.33 | 500 | 2.8659 | 1.0 | | 1.6403 | 6.67 | 1000 | 0.9440 | 0.7593 | | 1.3483 | 10.0 | 1500 | 0.7580 | 0.6215 | | 1.2255 | 13.33 | 2000 | 0.6851 | 0.5722 | | 1.1139 | 16.67 | 2500 | 0.6409 | 0.5220 | | 1.0688 | 20.0 | 3000 | 0.6245 | 0.5055 | | 0.99 | 23.33 | 3500 | 0.6142 | 0.4874 | | 0.9345 | 26.67 | 4000 | 0.5946 | 0.4829 | | 0.9058 | 30.0 | 4500 | 0.6229 | 0.4704 | | 0.8683 | 33.33 | 5000 | 0.6153 | 0.4666 | | 0.8367 | 36.67 | 5500 | 0.5952 | 0.4542 | | 0.8162 | 40.0 | 6000 | 0.6030 | 0.4541 | | 0.8042 | 43.33 | 6500 | 0.5972 | 0.4485 | | 0.7836 | 46.67 | 7000 | 0.6070 | 0.4497 | | 0.7556 | 50.0 | 7500 | 0.6102 | 0.4455 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["br"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Breton", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "br"}, "metrics": [{"type": "wer", "value": 107.955, "name": "Test WER"}, {"type": "cer", "value": 379.33, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-breton
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "br", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "br" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #br #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-breton ================================ This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - BR dataset. It achieves the following results on the evaluation set: * Loss: 0.6102 * Wer: 0.4455 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #br #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-bulgarian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BG dataset. It achieves the following results on the evaluation set: - Loss: 0.4487 - Wer: 0.4674 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9774 | 6.33 | 500 | 2.9769 | 1.0 | | 1.3453 | 12.66 | 1000 | 0.6523 | 0.6980 | | 1.1658 | 18.99 | 1500 | 0.5636 | 0.6359 | | 1.0797 | 25.32 | 2000 | 0.5004 | 0.5759 | | 1.044 | 31.65 | 2500 | 0.4958 | 0.5569 | | 0.9915 | 37.97 | 3000 | 0.4971 | 0.5350 | | 0.9429 | 44.3 | 3500 | 0.4829 | 0.5229 | | 0.9266 | 50.63 | 4000 | 0.4515 | 0.5074 | | 0.8965 | 56.96 | 4500 | 0.4599 | 0.5039 | | 0.878 | 63.29 | 5000 | 0.4735 | 0.4954 | | 0.8494 | 69.62 | 5500 | 0.4460 | 0.4878 | | 0.8343 | 75.95 | 6000 | 0.4510 | 0.4795 | | 0.8236 | 82.28 | 6500 | 0.4538 | 0.4789 | | 0.8069 | 88.61 | 7000 | 0.4526 | 0.4748 | | 0.7958 | 94.94 | 7500 | 0.4496 | 0.4700 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
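The WER/CER percentages in this card's metadata come from scoring model transcriptions against the Common Voice references. As a rough sketch of how such numbers are computed, assuming the `datasets` 1.x `load_metric` API contemporaneous with these cards and toy strings in place of real transcriptions:

```python
from datasets import load_metric

# Toy stand-ins; in practice `predictions` are the model transcriptions and
# `references` the Common Voice test-split sentences.
predictions = ["това е тест", "здравей светъ"]
references = ["това е тест", "здравей свят"]

# Both metric scripts are thin wrappers around the jiwer package.
wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

# compute() returns a fraction; the card metadata reports it as a percentage.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```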
{"language": ["bg"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "bg", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Bulgarian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "bg"}, "metrics": [{"type": "wer", "value": 46.68, "name": "Test WER"}, {"type": "cer", "value": 10.75, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "bg"}, "metrics": [{"type": "wer", "value": 63.68, "name": "Test WER"}, {"type": "cer", "value": 19.88, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "bg"}, "metrics": [{"type": "wer", "value": 64.08, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-bulgarian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "bg", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "bg" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #bg #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-bulgarian =================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - BG dataset. It achieves the following results on the evaluation set: * Loss: 0.4487 * Wer: 0.4674 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #bg #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-chuvash This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - CV dataset. It achieves the following results on the evaluation set: - Loss: 0.7651 - Wer: 0.6166 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.8032 | 8.77 | 500 | 0.8059 | 0.8352 | | 1.2608 | 17.54 | 1000 | 0.5828 | 0.6769 | | 1.1337 | 26.32 | 1500 | 0.6892 | 0.6908 | | 1.0457 | 35.09 | 2000 | 0.7077 | 0.6781 | | 0.97 | 43.86 | 2500 | 0.5993 | 0.6228 | | 0.8767 | 52.63 | 3000 | 0.7213 | 0.6604 | | 0.8223 | 61.4 | 3500 | 0.8161 | 0.6968 | | 0.7441 | 70.18 | 4000 | 0.7057 | 0.6184 | | 0.7011 | 78.95 | 4500 | 0.7027 | 0.6024 | | 0.6542 | 87.72 | 5000 | 0.7092 | 0.5979 | | 0.6081 | 96.49 | 5500 | 0.7917 | 0.6324 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["cv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "cv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Chuvash", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "cv"}, "metrics": [{"type": "wer", "value": 60.31, "name": "Test WER"}, {"type": "cer", "value": 15.08, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-chuvash
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "cv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "cv" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #cv #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-chuvash ================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - CV dataset. It achieves the following results on the evaluation set: * Loss: 0.7651 * Wer: 0.6166 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #cv #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-finnish This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FI dataset. It achieves the following results on the evaluation set: - Loss: 0.2307 - Wer: 0.2984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 70.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9032 | 4.39 | 500 | 2.8768 | 1.0 | | 1.5724 | 8.77 | 1000 | 0.5638 | 0.6438 | | 1.1818 | 13.16 | 1500 | 0.3338 | 0.4759 | | 1.0798 | 17.54 | 2000 | 0.2876 | 0.4086 | | 1.0296 | 21.93 | 2500 | 0.2694 | 0.4248 | | 1.0014 | 26.32 | 3000 | 0.2626 | 0.3733 | | 0.9616 | 30.7 | 3500 | 0.2391 | 0.3294 | | 0.9303 | 35.09 | 4000 | 0.2352 | 0.3218 | | 0.9248 | 39.47 | 4500 | 0.2351 | 0.3207 | | 0.8837 | 43.86 | 5000 | 0.2341 | 0.3103 | | 0.8887 | 48.25 | 5500 | 0.2311 | 0.3115 | | 0.8529 | 52.63 | 6000 | 0.2230 | 0.3001 | | 0.8404 | 57.02 | 6500 | 0.2279 | 0.3054 | | 0.8242 | 61.4 | 7000 | 0.2298 | 0.3006 | | 0.8288 | 65.79 | 7500 | 0.2333 | 0.2997 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["fi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "fi", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Finnish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 29.97, "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-finnish
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fi", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fi" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #fi #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-finnish ================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - FI dataset. It achieves the following results on the evaluation set: * Loss: 0.2307 * Wer: 0.2984 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 70.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 70.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #fi #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 70.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-galician This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GL dataset. It achieves the following results on the evaluation set: - Loss: 0.1525 - Wer: 0.1542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0067 | 4.35 | 500 | 2.9632 | 1.0 | | 1.4939 | 8.7 | 1000 | 0.5005 | 0.4157 | | 0.9982 | 13.04 | 1500 | 0.1967 | 0.1857 | | 0.8726 | 17.39 | 2000 | 0.1587 | 0.1564 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["gl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "gl", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Galician", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "gl"}, "metrics": [{"type": "wer", "value": 101.54, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "gl"}, "metrics": [{"type": "wer", "value": 105.69, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "gl"}, "metrics": [{"type": "wer", "value": 101.95, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-galician
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "gl", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "gl" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #gl #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-galician ================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - GL dataset. It achieves the following results on the evaluation set: * Loss: 0.1525 * Wer: 0.1542 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 20.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #gl #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-georgian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KA dataset. It achieves the following results on the evaluation set: - Loss: 0.3666 - Wer: 0.4211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.8805 | 5.95 | 500 | 0.7547 | 0.8438 | | 1.2123 | 11.9 | 1000 | 0.4732 | 0.6542 | | 1.0822 | 17.86 | 1500 | 0.4027 | 0.5778 | | 0.9938 | 23.81 | 2000 | 0.3847 | 0.5524 | | 0.9383 | 29.76 | 2500 | 0.3845 | 0.5204 | | 0.8932 | 35.71 | 3000 | 0.3833 | 0.5297 | | 0.8495 | 41.67 | 3500 | 0.3759 | 0.5036 | | 0.8201 | 47.62 | 4000 | 0.3616 | 0.4859 | | 0.7794 | 53.57 | 4500 | 0.3874 | 0.4938 | | 0.735 | 59.52 | 5000 | 0.3748 | 0.4782 | | 0.7082 | 65.48 | 5500 | 0.3615 | 0.4675 | | 0.669 | 71.43 | 6000 | 0.3797 | 0.4601 | | 0.6457 | 77.38 | 6500 | 0.3812 | 0.4515 | | 0.6098 | 83.33 | 7000 | 0.3660 | 0.4343 | | 0.5874 | 89.29 | 7500 | 0.3640 | 0.4257 | | 0.5627 | 95.24 | 8000 | 0.3661 | 0.4239 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
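For completeness, checkpoints from this family can also be exercised through the high-level ASR pipeline instead of the manual `AutoModelForCTC` route shown in earlier cards. A minimal sketch, where `clip.wav` is a placeholder audio file:

```python
from transformers import pipeline

# High-level alternative to loading the model and processor by hand.
asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-georgian",
)

# "clip.wav" is a placeholder; the pipeline handles loading and resampling
# (ffmpeg needs to be installed for compressed audio formats).
result = asr("clip.wav")
print(result["text"])
```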
{"language": ["ka"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ka", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Georgian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ka"}, "metrics": [{"type": "wer", "value": 42.09, "name": "Test WER"}, {"type": "cer", "value": 8.01, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ka"}, "metrics": [{"type": "wer", "value": 65.32, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ka"}, "metrics": [{"type": "wer", "value": 65.03, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-georgian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ka", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ka" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ka #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-georgian ================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - KA dataset. It achieves the following results on the evaluation set: * Loss: 0.3666 * Wer: 0.4211 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ka #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-greek This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - EL dataset. It achieves the following results on the evaluation set: - Loss: 0.6592 - Wer: 0.4564 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.0928 | 4.42 | 500 | 3.0804 | 1.0073 | | 1.4505 | 8.85 | 1000 | 0.9038 | 0.7330 | | 1.2207 | 13.27 | 1500 | 0.7375 | 0.6045 | | 1.0695 | 17.7 | 2000 | 0.7119 | 0.5441 | | 1.0104 | 22.12 | 2500 | 0.6069 | 0.5296 | | 0.9299 | 26.55 | 3000 | 0.6168 | 0.5206 | | 0.8588 | 30.97 | 3500 | 0.6382 | 0.5171 | | 0.7942 | 35.4 | 4000 | 0.6048 | 0.4988 | | 0.7808 | 39.82 | 4500 | 0.6730 | 0.5084 | | 0.743 | 44.25 | 5000 | 0.6749 | 0.5012 | | 0.6652 | 48.67 | 5500 | 0.6491 | 0.4735 | | 0.6386 | 53.1 | 6000 | 0.6928 | 0.4954 | | 0.5945 | 57.52 | 6500 | 0.6359 | 0.4798 | | 0.5561 | 61.95 | 7000 | 0.6409 | 0.4799 | | 0.5464 | 66.37 | 7500 | 0.6452 | 0.4691 | | 0.5119 | 70.8 | 8000 | 0.6376 | 0.4657 | | 0.474 | 75.22 | 8500 | 0.6541 | 0.4700 | | 0.45 | 79.65 | 9000 | 0.6374 | 0.4571 | | 0.4315 | 84.07 | 9500 | 0.6568 | 0.4625 | | 0.3967 | 88.5 | 10000 | 0.6636 | 0.4605 | | 0.3937 | 92.92 | 10500 | 0.6537 | 0.4597 | | 0.3788 | 97.35 | 11000 | 0.6614 | 0.4589 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["el"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "el", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Greek", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "el"}, "metrics": [{"type": "wer", "value": 102.23963133640552, "name": "Test WER"}, {"type": "cer", "value": 146.28, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "el"}, "metrics": [{"type": "wer", "value": 99.92, "name": "Test WER"}, {"type": "cer", "value": 132.38, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-greek
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "el", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "el" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #el #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-greek =============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - EL dataset. It achieves the following results on the evaluation set: * Loss: 0.6592 * Wer: 0.4564 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #el #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hausa This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HA dataset. It achieves the following results on the evaluation set: - Loss: 0.5756 - Wer: 0.6014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.7064 | 11.36 | 500 | 2.7112 | 1.0 | | 1.3079 | 22.73 | 1000 | 0.7337 | 0.7776 | | 1.0919 | 34.09 | 1500 | 0.5938 | 0.7023 | | 0.9546 | 45.45 | 2000 | 0.5698 | 0.6133 | | 0.8895 | 56.82 | 2500 | 0.5739 | 0.6142 | | 0.8152 | 68.18 | 3000 | 0.5579 | 0.6091 | | 0.7703 | 79.55 | 3500 | 0.5813 | 0.6210 | | 0.732 | 90.91 | 4000 | 0.5756 | 0.5860 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["ha"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ha", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Hausa", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ha"}, "metrics": [{"type": "wer", "value": 100, "name": "Test WER"}, {"type": "cer", "value": 132.32, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-hausa
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ha", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ha" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ha #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-hausa =============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HA dataset. It achieves the following results on the evaluation set: * Loss: 0.5756 * Wer: 0.6014 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ha #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.5414 - Wer: 1.0194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.6095 | 3.38 | 500 | 4.5881 | 0.9999 | | 3.3396 | 6.76 | 1000 | 3.3301 | 1.0001 | | 2.0061 | 10.14 | 1500 | 1.2096 | 1.0063 | | 1.523 | 13.51 | 2000 | 0.7836 | 1.0051 | | 1.3868 | 16.89 | 2500 | 0.6837 | 1.0080 | | 1.2807 | 20.27 | 3000 | 0.6568 | 1.0112 | | 1.231 | 23.65 | 3500 | 0.6120 | 1.0105 | | 1.1673 | 27.03 | 4000 | 0.5972 | 1.0089 | | 1.1416 | 30.41 | 4500 | 0.5780 | 1.0132 | | 1.0738 | 33.78 | 5000 | 0.5806 | 1.0123 | | 1.0771 | 37.16 | 5500 | 0.5586 | 1.0067 | | 1.0287 | 40.54 | 6000 | 0.5464 | 1.0058 | | 1.0106 | 43.92 | 6500 | 0.5407 | 1.0062 | | 0.9538 | 47.3 | 7000 | 0.5334 | 1.0089 | | 0.9607 | 50.68 | 7500 | 0.5395 | 1.0110 | | 0.9108 | 54.05 | 8000 | 0.5502 | 1.0137 | | 0.9252 | 57.43 | 8500 | 0.5498 | 1.0062 | | 0.8943 | 60.81 | 9000 | 0.5448 | 1.0158 | | 0.8728 | 64.19 | 9500 | 0.5257 | 1.0113 | | 0.8577 | 67.57 | 10000 | 0.5550 | 1.0178 | | 0.8332 | 70.95 | 10500 | 0.5607 | 1.0166 | | 0.8174 | 74.32 | 11000 | 0.5429 | 1.0145 | | 0.8168 | 77.7 | 11500 | 0.5561 | 1.0116 | | 0.7872 | 81.08 | 12000 | 0.5478 | 1.0164 | | 0.7707 | 84.46 | 12500 | 0.5412 | 1.0216 | | 0.7742 | 87.84 | 13000 | 0.5391 | 1.0207 | | 0.7594 | 91.22 | 13500 | 0.5379 | 1.0208 | | 0.7678 | 94.59 | 14000 | 0.5415 | 1.0198 | | 0.7502 | 97.97 | 14500 | 0.5409 | 1.0191 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hi", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Hindi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 100, "name": "Test WER"}, {"type": "cer", "value": 92.98, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-hindi
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hi", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hi" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hi #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-hindi =============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HI dataset. It achieves the following results on the evaluation set: * Loss: 0.5414 * Wer: 1.0194 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hi #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hungarian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HU dataset. It achieves the following results on the evaluation set: - Loss: 0.2562 - Wer: 0.3112 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.3964 | 3.52 | 1000 | 1.2251 | 0.8781 | | 1.3176 | 7.04 | 2000 | 0.3872 | 0.4462 | | 1.1999 | 10.56 | 3000 | 0.3244 | 0.3922 | | 1.1633 | 14.08 | 4000 | 0.3014 | 0.3704 | | 1.1132 | 17.61 | 5000 | 0.2913 | 0.3623 | | 1.0888 | 21.13 | 6000 | 0.2864 | 0.3498 | | 1.0487 | 24.65 | 7000 | 0.2821 | 0.3435 | | 1.0431 | 28.17 | 8000 | 0.2739 | 0.3308 | | 0.9896 | 31.69 | 9000 | 0.2629 | 0.3243 | | 0.9839 | 35.21 | 10000 | 0.2806 | 0.3308 | | 0.9586 | 38.73 | 11000 | 0.2650 | 0.3235 | | 0.9501 | 42.25 | 12000 | 0.2585 | 0.3173 | | 0.938 | 45.77 | 13000 | 0.2561 | 0.3117 | | 0.921 | 49.3 | 14000 | 0.2559 | 0.3115 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["hu"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hu", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Hungarian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "hu"}, "metrics": [{"type": "wer", "value": 31.099, "name": "Test WER"}, {"type": "cer", "value": 6.737, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hu"}, "metrics": [{"type": "wer", "value": 45.469, "name": "Test WER"}, {"type": "cer", "value": 15.727, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "hu"}, "metrics": [{"type": "wer", "value": 48.2, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-hungarian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hu", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hu" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-hungarian =================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HU dataset. It achieves the following results on the evaluation set: * Loss: 0.2562 * Wer: 0.3112 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-indonesian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ID dataset. It achieves the following results on the evaluation set: - Loss: 0.2759 - Wer: 0.3256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.0387 | 4.72 | 1000 | 3.0892 | 1.0 | | 1.7911 | 9.43 | 2000 | 0.8451 | 0.6702 | | 1.2826 | 14.15 | 3000 | 0.4211 | 0.4166 | | 1.1802 | 18.87 | 4000 | 0.3508 | 0.4690 | | 1.1065 | 23.58 | 5000 | 0.3319 | 0.4662 | | 1.0921 | 28.3 | 6000 | 0.3056 | 0.3880 | | 1.0366 | 33.02 | 7000 | 0.2997 | 0.3665 | | 0.9988 | 37.74 | 8000 | 0.2972 | 0.3653 | | 0.9864 | 42.45 | 9000 | 0.2697 | 0.3371 | | 0.9558 | 47.17 | 10000 | 0.2739 | 0.3141 | | 0.9094 | 51.89 | 11000 | 0.2657 | 0.3533 | | 0.9034 | 56.6 | 12000 | 0.2699 | 0.3397 | | 0.8907 | 61.32 | 13000 | 0.2765 | 0.3470 | | 0.8631 | 66.04 | 14000 | 0.2774 | 0.3346 | | 0.8389 | 70.75 | 15000 | 0.2743 | 0.3365 | | 0.8214 | 75.47 | 16000 | 0.2778 | 0.3201 | | 0.8195 | 80.19 | 17000 | 0.2725 | 0.3286 | | 0.7994 | 84.91 | 18000 | 0.2782 | 0.3315 | | 0.7816 | 89.62 | 19000 | 0.2775 | 0.3363 | | 0.7816 | 94.34 | 20000 | 0.2731 | 0.3278 | | 0.7635 | 99.06 | 21000 | 0.2767 | 0.3259 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["id"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-indonesian", "results": []}]}
infinitejoy/wav2vec2-large-xls-r-300m-indonesian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "id", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #id #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-indonesian ==================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - ID dataset. It achieves the following results on the evaluation set: * Loss: 0.2759 * Wer: 0.3256 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 4000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 4000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #id #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 4000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-irish This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset. It achieves the following results on the evaluation set: - Loss: 1.1647 - Wer: 0.7296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 300.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 2.9022 | 124.94 | 500 | 2.7763 | 0.9824 | | 1.5112 | 249.94 | 1000 | 1.1736 | 0.7405 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["ga-IE"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ga-IE", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Irish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ga-IE"}, "metrics": [{"type": "wer", "value": 103.54, "name": "Test WER"}, {"type": "cer", "value": 326.923, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-irish
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ga-IE", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ga-IE" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ga-IE #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-irish =============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - GA-IE dataset. It achieves the following results on the evaluation set: * Loss: 1.1647 * Wer: 0.7296 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 64 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 200 * num\_epochs: 300.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 300.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ga-IE #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 300.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-kurdish This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KMR dataset. It achieves the following results on the evaluation set: - Loss: 0.2548 - Wer: 0.2688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.3161 | 12.27 | 2000 | 0.4199 | 0.4797 | | 1.0643 | 24.54 | 4000 | 0.2982 | 0.3721 | | 0.9718 | 36.81 | 6000 | 0.2762 | 0.3333 | | 0.8772 | 49.08 | 8000 | 0.2586 | 0.3051 | | 0.8236 | 61.35 | 10000 | 0.2575 | 0.2865 | | 0.7745 | 73.62 | 12000 | 0.2603 | 0.2816 | | 0.7297 | 85.89 | 14000 | 0.2539 | 0.2727 | | 0.7079 | 98.16 | 16000 | 0.2554 | 0.2681 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["kmr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "kmr", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Kurmanji Kurdish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "kmr"}, "metrics": [{"type": "wer", "value": 102.308, "name": "Test WER"}, {"type": "cer", "value": 538.748, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-kurdish
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "kmr", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "kmr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #kmr #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
wav2vec2-large-xls-r-300m-kurdish ================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - KMR dataset. It achieves the following results on the evaluation set: * Loss: 0.2548 * Wer: 0.2688 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #kmr #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-kyrgyz This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KY dataset. It achieves the following results on the evaluation set: - Loss: 0.5817 - Wer: 0.4096 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.5412 | 18.69 | 2000 | 0.6161 | 0.5747 | | 1.311 | 37.38 | 4000 | 0.5707 | 0.5070 | | 1.1367 | 56.07 | 6000 | 0.5372 | 0.4664 | | 0.9696 | 74.77 | 8000 | 0.5443 | 0.4328 | | 0.8163 | 93.46 | 10000 | 0.5916 | 0.4124 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["ky"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ky", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Kyrgyz", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ky"}, "metrics": [{"type": "wer", "value": 40.908, "name": "Test WER"}, {"type": "cer", "value": 10.999, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-kyrgyz
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ky", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ky" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ky #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-kyrgyz ================================ This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - KY dataset. It achieves the following results on the evaluation set: * Loss: 0.5817 * Wer: 0.4096 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ky #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-latvian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - LV dataset. It achieves the following results on the evaluation set: - Loss: 0.1892 - Wer: 0.1698 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.4235 | 12.82 | 2000 | 0.4475 | 0.4551 | | 0.9383 | 25.64 | 4000 | 0.2235 | 0.2328 | | 0.8359 | 38.46 | 6000 | 0.2004 | 0.2098 | | 0.7633 | 51.28 | 8000 | 0.1960 | 0.1882 | | 0.7001 | 64.1 | 10000 | 0.1902 | 0.1809 | | 0.652 | 76.92 | 12000 | 0.1979 | 0.1775 | | 0.6025 | 89.74 | 14000 | 0.1866 | 0.1696 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["lv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "lv", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Latvian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "lv"}, "metrics": [{"type": "wer", "value": 16.977, "name": "Test WER"}, {"type": "cer", "value": 4.23, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "lv"}, "metrics": [{"type": "wer", "value": 45.247, "name": "Test WER"}, {"type": "cer", "value": 16.924, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "lv"}, "metrics": [{"type": "wer", "value": 56.16, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-latvian
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "lv", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "lv" ]
TAGS #transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #lv #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-latvian ================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - LV dataset. It achieves the following results on the evaluation set: * Loss: 0.1892 * Wer: 0.1698 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #lv #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-lithuanian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - LT dataset. It achieves the following results on the evaluation set: - Loss: 0.1722 - Wer: 0.2486 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.6837 | 8.0 | 2000 | 0.6649 | 0.7515 | | 1.1105 | 16.0 | 4000 | 0.2386 | 0.3436 | | 1.0069 | 24.0 | 6000 | 0.2008 | 0.2968 | | 0.9417 | 32.0 | 8000 | 0.1915 | 0.2774 | | 0.887 | 40.0 | 10000 | 0.1819 | 0.2616 | | 0.8563 | 48.0 | 12000 | 0.1729 | 0.2475 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
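A minimal inference sketch with the `transformers` pipeline API is shown below. The audio path is a placeholder (any mono speech recording works; the pipeline decodes and resamples it), and the snippet is an illustration rather than part of the original training run.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-lithuanian",
)

# "sample.wav" is a placeholder path to a recording of Lithuanian speech.
result = asr("sample.wav")
print(result["text"])
```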
{"language": ["lt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "lt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Lithuanian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "lt"}, "metrics": [{"type": "wer", "value": 24.859, "name": "Test WER"}, {"type": "cer", "value": 4.764, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-lithuanian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "lt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "lt" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #lt #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-lithuanian ==================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - LT dataset. It achieves the following results on the evaluation set: * Loss: 0.1722 * Wer: 0.2486 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #lt #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-maltese This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - MT dataset. It achieves the following results on the evaluation set: - Loss: 0.2005 - Wer: 0.1897 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.2238 | 18.02 | 2000 | 0.3911 | 0.4310 | | 0.7871 | 36.04 | 4000 | 0.2063 | 0.2309 | | 0.6653 | 54.05 | 6000 | 0.1960 | 0.2091 | | 0.5861 | 72.07 | 8000 | 0.1986 | 0.2000 | | 0.5283 | 90.09 | 10000 | 0.1993 | 0.1909 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
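For lower-level control than the pipeline, inference can be run directly with the processor and CTC model. This is a hedged sketch: `sample.wav` is a placeholder mono recording, and the checkpoint is assumed to ship its processor files (the usual output of the fine-tuning script).

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "infinitejoy/wav2vec2-large-xls-r-300m-maltese"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a placeholder mono clip and resample to the 16 kHz rate XLS-R expects.
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the character vocabulary, then collapse repeats/blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```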
{"language": ["mt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "mt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Maltese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "mt"}, "metrics": [{"type": "wer", "value": 23.503, "name": "Test WER"}, {"type": "cer", "value": 5.065, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-maltese
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "mt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "mt" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #mt #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-maltese ================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - MT dataset. It achieves the following results on the evaluation set: * Loss: 0.2005 * Wer: 0.1897 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #mt #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-marathi-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset. It achieves the following results on the evaluation set: - Loss: 0.6483 - Wer: 0.6049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.671 | 22.73 | 500 | 1.3618 | 0.9499 | | 1.1599 | 45.45 | 1000 | 0.6330 | 0.6627 | | 0.8252 | 68.18 | 1500 | 0.6226 | 0.6426 | | 0.6424 | 90.91 | 2000 | 0.6359 | 0.6041 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
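The WER and CER reported above can be recomputed with the `jiwer` package once reference transcripts and predictions are collected. The two lists below are made-up placeholders, not data from this evaluation.

```python
import jiwer

# Placeholder Marathi reference transcripts and model predictions.
references = ["हे एक उदाहरण आहे", "दुसरे वाक्य आहे"]
predictions = ["हे एक उदाहरण आहे", "दुसरे वाक्य"]

wer = jiwer.wer(references, predictions)  # word error rate
cer = jiwer.cer(references, predictions)  # character error rate
print(f"WER: {wer:.2%}  CER: {cer:.2%}")
```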
{"language": ["mr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Marathi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "mr"}, "metrics": [{"type": "wer", "value": 55.716, "name": "Test WER"}, {"type": "cer", "value": 13.842, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-marathi-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "mr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #mr #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-marathi-cv8 ===================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MR dataset. It achieves the following results on the evaluation set: * Loss: 0.6483 * Wer: 0.6049 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #mr #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-mongolian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - MN dataset. It achieves the following results on the evaluation set: - Loss: 0.6003 - Wer: 0.4473 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.3677 | 15.87 | 2000 | 0.6432 | 0.6198 | | 1.1379 | 31.75 | 4000 | 0.6196 | 0.5592 | | 1.0093 | 47.62 | 6000 | 0.5828 | 0.5117 | | 0.8888 | 63.49 | 8000 | 0.5754 | 0.4822 | | 0.7985 | 79.37 | 10000 | 0.5987 | 0.4690 | | 0.697 | 95.24 | 12000 | 0.6014 | 0.4471 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
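To evaluate against the same test split, the Common Voice 7.0 Mongolian data can be loaded with `datasets` and resampled on the fly. This sketch assumes the Common Voice terms have been accepted on the Hub and that a valid auth token is configured.

```python
from datasets import Audio, load_dataset

# Gated dataset: requires accepting the Common Voice terms and an auth token.
test_set = load_dataset(
    "mozilla-foundation/common_voice_7_0", "mn", split="test", use_auth_token=True
)

# Common Voice clips are 48 kHz; cast to 16 kHz to match the model's expected input.
test_set = test_set.cast_column("audio", Audio(sampling_rate=16_000))

sample = test_set[0]
print(sample["sentence"], sample["audio"]["sampling_rate"])
```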
{"language": ["mn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mn", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Mongolian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "mn"}, "metrics": [{"type": "wer", "value": 44.709, "name": "Test WER"}, {"type": "cer", "value": 13.532, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "mn"}, "metrics": [{"type": "wer", "value": 76.643, "name": "Test WER"}, {"type": "cer", "value": 36.997, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "mn"}, "metrics": [{"type": "wer", "value": 78.45, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-mongolian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mn", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "mn" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mn #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-mongolian =================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - MN dataset. It achieves the following results on the evaluation set: * Loss: 0.6003 * Wer: 0.4473 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mn #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-odia-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset. It achieves the following results on the evaluation set: - Loss: 0.8176 - Wer: 0.5818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.3957 | 20.83 | 500 | 1.0925 | 0.8111 | | 1.0351 | 41.67 | 1000 | 0.7837 | 0.6574 | | 0.7396 | 62.5 | 1500 | 0.7674 | 0.6083 | | 0.5385 | 83.33 | 2000 | 0.8015 | 0.5812 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
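The hyperparameters listed above map onto `TrainingArguments` roughly as follows. This is a sketch for orientation only: the `output_dir`, `evaluation_strategy`, and step intervals are assumptions, not values read from the original training command.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-odia-cv8",  # illustrative path
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=3e-4,
    warmup_steps=500,
    num_train_epochs=100,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",  # assumed; matches the step-based eval table above
    save_steps=500,
    eval_steps=500,
)
```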
{"language": ["or"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-odia-cv8", "results": []}]}
infinitejoy/wav2vec2-large-xls-r-300m-odia-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "or", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "or" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #or #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-odia-cv8 ================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - OR dataset. It achieves the following results on the evaluation set: * Loss: 0.8176 * Wer: 0.5818 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #or #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-odia This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - OR dataset. It achieves the following results on the evaluation set: ``` python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config as --split test --log_outputs ``` - WER: 1.0921052631578947 - CER: 2.5547945205479454 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data Training machine details - Platform: Linux-5.11.0-37-generic-x86_64-with-glibc2.10 - CPU cores: 60 - Python version: 3.8.8 - PyTorch version: 1.10.1+cu102 - GPU is visible: True - Transformers version: 4.16.0.dev0 - Datasets version: 1.17.1.dev0 - soundfile version: 0.10.3 Training script ```bash python run_speech_recognition_ctc.py \ --dataset_name="mozilla-foundation/common_voice_7_0" \ --model_name_or_path="facebook/wav2vec2-xls-r-300m" \ --dataset_config_name="or" \ --output_dir="./wav2vec2-large-xls-r-300m-odia" \ --overwrite_output_dir \ --num_train_epochs="120" \ --per_device_train_batch_size="16" \ --per_device_eval_batch_size="16" \ --gradient_accumulation_steps="2" \ --learning_rate="7.5e-5" \ --warmup_steps="500" \ --length_column_name="input_length" \ --evaluation_strategy="steps" \ --text_column_name="sentence" \ --chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — \’ … \– \' \’ \– \ --save_steps="500" \ --eval_steps="500" \ --logging_steps="100" \ --layerdrop="0.0" \ --activation_dropout="0.1" \ --save_total_limit="3" \ --freeze_feature_encoder \ --feat_proj_dropout="0.0" \ --mask_time_prob="0.75" \ --mask_time_length="10" \ --mask_feature_prob="0.25" \ --mask_feature_length="64" \ --gradient_checkpointing \ --use_auth_token \ --fp16 \ --group_by_length \ --do_train --do_eval \ --push_to_hub ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 120.0 - mixed_precision_training: Native AMP ### Training results | | eval_loss | eval_wer | eval_runtime | eval_samples_per_second | eval_steps_per_second | epoch | |---:|------------:|-----------:|---------------:|--------------------------:|------------------------:|--------:| | 0 | 3.35224 | 0.998972 | 5.0475 | 22.189 | 1.387 | 29.41 | | 1 | 1.33679 | 0.938335 | 5.0633 | 22.12 | 1.382 | 58.82 | | 2 | 0.737202 | 0.957862 | 5.0913 | 21.998 | 1.375 | 88.24 | | 3 | 0.658212 | 0.96814 | 5.0953 | 21.981 | 1.374 | 117.65 | | 4 | 0.658 | 0.9712 | 5.0953 | 22.115 | 1.382 | 120 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
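The `--chars_to_ignore` list in the training command above implies a text normalization step before scoring. The helper below is a hedged approximation of that behaviour (a regex built from the listed characters, followed by lowercasing), not the exact code used in the run.

```python
import re

# Approximation of the --chars_to_ignore list passed to the training script above.
chars_to_ignore_regex = r"[\,\?\.\!\-\;\:\"\“\%\‘\”\�\—\’\…\–\']"

def normalize_text(sentence: str) -> str:
    """Strip ignored punctuation and lowercase, mirroring the CTC vocabulary build."""
    return re.sub(chars_to_ignore_regex, "", sentence).lower().strip()

print(normalize_text("ଓଡ଼ିଆ ଭାଷା, ନମସ୍କାର!"))
```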
{"language": ["or"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "or", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Odia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "or"}, "metrics": [{"type": "wer", "value": 97.91, "name": "Test WER"}, {"type": "cer", "value": 247.09, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-odia
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "or", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "or" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #or #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
wav2vec2-large-xls-r-300m-odia ============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - OR dataset. It achieves the following results on the evaluation set: * WER: 1.0921052631578947 * CER: 2.5547945205479454 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- Training machine details * Platform: Linux-5.11.0-37-generic-x86\_64-with-glibc2.10 * CPU cores: 60 * Python version: 3.8.8 * PyTorch version: 1.10.1+cu102 * GPU is visible: True * Transformers version: 4.16.0.dev0 * Datasets version: 1.17.1.dev0 * soundfile version: 0.10.3 Training script Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 120.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 120.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #or #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 120.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-romanian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RO dataset. It achieves the following results on the evaluation set: - Loss: 0.1167 - Wer: 0.1421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.1973 | 8.89 | 2000 | 0.4481 | 0.4849 | | 0.6005 | 17.78 | 4000 | 0.1420 | 0.1777 | | 0.5248 | 26.67 | 6000 | 0.1303 | 0.1651 | | 0.4871 | 35.56 | 8000 | 0.1207 | 0.1523 | | 0.4428 | 44.44 | 10000 | 0.1143 | 0.1425 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
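Dataset-level evaluation is usually done by mapping a transcription function over the test split. The sketch below is illustrative: it assumes a Common Voice-style `audio` column already cast to 16 kHz, and `test_set` is a placeholder name.

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "infinitejoy/wav2vec2-large-xls-r-300m-romanian"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

def transcribe(batch):
    """Add a `prediction` column to an example with a 16 kHz `audio` field."""
    audio = batch["audio"]
    inputs = processor(
        audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    batch["prediction"] = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
    return batch

# Usage, assuming `test_set` was loaded and cast to 16 kHz beforehand:
# test_set = test_set.map(transcribe)
```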
{"language": ["ro"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "ro", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Romanian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ro"}, "metrics": [{"type": "wer", "value": 14.194, "name": "Test WER"}, {"type": "cer", "value": 3.288, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ro"}, "metrics": [{"type": "wer", "value": 40.869, "name": "Test WER"}, {"type": "cer", "value": 12.049, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ro"}, "metrics": [{"type": "wer", "value": 47.2, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-romanian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "ro", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ro" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #ro #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-romanian ================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - RO dataset. It achieves the following results on the evaluation set: * Loss: 0.1167 * Wer: 0.1421 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #ro #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-romansh-sursilvan This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RM-SURSILV dataset. It achieves the following results on the evaluation set: - Loss: 0.2163 - Wer: 0.1981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 120.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 1.1004 | 23.81 | 2000 | 0.3710 | 0.4191 | | 0.7002 | 47.62 | 4000 | 0.2342 | 0.2562 | | 0.5573 | 71.43 | 6000 | 0.2175 | 0.2177 | | 0.4799 | 95.24 | 8000 | 0.2109 | 0.1987 | | 0.4511 | 119.05 | 10000 | 0.2164 | 0.1975 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
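As a rough consistency check, the size of the training split can be estimated from the table above (2000 optimizer steps at about 23.81 epochs with batch size 32). The numbers below are a back-of-the-envelope estimate, not figures reported by the card.

```python
import math

# Figures taken from the training-results table above.
train_batch_size = 32
steps_at_checkpoint = 2000
epochs_at_checkpoint = 23.81

steps_per_epoch = steps_at_checkpoint / epochs_at_checkpoint            # ~84 steps/epoch
approx_train_examples = math.ceil(steps_per_epoch) * train_batch_size   # ~2688 utterances
print(round(steps_per_epoch), approx_train_examples)
```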
{"language": ["rm-sursilv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "rm-sursilv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Romansh Sursilvan", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "rm-sursilv"}, "metrics": [{"type": "wer", "value": 19.816, "name": "Test WER"}, {"type": "cer", "value": 4.153, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-romansh-sursilvan
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "rm-sursilv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "rm-sursilv" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #rm-sursilv #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-romansh-sursilvan =========================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - RM-SURSILV dataset. It achieves the following results on the evaluation set: * Loss: 0.2163 * Wer: 0.1981 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 120.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 120.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #rm-sursilv #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 120.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-romansh-vallader This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RM-VALLADER dataset. It achieves the following results on the evaluation set: - Loss: 0.3155 - Wer: 0.3162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9556 | 15.62 | 500 | 2.9300 | 1.0 | | 1.7874 | 31.25 | 1000 | 0.7566 | 0.6509 | | 1.0131 | 46.88 | 1500 | 0.3671 | 0.3828 | | 0.8439 | 62.5 | 2000 | 0.3350 | 0.3416 | | 0.7502 | 78.12 | 2500 | 0.3155 | 0.3296 | | 0.7093 | 93.75 | 3000 | 0.3182 | 0.3186 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
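The expected input sampling rate and the CTC character set can be read directly from the checkpoint's processor, assuming the processor files were pushed together with the weights (the standard behaviour of the fine-tuning script).

```python
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained(
    "infinitejoy/wav2vec2-large-xls-r-300m-romansh-vallader"
)

# Sampling rate the checkpoint expects (16 kHz for XLS-R models).
print(processor.feature_extractor.sampling_rate)

# CTC vocabulary built from the Romansh Vallader transcripts.
vocab = processor.tokenizer.get_vocab()
print(len(vocab), sorted(vocab)[:10])
```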
{"language": ["rm-vallader"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "rm-vallader", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Romansh Vallader", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "rm-vallader"}, "metrics": [{"type": "wer", "value": 31.689, "name": "Test WER"}, {"type": "cer", "value": 7.202, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-romansh-vallader
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "rm-vallader", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "rm-vallader" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #rm-vallader #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-romansh-vallader ========================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - RM-VALLADER dataset. It achieves the following results on the evaluation set: * Loss: 0.3155 * Wer: 0.3162 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #rm-vallader #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-sakha This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SAH dataset. It achieves the following results on the evaluation set: - Loss: 0.4995 - Wer: 0.4421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.8597 | 8.47 | 500 | 0.7731 | 0.7211 | | 1.2508 | 16.95 | 1000 | 0.5368 | 0.5989 | | 1.1066 | 25.42 | 1500 | 0.5034 | 0.5533 | | 1.0064 | 33.9 | 2000 | 0.4686 | 0.5114 | | 0.9324 | 42.37 | 2500 | 0.4927 | 0.5056 | | 0.876 | 50.85 | 3000 | 0.4734 | 0.4795 | | 0.8082 | 59.32 | 3500 | 0.4748 | 0.4799 | | 0.7604 | 67.8 | 4000 | 0.4949 | 0.4691 | | 0.7241 | 76.27 | 4500 | 0.5090 | 0.4627 | | 0.6739 | 84.75 | 5000 | 0.4967 | 0.4452 | | 0.6447 | 93.22 | 5500 | 0.5071 | 0.4437 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
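The "Native AMP" entry above refers to PyTorch's automatic mixed precision. The loop below is a generic sketch of that mechanism with a stand-in module and a placeholder loss; it requires a CUDA device and is not the actual training loop used here.

```python
import torch

# Stand-in module and batch; the real run uses Wav2Vec2ForCTC and the CTC loss.
model = torch.nn.Linear(64, 32).cuda()
batch = torch.randn(8, 64).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
scaler = torch.cuda.amp.GradScaler()

for _ in range(3):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():        # forward pass runs in float16 where safe
        loss = model(batch).pow(2).mean()  # placeholder loss
    scaler.scale(loss).backward()          # scale to avoid float16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
```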
{"language": ["sah"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sah", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Sakha", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "sah"}, "metrics": [{"type": "wer", "value": 44.196, "name": "Test WER"}, {"type": "cer", "value": 10.271, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-sakha
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sah", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sah" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #sah #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-sakha =============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - SAH dataset. It achieves the following results on the evaluation set: * Loss: 0.4995 * Wer: 0.4421 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #sah #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-slovak This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SK dataset. It achieves the following results on the evaluation set: - Loss: 0.2915 - Wer: 0.2481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 3000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0076 | 19.74 | 3000 | 0.3274 | 0.3806 | | 0.6889 | 39.47 | 6000 | 0.2824 | 0.2942 | | 0.5863 | 59.21 | 9000 | 0.2700 | 0.2735 | | 0.4798 | 78.95 | 12000 | 0.2844 | 0.2602 | | 0.4399 | 98.68 | 15000 | 0.2907 | 0.2489 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
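For recordings longer than a few seconds, recent `transformers` versions can chunk the audio inside the ASR pipeline. The parameters and file name below are illustrative choices, not settings from this card.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-slovak",
    chunk_length_s=30,   # split long audio into 30 s chunks
    stride_length_s=5,   # overlap chunks so boundary words are not truncated
)

# "long_recording.wav" is a placeholder path to a long Slovak recording.
print(asr("long_recording.wav")["text"])
```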
{"language": ["sk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sk", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Slovak", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "sk"}, "metrics": [{"type": "wer", "value": 24.852, "name": "Test WER"}, {"type": "cer", "value": 5.09, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sk"}, "metrics": [{"type": "wer", "value": 56.388, "name": "Test WER"}, {"type": "cer", "value": 20.654, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sk"}, "metrics": [{"type": "wer", "value": 59.25, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-slovak
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sk", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sk" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #sk #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-slovak ================================ This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - SK dataset. It achieves the following results on the evaluation set: * Loss: 0.2915 * Wer: 0.2481 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 3000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 3000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #sk #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 3000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-slovenian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SL dataset. It achieves the following results on the evaluation set: - Loss: 0.2093 - Wer: 0.1907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.785 | 12.5 | 1000 | 0.7465 | 0.6812 | | 0.8989 | 25.0 | 2000 | 0.2495 | 0.2732 | | 0.7118 | 37.5 | 3000 | 0.2126 | 0.2284 | | 0.6367 | 50.0 | 4000 | 0.2049 | 0.2049 | | 0.5763 | 62.5 | 5000 | 0.2116 | 0.2055 | | 0.5196 | 75.0 | 6000 | 0.2111 | 0.1910 | | 0.4949 | 87.5 | 7000 | 0.2131 | 0.1931 | | 0.4797 | 100.0 | 8000 | 0.2093 | 0.1907 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": ["sl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Slovenian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "sl"}, "metrics": [{"type": "wer", "value": 18.97, "name": "Test WER"}, {"type": "cer", "value": 4.534, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 55.048, "name": "Test WER"}, {"type": "cer", "value": 22.739, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 54.81, "name": "Test WER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-slovenian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sl" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #sl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-slovenian =================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - SL dataset. It achieves the following results on the evaluation set: * Loss: 0.2093 * Wer: 0.1907 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 100.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #sl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-tatar This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TT dataset. It achieves the following results on the evaluation set: - Loss: 0.1959 - Wer: 0.2454 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.173 | 9.66 | 4000 | 0.2920 | 0.3608 | | 0.9433 | 19.32 | 8000 | 0.2336 | 0.3026 | | 0.8552 | 28.99 | 12000 | 0.2221 | 0.2799 | | 0.7863 | 38.65 | 16000 | 0.1953 | 0.2479 | | 0.7365 | 48.31 | 20000 | 0.1968 | 0.2449 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": ["tt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "tt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Tatar", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "tt"}, "metrics": [{"type": "wer", "value": 24.392, "name": "Test WER"}, {"type": "cer", "value": 5.024, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-tatar
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "tt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "tt" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #tt #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-tatar =============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - TT dataset. It achieves the following results on the evaluation set: * Loss: 0.1959 * Wer: 0.2454 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 4000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 4000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #tt #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 4000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> infinitejoy/wav2vec2-large-xls-r-300m-urdu This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: NA - Wer: NA ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test` ```bash python eval.py \ --model_id infinitejoy/wav2vec2-large-xls-r-300m-urdu --dataset mozilla-foundation/common_voice_7_0 \ --config ur --split test --chunk_length_s 10 --stride_length_s 1 ``` ### Inference ```python import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F model_id = "infinitejoy/wav2vec2-large-xls-r-300m-urdu" sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "ur", split="test", streaming=True, use_auth_token=True)) sample = next(sample_iter) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy() model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits transcription = processor.batch_decode(logits.numpy()).text ``` ### Eval results on Common Voice 7 "test" (WER):
{"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "ur"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Urdu", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 105.66, "name": "Test WER"}, {"type": "cer", "value": 434.011, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-urdu
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "ur", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ur" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #ur #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
infinitejoy/wav2vec2-large-xls-r-300m-urdu This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: NA - Wer: NA ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common_voice_7_0' with split 'test' ### Inference ### Eval results on Common Voice 7 "test" (WER):
[ "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 50.0\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.10.3", "#### Evaluation Commands\n\n1. To evaluate on 'mozilla-foundation/common_voice_7_0' with split 'test'", "### Inference", "### Eval results on Common Voice 7 \"test\" (WER):" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #ur #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 50.0\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.10.3", "#### Evaluation Commands\n\n1. To evaluate on 'mozilla-foundation/common_voice_7_0' with split 'test'", "### Inference", "### Eval results on Common Voice 7 \"test\" (WER):" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-welsh This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - CY dataset. It achieves the following results on the evaluation set: - Loss: 0.2650 - Wer: 0.2702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 3000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.3454 | 8.2 | 3000 | 0.4926 | 0.5703 | | 1.1202 | 16.39 | 6000 | 0.3529 | 0.3944 | | 1.0058 | 24.59 | 9000 | 0.3143 | 0.3341 | | 0.9287 | 32.79 | 12000 | 0.2896 | 0.2980 | | 0.8849 | 40.98 | 15000 | 0.2727 | 0.2798 | | 0.8665 | 49.18 | 18000 | 0.2662 | 0.2696 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": ["cy"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "cy", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Welsh", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "cy"}, "metrics": [{"type": "wer", "value": 31.003, "name": "Test WER"}, {"type": "cer", "value": 7.775, "name": "Test CER"}]}]}]}
infinitejoy/wav2vec2-large-xls-r-300m-welsh
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "cy", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "cy" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #cy #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-welsh =============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - CY dataset. It achieves the following results on the evaluation set: * Loss: 0.2650 * Wer: 0.2702 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 3000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 3000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #cy #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 3000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-fi-to-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset. It achieves the following results on the evaluation set: - Loss: 3.3598 - Bleu: 1.618 - Gen Len: 17.3223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 3.3627 | 1.0 | 6250 | 3.5122 | 1.2882 | 17.1803 | | 3.2162 | 2.0 | 12500 | 3.4442 | 1.4329 | 17.2617 | | 3.1304 | 3.0 | 18750 | 3.3872 | 1.4862 | 17.296 | | 3.0832 | 4.0 | 25000 | 3.3648 | 1.5795 | 17.3047 | | 3.0623 | 5.0 | 31250 | 3.3598 | 1.618 | 17.3223 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt19"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-fi-to-en", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt19", "type": "wmt19", "args": "fi-en"}, "metrics": [{"type": "bleu", "value": 1.618, "name": "Bleu"}]}]}]}
ingridnc/t5-small-finetuned-fi-to-en
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt19", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt19 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-fi-to-en =========================== This model is a fine-tuned version of t5-small on the wmt19 dataset. It achieves the following results on the evaluation set: * Loss: 3.3598 * Bleu: 1.618 * Gen Len: 17.3223 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.9.1 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt19 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
sentence-similarity
sentence-transformers
# inokufu/bertheo A [sentence-transformers](https://www.SBERT.net) model fine-tuned on course sentences. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Details This model is based on the French flaubert-base-uncased pre-trained model [1, 2]. It was first fine-tuned on our learning object (LO) sentences dataset. This dataset consists of a sample of 500k sentences of course descriptions. We used standard parameter settings for fine-tuning as mentioned in the original BERT paper [3]. This allows the model to improve its performance on the target task (Masked Language Model) for domain-specific sentences. It was then fine-tuned on a natural language inference task (XNLI) [4]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication). It was then fine-tuned on a text semantic similarity task (on STS-fr data) [5]. This task consists in training the model to estimate the similarity between two sentences. This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Apprendre le python", "Devenir expert en comptabilité"] model = SentenceTransformer('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Apprendre le python", "Devenir expert en comptabilité"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education') model = AutoModel.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results STS (fr) score: 83.05% ## Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: FlaubertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## References [1] https://hal.archives-ouvertes.fr/hal-02784776v3/document <br> [2] https://huggingface.co/flaubert/flaubert_base_uncased <br> [3] https://arxiv.org/abs/1810.04805 <br> [4] https://arxiv.org/abs/1809.05053 <br> [5] https://huggingface.co/datasets/stsb_multi_mt <br>
{"language": "fr", "tags": ["sentence-similarity", "transformers", "Education", "fr", "flaubert", "sentence-transformers", "feature-extraction", "xnli", "stsb_multi_mt"], "datasets": ["xnli", "stsb_multi_mt"], "pipeline_tag": "sentence-similarity"}
inokufu/flaubert-base-uncased-xnli-sts-finetuned-education
null
[ "sentence-transformers", "pytorch", "flaubert", "feature-extraction", "sentence-similarity", "transformers", "Education", "fr", "xnli", "stsb_multi_mt", "dataset:xnli", "dataset:stsb_multi_mt", "arxiv:1810.04805", "arxiv:1809.05053", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805", "1809.05053" ]
[ "fr" ]
TAGS #sentence-transformers #pytorch #flaubert #feature-extraction #sentence-similarity #transformers #Education #fr #xnli #stsb_multi_mt #dataset-xnli #dataset-stsb_multi_mt #arxiv-1810.04805 #arxiv-1809.05053 #endpoints_compatible #region-us
# inokufu/bertheo A sentence-transformers model fine-tuned on course sentences. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Details This model is based on the French flaubert-base-uncased pre-trained model [1, 2]. It was first fine-tuned on our learning object (LO) sentences dataset. This dataset consists of a sample of 500k sentences of course descriptions. We used standard parameter settings for fine-tuning as mentioned in the original BERT paper [3]. This allows the model to improve its performance on the target task (Masked Language Model) for domain-specific sentences. It was then fine-tuned on a natural language inference task (XNLI) [4]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication). It was then fine-tuned on a text semantic similarity task (on STS-fr data) [5]. This task consists in training the model to estimate the similarity between two sentences. This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Usage (HuggingFace Transformers) Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ## Evaluation Results STS (fr) score: 83.05% ## Model Architecture ## References [1] URL <br> [2] URL <br> [3] URL <br> [4] URL <br> [5] URL <br>
[ "# inokufu/bertheo\n\nA sentence-transformers model fine-tuned on course sentences. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Details\n\nThis model is based on the French flaubert-base-uncased pre-trained model [1, 2]. \n\nIt was first fine-tuned on our learning object (LO) sentences dataset. This dataset consists of a sample of 500k sentences of course descriptions. We used standard parameter settings for fine-tuning as mentioned in the original BERT paper [3]. This allows the model to improve its performance on the target task (Masked Language Model) for domain-specific sentences.\n\nIt was then fine-tuned on a natural language inference task (XNLI) [4]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication).\n\nIt was then fine-tuned on a text semantic similarity task (on STS-fr data) [5]. This task consists in training the model to estimate the similarity between two sentences.\n\nThis fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\nSTS (fr) score: 83.05%", "## Model Architecture", "## References\n\n[1] URL <br>\n[2] URL <br>\n[3] URL <br>\n[4] URL <br>\n[5] URL <br>" ]
[ "TAGS\n#sentence-transformers #pytorch #flaubert #feature-extraction #sentence-similarity #transformers #Education #fr #xnli #stsb_multi_mt #dataset-xnli #dataset-stsb_multi_mt #arxiv-1810.04805 #arxiv-1809.05053 #endpoints_compatible #region-us \n", "# inokufu/bertheo\n\nA sentence-transformers model fine-tuned on course sentences. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Details\n\nThis model is based on the French flaubert-base-uncased pre-trained model [1, 2]. \n\nIt was first fine-tuned on our learning object (LO) sentences dataset. This dataset consists of a sample of 500k sentences of course descriptions. We used standard parameter settings for fine-tuning as mentioned in the original BERT paper [3]. This allows the model to improve its performance on the target task (Masked Language Model) for domain-specific sentences.\n\nIt was then fine-tuned on a natural language inference task (XNLI) [4]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication).\n\nIt was then fine-tuned on a text semantic similarity task (on STS-fr data) [5]. This task consists in training the model to estimate the similarity between two sentences.\n\nThis fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\nSTS (fr) score: 83.05%", "## Model Architecture", "## References\n\n[1] URL <br>\n[2] URL <br>\n[3] URL <br>\n[4] URL <br>\n[5] URL <br>" ]
sentence-similarity
sentence-transformers
# inokufu/flaubert-base-uncased-xnli-sts This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Details This model is based on the French flaubert-base-uncased pre-trained model [1, 2]. It was then fine-tuned on a natural language inference task (XNLI) [3]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication). It was then fine-tuned on a text semantic similarity task (on STS-fr data) [4]. This task consists in training the model to estimate the similarity between two sentences. This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Apprendre le python", "Devenir expert en comptabilité"] model = SentenceTransformer('inokufu/flaubert-base-uncased-xnli-sts') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Apprendre le python", "Devenir expert en comptabilité"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts') model = AutoModel.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results STS (fr) score: 83.07% ## Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: FlaubertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## References [1] https://hal.archives-ouvertes.fr/hal-02784776v3/document <br> [2] https://huggingface.co/flaubert/flaubert_base_uncased <br> [3] https://arxiv.org/abs/1809.05053 <br> [4] https://huggingface.co/datasets/stsb_multi_mt <br>
{"language": "fr", "tags": ["sentence-similarity", "transformers", "fr", "flaubert", "sentence-transformers", "feature-extraction", "xnli", "stsb_multi_mt"], "datasets": ["xnli", "stsb_multi_mt"], "pipeline_tag": "sentence-similarity"}
inokufu/flaubert-base-uncased-xnli-sts
null
[ "sentence-transformers", "pytorch", "flaubert", "feature-extraction", "sentence-similarity", "transformers", "fr", "xnli", "stsb_multi_mt", "dataset:xnli", "dataset:stsb_multi_mt", "arxiv:1809.05053", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1809.05053" ]
[ "fr" ]
TAGS #sentence-transformers #pytorch #flaubert #feature-extraction #sentence-similarity #transformers #fr #xnli #stsb_multi_mt #dataset-xnli #dataset-stsb_multi_mt #arxiv-1809.05053 #endpoints_compatible #region-us
# inokufu/flaubert-base-uncased-xnli-sts This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Details This model is based on the French flaubert-base-uncased pre-trained model [1, 2]. It was then fine-tuned on a natural language inference task (XNLI) [3]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication). It was then fine-tuned on a text semantic similarity task (on STS-fr data) [4]. This task consists in training the model to estimate the similarity between two sentences. This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Usage (HuggingFace Transformers) Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ## Evaluation Results STS (fr) score: 83.07% ## Model Architecture ## References [1] URL <br> [2] URL <br> [3] URL <br> [4] URL <br>
[ "# inokufu/flaubert-base-uncased-xnli-sts\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Details\n\nThis model is based on the French flaubert-base-uncased pre-trained model [1, 2].\n\nIt was then fine-tuned on a natural language inference task (XNLI) [3]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication).\n\nIt was then fine-tuned on a text semantic similarity task (on STS-fr data) [4]. This task consists in training the model to estimate the similarity between two sentences.\n\nThis fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\n\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\nSTS (fr) score: 83.07%", "## Model Architecture", "## References\n\n[1] URL <br>\n[2] URL <br>\n[3] URL <br>\n[4] URL <br>" ]
[ "TAGS\n#sentence-transformers #pytorch #flaubert #feature-extraction #sentence-similarity #transformers #fr #xnli #stsb_multi_mt #dataset-xnli #dataset-stsb_multi_mt #arxiv-1809.05053 #endpoints_compatible #region-us \n", "# inokufu/flaubert-base-uncased-xnli-sts\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Details\n\nThis model is based on the French flaubert-base-uncased pre-trained model [1, 2].\n\nIt was then fine-tuned on a natural language inference task (XNLI) [3]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication).\n\nIt was then fine-tuned on a text semantic similarity task (on STS-fr data) [4]. This task consists in training the model to estimate the similarity between two sentences.\n\nThis fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\n\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\nSTS (fr) score: 83.07%", "## Model Architecture", "## References\n\n[1] URL <br>\n[2] URL <br>\n[3] URL <br>\n[4] URL <br>" ]
text-classification
transformers
# Multi2ConvAI-Corona: finetuned Bert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-de-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-de-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
{"language": "de", "license": "mit", "tags": ["text-classification", "pytorch", "transformers"], "widget": [{"text": "Muss ich eine Maske tragen?"}]}
inovex/multi2convai-corona-de-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #bert #text-classification #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Corona: finetuned Bert for German This model was developed in the Multi2ConvAI project: - domain: Corona (more details about our use cases: (en, de)) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Corona: finetuned Bert for German \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Corona: finetuned Bert for German \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
null
# Multi2ConvAI-Corona: German logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-corona-de-logreg-ft >>> Create pipeline for config: multi2convai-corona-de-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'corona' and language 'de'. >>> >>> Enter your text (type 'stop' to end execution): Muss ich eine Maske tragen? >>> 'Muss ich eine Maske tragen?' was classified as 'corona.masks' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "de" domain = "corona" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Muss ich eine Maske tragen?") label >>> Label(string='corona.masks', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/de curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
{"language": "de", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Hosted inference API not supported"}]}
inovex/multi2convai-corona-de-logreg-ft
null
[ "text-classification", "de", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #text-classification #de #license-mit #region-us
# Multi2ConvAI-Corona: German logistic regression model using fasttext embeddings This model was developed in the Multi2ConvAI project: - domain: Corona (more details about our use cases: (en, de)) - language: German (de) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - multi2convai - serialized fastText embeddings (see last section of this readme or these instructions) ### Run with one line of code After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### How to run model using multi2convai After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### Download and serialize fastText ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Corona: German logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#text-classification #de #license-mit #region-us \n", "# Multi2ConvAI-Corona: German logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Corona: finetuned Bert for English 

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: English (en)
- model type: finetuned Bert

## How to run

Requires: 
- Huggingface transformers

### Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-en-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-en-bert")
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
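A minimal inference sketch building on the `tokenizer` and `model` loaded above, using the card's widget query; the printed intent name comes from the model's `id2label` config and may be a generic `LABEL_k` entry if no mapping was exported.

````python
import torch

# Tokenize the example query from the card's widget
inputs = tokenizer("Do I need to wear a mask?", return_tensors="pt")

# Score the query without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top class index back to its configured label name
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
````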
{"language": "en", "license": "mit", "tags": ["text-classification", "pytorch", "transformers"], "widget": [{"text": "Do I need to wear a mask?"}]}
inovex/multi2convai-corona-en-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Corona: finetuned Bert for English This model was developed in the Multi2ConvAI project: - domain: Corona (more details about our use cases: (en, de)) - language: English (en) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Corona: finetuned Bert for English \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Corona: finetuned Bert for English \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
null
# Multi2ConvAI-Corona: English logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-corona-en-logreg-ft >>> Create pipeline for config: multi2convai-corona-en-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'corona' and language 'en'. >>> >>> Enter your text (type 'stop' to end execution): Do I need to wear a mask? >>> 'Do I need to wear a mask?' was classified as 'corona.masks' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "en" domain = "corona" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/en/wiki.200k.en.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/en/wiki.200k.en.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Do I need to wear a mask?") label >>> Label(string='corona.masks', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/en curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec --output models/fasttext/en/wiki.en.vec python scripts/serialize_fasttext.py -r fasttext/wiki.en.vec -v fasttext/en/wiki.200k.en.vocab -e fasttext/en/wiki.200k.en.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
{"language": "en", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Hosted inference API not supported"}]}
inovex/multi2convai-corona-en-logreg-ft
null
[ "text-classification", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #text-classification #en #license-mit #region-us
# Multi2ConvAI-Corona: English logistic regression model using fasttext embeddings This model was developed in the Multi2ConvAI project: - domain: Corona (more details about our use cases: (en, de)) - language: English (en) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - multi2convai - serialized fastText embeddings (see last section of this readme or these instructions) ### Run with one line of code After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### How to run model using multi2convai After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### Download and serialize fastText ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Corona: English logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#text-classification #en #license-mit #region-us \n", "# Multi2ConvAI-Corona: English logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Corona: finetuned Bert for French 

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: French (fr)
- model type: finetuned Bert

## How to run

Requires: 
- Huggingface transformers

### Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-fr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-fr-bert")
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
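A short end-to-end sketch, assuming only the checkpoint id above: the `pipeline` helper wraps tokenization and scoring, and the label string it returns is whatever the model's config defines (possibly a generic `LABEL_k`).

````python
from transformers import pipeline

# Build a text-classification pipeline directly from the hub id
classifier = pipeline("text-classification", model="inovex/multi2convai-corona-fr-bert")

# Classify the card's widget query; returns a list like [{'label': ..., 'score': ...}]
print(classifier("Dois-je porter un masque?"))
````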
{"language": "fr", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Dois-je porter un masque?"}]}
inovex/multi2convai-corona-fr-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #bert #text-classification #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Corona: finetuned Bert for French This model was developed in the Multi2ConvAI project: - domain: Corona (more details about our use cases: (en, de)) - language: French (fr) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Corona: finetuned Bert for French \n\nThis model was developed in the Multi2ConvAI project:\n- domain: Corona (more details about our use cases: (en, de))\n- language: French (fr)\n- model type: finetuned Bert", "## How to run\n\nRequires: \n- Huggingface transformers", "### Run with Huggingface Transformers\n\n'", "## Further information on Multi2ConvAI:\n- URL\n- URL\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Corona: finetuned Bert for French \n\nThis model was developed in the Multi2ConvAI project:\n- domain: Corona (more details about our use cases: (en, de))\n- language: French (fr)\n- model type: finetuned Bert", "## How to run\n\nRequires: \n- Huggingface transformers", "### Run with Huggingface Transformers\n\n'", "## Further information on Multi2ConvAI:\n- URL\n- URL\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Corona: finetuned Bert for Italian 

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: Italian (it)
- model type: finetuned Bert

## How to run

Requires: 
- Huggingface transformers

### Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-it-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-it-bert")
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
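A minimal sketch of running a prediction with the `tokenizer` and `model` loaded above, fed with the card's widget query; the label name is looked up in the model's `id2label` config, which is not documented here.

````python
import torch

# Tokenize the example query from the card's widget
inputs = tokenizer("Devo indossare una maschera?", return_tensors="pt")

# Score the query without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Look up the top class in the model's label mapping
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
````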
{"language": "it", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Devo indossare una maschera?"}]}
inovex/multi2convai-corona-it-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Corona: finetuned Bert for Italian This model was developed in the Multi2ConvAI project: - domain: Corona (more details about our use cases: (en, de)) - language: Italian (it) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Corona: finetuned Bert for Italian \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: Italian (it)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Corona: finetuned Bert for Italian \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Corona (more details about our use cases: (en, de))\r\n- language: Italian (it)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Logistics: finetuned Bert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-de-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-de-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
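A short usage sketch, assuming the checkpoint id above; `pipeline` takes care of tokenization and softmax, and the returned label string depends on the model's config.

````python
from transformers import pipeline

# Build a text-classification pipeline directly from the hub id
classifier = pipeline("text-classification", model="inovex/multi2convai-logistics-de-bert")

# Classify the card's widget query; returns a list like [{'label': ..., 'score': ...}]
print(classifier("Wo kann ich das Paket ablegen?"))
````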
{"language": "de", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Wo kann ich das Paket ablegen?"}]}
inovex/multi2convai-logistics-de-bert
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Logistics: finetuned Bert for German This model was developed in the Multi2ConvAI project: - domain: Logistics (more details about our use cases: (en, de)) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Logistics: finetuned Bert for German \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Logistics: finetuned Bert for German \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
null
# Multi2ConvAI-Logistics: German logistic regression model using fasttext embeddings

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: German (de)
- model type: logistic regression
- embeddings: fastText embeddings

## How to run

Requires: 
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))

### Run with one line of code

After installing `multi2convai` and locally available fastText embeddings you can run:

````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-logistics-de-logreg-ft

>>> Create pipeline for config: multi2convai-logistics-de-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'logistics' and language 'de'.
>>>
>>> Enter your text (type 'stop' to end execution): Wo kann ich das Paket ablegen?
>>> 'Wo kann ich das Paket ablegen?' was classified as 'details.safeplace' (confidence: 0.8943)
````

### How to run model using multi2convai 

After installing `multi2convai` and locally available fastText embeddings you can run:

````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
    LogisticRegressionFasttextConfig,
    LogisticRegressionFasttextPipeline,
)

language = "de"
domain = "logistics"

# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"

embedding_path = Path(
    f"../models/embeddings/fasttext/de/wiki.200k.de.embed"
)
vocabulary_path = Path(
    f"../models/embeddings/fasttext/de/wiki.200k.de.vocab"
)

# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
    model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)

pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()

# 3. Run intent classification on a text of your choice
label = pipeline.run("Wo kann ich das Paket ablegen?")

label
>>> Label(string='details.safeplace', ratio='0.8943')
````

### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir models/fasttext/de
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec

python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
{"language": "de", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Hosted inference API not supported"}]}
inovex/multi2convai-logistics-de-logreg-ft
null
[ "text-classification", "de", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #text-classification #de #license-mit #region-us
# Multi2ConvAI-Logistics: German logistic regression model using fasttext embeddings This model was developed in the Multi2ConvAI project: - domain: Logistics (more details about our use cases: (en, de)) - language: German (de) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - multi2convai - serialized fastText embeddings (see last section of this readme or these instructions) ### Run with one line of code After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### How to run model using multi2convai After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### Download and serialize fastText ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Logistics: German logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#text-classification #de #license-mit #region-us \n", "# Multi2ConvAI-Logistics: German logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Logistics: finetuned Bert for English This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-en-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-en-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
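A minimal follow-up sketch that reuses the `tokenizer` and `model` loaded above on the card's widget query; the intent name is read from the model's `id2label` config and is not documented in this card.

````python
import torch

# Tokenize the example query from the card's widget
inputs = tokenizer("Where can I put the parcel?", return_tensors="pt")

# Score the query without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top class index back to its configured label name
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
````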
{"language": "en", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Where can I put the parcel?"}]}
inovex/multi2convai-logistics-en-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Logistics: finetuned Bert for English This model was developed in the Multi2ConvAI project: - domain: Logistics (more details about our use cases: (en, de)) - language: English (en) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Logistics: finetuned Bert for English \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Logistics: finetuned Bert for English \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
null
# Multi2ConvAI-Logistics: English logistic regression model using fasttext embeddings

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: English (en)
- model type: logistic regression
- embeddings: fastText embeddings

## How to run

Requires: 
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))

### Run with one line of code

After installing `multi2convai` and locally available fastText embeddings you can run:

````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-logistics-en-logreg-ft

>>> Create pipeline for config: multi2convai-logistics-en-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'logistics' and language 'en'.
>>>
>>> Enter your text (type 'stop' to end execution): Where can I put the parcel?
>>> 'Where can I put the parcel?' was classified as 'details.safeplace' (confidence: 0.8943)
````

### How to run model using multi2convai 

After installing `multi2convai` and locally available fastText embeddings you can run:

````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
    LogisticRegressionFasttextConfig,
    LogisticRegressionFasttextPipeline,
)

language = "en"
domain = "logistics"

# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"

embedding_path = Path(
    f"../models/embeddings/fasttext/en/wiki.200k.en.embed"
)
vocabulary_path = Path(
    f"../models/embeddings/fasttext/en/wiki.200k.en.vocab"
)

# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
    model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)

pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()

# 3. Run intent classification on a text of your choice
label = pipeline.run("Where can I put the parcel?")

label
>>> Label(string='details.safeplace', ratio='0.8943')
````

### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir models/fasttext/en
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec --output models/fasttext/en/wiki.en.vec

python scripts/serialize_fasttext.py -r fasttext/wiki.en.vec -v fasttext/en/wiki.200k.en.vocab -e fasttext/en/wiki.200k.en.embed -n 200000
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
{"language": "en", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Hosted inference API not supported"}]}
inovex/multi2convai-logistics-en-logreg-ft
null
[ "text-classification", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #text-classification #en #license-mit #region-us
# Multi2ConvAI-Logistics: English logistic regression model using fasttext embeddings This model was developed in the Multi2ConvAI project: - domain: Logistics (more details about our use cases: (en, de)) - language: English (en) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - multi2convai - serialized fastText embeddings (see last section of this readme or these instructions) ### Run with one line of code After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### How to run model using multi2convai After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### Download and serialize fastText ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Logistics: English logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#text-classification #en #license-mit #region-us \n", "# Multi2ConvAI-Logistics: English logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Logistics: finetuned Bert for Croatian This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: Croatian (hr) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-hr-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-hr-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
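A short sketch, assuming the checkpoint id above, that classifies the card's widget query via `pipeline`; the label string comes from the model's config.

````python
from transformers import pipeline

# Build a text-classification pipeline directly from the hub id
classifier = pipeline("text-classification", model="inovex/multi2convai-logistics-hr-bert")

# Classify the card's widget query; returns a list like [{'label': ..., 'score': ...}]
print(classifier("gdje mogu staviti paket?"))
````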
{"language": "hr", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "gdje mogu staviti paket?"}]}
inovex/multi2convai-logistics-hr-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "hr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hr" ]
TAGS #transformers #pytorch #bert #text-classification #hr #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Logistics: finetuned Bert for Croatian This model was developed in the Multi2ConvAI project: - domain: Logistics (more details about our use cases: (en, de)) - language: Croatian (hr) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Logistics: finetuned Bert for Croatian \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: Croatian (hr)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #hr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Logistics: finetuned Bert for Croatian \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: Croatian (hr)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Logistics: finetuned Bert for Polish This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: Polish (pl) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-pl-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-pl-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
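A minimal inference sketch continuing from the `tokenizer` and `model` loaded above, using the card's widget query; the printed label is whatever the model's `id2label` config maps the top class to.

````python
import torch

# Tokenize the example query from the card's widget
inputs = tokenizer("gdzie mogę umieścić paczkę?", return_tensors="pt")

# Score the query without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top class index back to its configured label name
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
````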
{"language": "pl", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "gdzie mog\u0119 umie\u015bci\u0107 paczk\u0119?"}]}
inovex/multi2convai-logistics-pl-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "pl", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "pl" ]
TAGS #transformers #pytorch #bert #text-classification #pl #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Logistics: finetuned Bert for Polish This model was developed in the Multi2ConvAI project: - domain: Logistics (more details about our use cases: (en, de)) - language: Polish (pl) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Logistics: finetuned Bert for Polish \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: Polish (pl)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #pl #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Logistics: finetuned Bert for Polish \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: Polish (pl)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Logistics: finetuned Bert for Turkish This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: Turkish (tr) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-tr-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-tr-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
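A short `pipeline`-based sketch, assuming the checkpoint id above; the returned label string is defined by the model's config and is not documented here.

````python
from transformers import pipeline

# Build a text-classification pipeline directly from the hub id
classifier = pipeline("text-classification", model="inovex/multi2convai-logistics-tr-bert")

# Classify the card's widget query; returns a list like [{'label': ..., 'score': ...}]
print(classifier("paketi nereye koyabilirim?"))
````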
{"language": "tr", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "paketi nereye koyabilirim?"}]}
inovex/multi2convai-logistics-tr-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "tr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "tr" ]
TAGS #transformers #pytorch #bert #text-classification #tr #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Logistics: finetuned Bert for Turkish This model was developed in the Multi2ConvAI project: - domain: Logistics (more details about our use cases: (en, de)) - language: Turkish (tr) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Logistics: finetuned Bert for Turkish \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: Turkish (tr)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #tr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Logistics: finetuned Bert for Turkish \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Logistics (more details about our use cases: (en, de))\r\n- language: Turkish (tr)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Quality: finetuned Bert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
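A minimal sketch that runs the card's widget query through the `tokenizer` and `model` loaded above; the intent name depends on the model's `id2label` config.

````python
import torch

# Tokenize the example query from the card's widget
inputs = tokenizer("Starte das Programm", return_tensors="pt")

# Score the query without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Look up the top class in the model's label mapping
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
````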
{"language": "de", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Starte das Programm"}]}
inovex/multi2convai-quality-de-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #bert #text-classification #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Quality: finetuned Bert for German This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: finetuned Bert for German \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Quality: finetuned Bert for German \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
null
# Multi2ConvAI-Quality: German logistic regression model using fasttext embeddings

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: German (de)
- model type: logistic regression
- embeddings: fastText embeddings

## How to run

Requires: 
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))

### Run with one line of code

After installing `multi2convai` and locally available fastText embeddings you can run:

````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-quality-de-logreg-ft

>>> Create pipeline for config: multi2convai-quality-de-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'de'.
>>>
>>> Enter your text (type 'stop' to end execution): Starte das Programm
>>> 'Starte das Programm' was classified as 'neo.start' (confidence: 0.8943)
````

### How to run model using multi2convai 

After installing `multi2convai` and locally available fastText embeddings you can run:

````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
    LogisticRegressionFasttextConfig,
    LogisticRegressionFasttextPipeline,
)

language = "de"
domain = "quality"

# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"

embedding_path = Path(
    f"../models/embeddings/fasttext/de/wiki.200k.de.embed"
)
vocabulary_path = Path(
    f"../models/embeddings/fasttext/de/wiki.200k.de.vocab"
)

# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
    model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)

pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()

# 3. Run intent classification on a text of your choice
label = pipeline.run("Starte das Programm")

label
>>> Label(string='neo.start', ratio='0.8943')
````

### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir models/fasttext/de
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec

python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
{"language": "de", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Hosted inference API not supported"}]}
inovex/multi2convai-quality-de-logreg-ft
null
[ "text-classification", "de", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #text-classification #de #license-mit #region-us
# Multi2ConvAI-Quality: German logistic regression model using fasttext embeddings

This model was developed in the Multi2ConvAI project:
- domain: Quality (more details about our use cases: (en, de))
- language: German (de)
- model type: logistic regression
- embeddings: fastText embeddings

## How to run

Requires: 
- multi2convai
- serialized fastText embeddings (see last section of this readme or these instructions)

### Run with one line of code

After installing 'multi2convai' and locally available fastText embeddings you can run:

'

### How to run model using multi2convai 

After installing 'multi2convai' and locally available fastText embeddings you can run:

'

### Download and serialize fastText
'

## Further information on Multi2ConvAI:
- URL
- URL
- mailto: info@URL
[ "# Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#text-classification #de #license-mit #region-us \n", "# Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Quality: finetuned MBert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-mbert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-mbert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
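A short usage sketch, assuming the checkpoint id above; `pipeline` handles tokenization and scoring, and the label string it prints comes from the model's config.

````python
from transformers import pipeline

# Build a text-classification pipeline directly from the hub id
classifier = pipeline("text-classification", model="inovex/multi2convai-quality-de-mbert")

# Classify the card's widget query; returns a list like [{'label': ..., 'score': ...}]
print(classifier("Starte das Programm"))
````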
{"language": "de", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Starte das Programm"}]}
inovex/multi2convai-quality-de-mbert
null
[ "transformers", "pytorch", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #bert #text-classification #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Quality: finetuned MBert for German This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: German (de) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: finetuned MBert for German \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: finetuned MBert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Quality: finetuned MBert for German \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: German (de)\r\n- model type: finetuned MBert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Quality: finetuned Bert for English This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-en-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-en-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
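A minimal inference sketch reusing the `tokenizer` and `model` loaded above on the card's widget query; the label name is taken from the model's `id2label` config and may be a generic `LABEL_k`.

````python
import torch

# Tokenize the example query from the card's widget
inputs = tokenizer("Start the program", return_tensors="pt")

# Score the query without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top class index back to its configured label name
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
````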
{"language": "en", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Start the program"}]}
inovex/multi2convai-quality-en-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Quality: finetuned Bert for English This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: English (en) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: finetuned Bert for English \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Quality: finetuned Bert for English \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
null
# Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-quality-en-logreg-ft >>> Create pipeline for config: multi2convai-quality-en-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'en'. >>> >>> Enter your text (type 'stop' to end execution): Start the program >>> 'Start the program' was classified as 'neo.start' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "en" domain = "quality" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/en/wiki.200k.en.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/en/wiki.200k.en.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Start the program") label >>> Label(string='neo.start', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/en curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec --output models/fasttext/en/wiki.en.vec python scripts/serialize_fasttext.py -r fasttext/wiki.en.vec -v fasttext/en/wiki.200k.en.vocab -e fasttext/en/wiki.200k.en.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
{"language": "en", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Hosted inference API not supported"}]}
inovex/multi2convai-quality-en-logreg-ft
null
[ "text-classification", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #text-classification #en #license-mit #region-us
# Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: English (en) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - multi2convai - serialized fastText embeddings (see last section of this readme or these instructions) ### Run with one line of code After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### How to run model using multi2convai After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### Download and serialize fastText ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#text-classification #en #license-mit #region-us \n", "# Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Quality: finetuned MBert for English

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: English (en)
- model type: finetuned MBert

## How to run

Requires:
- Huggingface transformers

### Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-en-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-en-mbert")
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
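The card's transformers snippet stops after loading the tokenizer and model. As a minimal, hypothetical sketch of running a single intent classification with those objects (the printed intent name is an assumption; the real mapping comes from the model's `config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-en-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-en-mbert")

# Tokenize a single utterance and run a forward pass without gradients
inputs = tokenizer("Start the program", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to an intent name via the model config
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # e.g. an intent such as 'neo.start' (assumed)
```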
{"language": "en", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Start the program"}]}
inovex/multi2convai-quality-en-mbert
null
[ "transformers", "pytorch", "bert", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Quality: finetuned MBert for English This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: English (en) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: finetuned MBert for English \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: finetuned MBert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Quality: finetuned MBert for English \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: English (en)\r\n- model type: finetuned MBert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Quality: finetuned Bert for French

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: French (fr)
- model type: finetuned Bert

## How to run

Requires:
- Huggingface transformers

### Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-bert")
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
{"language": "fr", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Lancer le programme"}]}
inovex/multi2convai-quality-fr-bert
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Quality: finetuned Bert for French This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: French (fr) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: finetuned Bert for French \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: French (fr)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Quality: finetuned Bert for French \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: French (fr)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
null
# Multi2ConvAI-Quality: French logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: French (fr) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-quality-fr-logreg-ft >>> Create pipeline for config: multi2convai-quality-fr-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'fr'. >>> >>> Enter your text (type 'stop' to end execution): Lancer le programme >>> 'Lancer le programme' was classified as 'neo.start' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "fr" domain = "quality" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/fr/wiki.200k.fr.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/fr/wiki.200k.fr.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Lancer le programme") label >>> Label(string='neo.start', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/fr curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.fr.vec --output models/fasttext/fr/wiki.fr.vec python scripts/serialize_fasttext.py -r fasttext/wiki.fr.vec -v fasttext/fr/wiki.200k.fr.vocab -e fasttext/fr/wiki.200k.fr.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
{"language": "fr", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Hosted inference API not supported"}]}
inovex/multi2convai-quality-fr-logreg-ft
null
[ "text-classification", "fr", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fr" ]
TAGS #text-classification #fr #license-mit #region-us
# Multi2ConvAI-Quality: French logistic regression model using fasttext embeddings This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: French (fr) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - multi2convai - serialized fastText embeddings (see last section of this readme or these instructions) ### Run with one line of code After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### How to run model using multi2convai After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### Download and serialize fastText ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: French logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: French (fr)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#text-classification #fr #license-mit #region-us \n", "# Multi2ConvAI-Quality: French logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: French (fr)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Quality: finetuned MBert for French

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: French (fr)
- model type: finetuned MBert

## How to run

Requires:
- Huggingface transformers

### Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-mbert")
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
{"language": "fr", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Lancer le programme"}]}
inovex/multi2convai-quality-fr-mbert
null
[ "transformers", "pytorch", "bert", "text-classification", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #bert #text-classification #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Quality: finetuned MBert for French This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: French (fr) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: finetuned MBert for French \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: French (fr)\r\n- model type: finetuned MBert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Quality: finetuned MBert for French \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: French (fr)\r\n- model type: finetuned MBert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Quality: finetuned Bert for Italian

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: Italian (it)
- model type: finetuned Bert

## How to run

Requires:
- Huggingface transformers

### Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-bert")
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
{"language": "it", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Avviare il programma"}]}
inovex/multi2convai-quality-it-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Quality: finetuned Bert for Italian This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: Italian (it) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: finetuned Bert for Italian \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: Italian (it)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Quality: finetuned Bert for Italian \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: Italian (it)\r\n- model type: finetuned Bert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
null
# Multi2ConvAI-Quality: Italian logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: Italian (ml) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-quality-it-logreg-ft >>> Create pipeline for config: multi2convai-quality-it-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'it'. >>> >>> Enter your text (type 'stop' to end execution): Avviare il programma >>> 'Avviare il programma' was classified as 'neo.start' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "it" domain = "quality" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/it/wiki.200k.it.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/it/wiki.200k.it.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Avviare il programma") label >>> Label(string='neo.start', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/it curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.it.vec --output models/fasttext/it/wiki.it.vec python scripts/serialize_fasttext.py -r fasttext/wiki.it.vec -v fasttext/it/wiki.200k.it.vocab -e fasttext/it/wiki.200k.it.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
{"language": "it", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Hosted inference API not supported"}]}
inovex/multi2convai-quality-it-logreg-ft
null
[ "text-classification", "it", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "it" ]
TAGS #text-classification #it #license-mit #region-us
# Multi2ConvAI-Quality: Italian logistic regression model using fasttext embeddings This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: Italian (ml) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - multi2convai - serialized fastText embeddings (see last section of this readme or these instructions) ### Run with one line of code After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### How to run model using multi2convai After installing 'multi2convai' and locally available fastText embeddings you can run: ' ### Download and serialize fastText ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: Italian logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: Italian (ml)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#text-classification #it #license-mit #region-us \n", "# Multi2ConvAI-Quality: Italian logistic regression model using fasttext embeddings\r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: Italian (ml)\r\n- model type: logistic regression\r\n- embeddings: fastText embeddings", "## How to run\r\n\r\nRequires: \r\n- multi2convai\r\n- serialized fastText embeddings (see last section of this readme or these instructions)", "### Run with one line of code\r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### How to run model using multi2convai \r\n\r\nAfter installing 'multi2convai' and locally available fastText embeddings you can run:\r\n\r\n'", "### Download and serialize fastText\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-classification
transformers
# Multi2ConvAI-Quality: finetuned MBert for Italian

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: Italian (it)
- model type: finetuned MBert

## How to run

Requires:
- Huggingface transformers

### Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-mbert")
````

## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai
{"language": "it", "license": "mit", "tags": ["text-classification"], "widget": [{"text": "Avviare il programma"}]}
inovex/multi2convai-quality-it-mbert
null
[ "transformers", "pytorch", "bert", "text-classification", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Multi2ConvAI-Quality: finetuned MBert for Italian This model was developed in the Multi2ConvAI project: - domain: Quality (more details about our use cases: (en, de)) - language: Italian (it) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ' ## Further information on Multi2ConvAI: - URL - URL - mailto: info@URL
[ "# Multi2ConvAI-Quality: finetuned MBert for Italian \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: Italian (it)\r\n- model type: finetuned MBert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Multi2ConvAI-Quality: finetuned MBert for Italian \r\n\r\nThis model was developed in the Multi2ConvAI project:\r\n- domain: Quality (more details about our use cases: (en, de))\r\n- language: Italian (it)\r\n- model type: finetuned MBert", "## How to run\r\n\r\nRequires: \r\n- Huggingface transformers", "### Run with Huggingface Transformers\r\n\r\n'", "## Further information on Multi2ConvAI:\r\n- URL\r\n- URL\r\n- mailto: info@URL" ]
text-generation
transformers
hello
{}
inspectorsolaris/gpt2_french
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
hello
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
null
hello
{}
insub/vectorizing_BART
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
hello
[]
[ "TAGS\n#region-us \n" ]
text-generation
null
# ettengiv DialoGPT Model
{"tags": ["conversational"]}
myynirew/DialoGPT-medium-ettengiv
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #conversational #region-us
# ettengiv DialoGPT Model
[ "# ettengiv DialoGPT Model" ]
[ "TAGS\n#conversational #region-us \n", "# ettengiv DialoGPT Model" ]
text-generation
transformers
# leirbag DialoGPT Model
{"tags": ["conversational"]}
myynirew/DialoGPT-medium-leirbag
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# leirbag DialoGPT Model
[ "# leirbag DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# leirbag DialoGPT Model" ]
text-generation
transformers
# awazimuruk DialoGPT Model
{"tags": ["conversational"]}
myynirew/DialoGPT-small-awazimuruk
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# awazimuruk DialoGPT Model
[ "# awazimuruk DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# awazimuruk DialoGPT Model" ]
text-generation
transformers
# Sh0rtiAI v2 DialoGPT Model
{"tags": ["conversational"]}
ionite/DialoGPT-large-Sh0rtiAI-v2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Sh0rtiAI v2 DialoGPT Model
[ "# Sh0rtiAI v2 DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Sh0rtiAI v2 DialoGPT Model" ]
text-generation
transformers
# IoniteAI DialoGPT Model
{"tags": ["conversational"]}
ionite/DialoGPT-medium-IoniteAI
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# IoniteAI DialoGPT Model
[ "# IoniteAI DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# IoniteAI DialoGPT Model" ]
text-generation
transformers
# McKayAI DialoGPT Model
{"tags": ["conversational"]}
ionite/DialoGPT-medium-McKayAI-v2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# McKayAI DialoGPT Model
[ "# McKayAI DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# McKayAI DialoGPT Model" ]
text-generation
transformers
# McKayAI DialoGPT Model
{"tags": ["conversational"]}
ionite/DialoGPT-medium-McKayAI
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# McKayAI DialoGPT Model
[ "# McKayAI DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# McKayAI DialoGPT Model" ]
text-generation
transformers
# Sh0rtiAI DialoGPT Model
{"tags": ["conversational"]}
ionite/DialoGPT-medium-Sh0rtiAI
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Sh0rtiAI DialoGPT Model
[ "# Sh0rtiAI DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Sh0rtiAI DialoGPT Model" ]
text-generation
transformers
# mohnjilesAI DialoGPT Model
{"tags": ["conversational"]}
ionite/DialoGPT-medium-mohnjilesAI
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mohnjilesAI DialoGPT Model
[ "# mohnjilesAI DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mohnjilesAI DialoGPT Model" ]
text-generation
transformers
# orangeAI DialoGPT Model
{"tags": ["conversational"]}
ionite/DialoGPT-medium-orangeAI
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# orangeAI DialoGPT Model
[ "# orangeAI DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# orangeAI DialoGPT Model" ]
text-classification
transformers
## FinBERT

Code for importing and using this model is available [here](https://github.com/ipuneetrathore/BERT_models)
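The card defers to an external repository for usage. As a rough, hedged sketch (assuming the checkpoint exposes a standard sequence-classification head, with whatever labels its author stored in `config.id2label`), it could be loaded through the transformers pipeline:

```python
from transformers import pipeline

# Assumption: the checkpoint works with the generic text-classification pipeline;
# the exact label names are model-specific and not documented on this card.
finbert = pipeline("text-classification", model="ipuneetrathore/bert-base-cased-finetuned-finBERT")

print(finbert("Quarterly revenue grew 20% year over year."))
# -> [{'label': <model-specific label>, 'score': ...}]
```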
{}
ipuneetrathore/bert-base-cased-finetuned-finBERT
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
## FinBERT

Code for importing and using this model is available here
[ "## FinBERT\n\nCode for importing and using this model is available here" ]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "## FinBERT\n\nCode for importing and using this model is available here" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
ironman123/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
# bert-base-uncased finetuned on MNLI

## Model Details and Training Data

We used the pretrained model from [bert-base-uncased](https://huggingface.co/bert-base-uncased) and finetuned it on the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.

The training parameters were kept the same as in [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).

## Evaluation Results

The evaluation results are reported in the table below.

| Test Corpus | Accuracy |
|:-----------:|:--------:|
| Matched     | 0.8456   |
| Mismatched  | 0.8484   |
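The card documents training and scores but not inference. Below is a minimal sketch of scoring a premise–hypothesis pair with this checkpoint; the logit-to-label mapping is not stated on the card, so it is read from `config.id2label` rather than hard-coded.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ishan/bert-base-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the sentence pair the same way BERT was fine-tuned: [CLS] premise [SEP] hypothesis [SEP]
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Report every class with its probability instead of assuming a fixed label order
for label_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], round(p, 3))
```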
{"language": "en", "tags": ["pytorch", "text-classification"], "datasets": ["MNLI"]}
ishan/bert-base-uncased-mnli
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "en", "dataset:MNLI", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #jax #bert #text-classification #en #dataset-MNLI #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased finetuned on MNLI
===================================

Model Details and Training Data
-------------------------------

We used the pretrained model from bert-base-uncased and finetuned it on MultiNLI dataset.

The training parameters were kept the same as Devlin et al., 2019 (learning rate = 2e-5, training epochs = 3, max\_sequence\_len = 128 and batch\_size = 32).

Evaluation Results
------------------

The evaluation results are mentioned in the table below.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #en #dataset-MNLI #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
# distilbert-base-uncased finetuned on MNLI

## Model Details and Training Data

We used the pretrained model from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) and finetuned it on the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.

The training parameters were kept the same as in [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).

## Evaluation Results

The evaluation results are reported in the table below.

| Test Corpus | Accuracy |
|:-----------:|:--------:|
| Matched     | 0.8223   |
| Mismatched  | 0.8216   |
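As a complementary sketch, the reported matched accuracy can be spot-checked on a handful of MultiNLI validation examples. The `multi_nli` dataset id and the alignment between its label names and this model's `config.id2label` are assumptions here; verify both before trusting the numbers.

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ishan/distilbert-base-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# A small slice keeps the example fast; the full matched split has ~9.8k pairs
data = load_dataset("multi_nli", split="validation_matched[:64]")

correct = 0
for ex in data:
    inputs = tokenizer(ex["premise"], ex["hypothesis"], return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        pred_id = model(**inputs).logits.argmax(dim=-1).item()
    # Compare via label *names* so a differing id order doesn't silently skew the score
    pred_name = model.config.id2label[pred_id].lower()
    gold_name = data.features["label"].int2str(ex["label"]).lower()
    correct += int(pred_name == gold_name)

print(f"accuracy on {len(data)} examples: {correct / len(data):.3f}")
```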
{"language": "en", "tags": ["pytorch", "text-classification"], "datasets": ["MNLI"]}
ishan/distilbert-base-uncased-mnli
null
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "dataset:MNLI", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #en #dataset-MNLI #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased finetuned on MNLI
=========================================

Model Details and Training Data
-------------------------------

We used the pretrained model from distilbert-base-uncased and finetuned it on MultiNLI dataset.

The training parameters were kept the same as Devlin et al., 2019 (learning rate = 2e-5, training epochs = 3, max\_sequence\_len = 128 and batch\_size = 32).

Evaluation Results
------------------

The evaluation results are mentioned in the table below.
[]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #en #dataset-MNLI #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
ishraaqparvez/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Hrry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Hrry Potter DialoGPT Model" ]
text-classification
transformers
Este es el primer modelo de prueba BETO_3D
{}
ismaelardo/BETO_3d
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
Este es el primer modelo de prueba BETO_3D
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# GPT2-Poems Generator, English

This model is part of the Poems+AI experiment. More info: https://poems-ai.github.io/art/

# Original Dataset

- https://www.kaggle.com/michaelarman/poemsdataset
- Marcos de la Fuente's poems
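The card does not show how to sample from the model. A minimal generation sketch using the standard transformers pipeline follows; the prompt and sampling settings are illustrative, not the author's.

```python
from transformers import pipeline, set_seed

poet = pipeline("text-generation", model="ismaelfaro/gpt2-poems.en")
set_seed(42)  # only for reproducible sampling in this sketch

# Prompt with an opening line and let the model continue the poem
outputs = poet(
    "The autumn wind carries",
    max_length=60,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
    num_return_sequences=2,
)
for out in outputs:
    print(out["generated_text"], "\n---")
```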
{"language": "en", "license": "mit", "tags": ["GPT"]}
ismaelfaro/gpt2-poems.en
null
[ "transformers", "pytorch", "gpt2", "text-generation", "GPT", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt2 #text-generation #GPT #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# GPT2-Poems Generator, English

This model is part of the Poems+AI experiment. More info: URL

# Original Dataset

- URL
- Marcos de la Fuente's poems
[ "# GTP2-Poems Generator, English \n\nThis model is part of the Poems+AI experiment\n\nmore info URL", "# Original Dataset\n\n - URL\n - Marcos de la Fuente's poems" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #GPT #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# GTP2-Poems Generator, English \n\nThis model is part of the Poems+AI experiment\n\nmore info URL", "# Original Dataset\n\n - URL\n - Marcos de la Fuente's poems" ]
text-generation
transformers
# GPT2-Poems Spanish

This model is part of the Poems+AI experiment. More info: https://poems-ai.github.io/art/

# Original Dataset

- https://www.kaggle.com/andreamorgar/spanish-poetry-dataset
- Marcos de la Fuente's poems
{"language": "es", "license": "mit", "tags": ["GPT"]}
ismaelfaro/gpt2-poems.es
null
[ "transformers", "pytorch", "gpt2", "text-generation", "GPT", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "es" ]
TAGS #transformers #pytorch #gpt2 #text-generation #GPT #es #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT2-Poems Spanish

This model is part of the Poems+AI experiment. More info: URL

# Original Dataset

- URL
- Marcos de la Fuente's poems
[ "# GTP2-Poems Spanish\n\nThis model is part of the Poems+AI experiment\n\nmore info URL", "# Original Dataset\n\n- URL\n- Marcos de la Fuente's poems" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #GPT #es #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GTP2-Poems Spanish\n\nThis model is part of the Poems+AI experiment\n\nmore info URL", "# Original Dataset\n\n- URL\n- Marcos de la Fuente's poems" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-cats-vs-dogs This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset. It achieves the following results on the evaluation set: - Loss: 0.0182 - Accuracy: 0.9937 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1177 | 1.0 | 622 | 0.0473 | 0.9832 | | 0.057 | 2.0 | 1244 | 0.0362 | 0.9883 | | 0.0449 | 3.0 | 1866 | 0.0261 | 0.9886 | | 0.066 | 4.0 | 2488 | 0.0248 | 0.9923 | | 0.0328 | 5.0 | 3110 | 0.0182 | 0.9937 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.8.1+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "datasets": ["cats_vs_dogs"], "metrics": ["accuracy"], "model-index": [{"name": "vit-base-cats-vs-dogs", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "cats_vs_dogs", "type": "cats_vs_dogs", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9937357630979499, "name": "Accuracy"}]}]}]}
ismgar01/vit-base-cats-vs-dogs
null
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:cats_vs_dogs", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #vit #image-classification #generated_from_trainer #dataset-cats_vs_dogs #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
vit-base-cats-vs-dogs ===================== This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the cats\_vs\_dogs dataset. It achieves the following results on the evaluation set: * Loss: 0.0182 * Accuracy: 0.9937 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 1337 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.13.0.dev0 * Pytorch 1.8.1+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.8.1+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #vit #image-classification #generated_from_trainer #dataset-cats_vs_dogs #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.8.1+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
# IT5 Base for Formal-to-informal Style Transfer 🤗

This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

f2i = pipeline("text2text-generation", model='it5/it5-base-formal-to-informal')
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-formal-to-informal")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
{"language": ["it"], "license": "apache-2.0", "tags": ["italian", "sequence-to-sequence", "style-transfer", "formality-style-transfer"], "datasets": ["yahoo/xformal_it"], "metrics": ["rouge", "bertscore"], "widget": [{"text": "Questa performance \u00e8 a dir poco spiacevole."}, {"text": "In attesa di un Suo cortese riscontro, Le auguriamo un piacevole proseguimento di giornata."}, {"text": "Questa visione mi procura una goduria indescrivibile."}, {"text": "qualora ci\u00f2 possa interessarti, ti pregherei di contattarmi."}], "co2_eq_emissions": {"emissions": "17g", "source": "Google Cloud Platform Carbon Footprint", "training_type": "fine-tuning", "geographical_location": "Eemshaven, Netherlands, Europe", "hardware_used": "1 TPU v3-8 VM"}, "model-index": [{"name": "it5-base-formal-to-informal", "results": [{"task": {"type": "formality-style-transfer", "name": "Formal-to-informal Style Transfer"}, "dataset": {"name": "XFORMAL (Italian Subset)", "type": "xformal_it"}, "metrics": [{"type": "rouge1", "value": 0.652, "name": "Avg. Test Rouge1"}, {"type": "rouge2", "value": 0.446, "name": "Avg. Test Rouge2"}, {"type": "rougeL", "value": 0.632, "name": "Avg. Test RougeL"}, {"type": "bertscore", "value": 0.665, "name": "Avg. Test BERTScore", "args": [{"model_type": "dbmdz/bert-base-italian-xxl-uncased"}, {"lang": "it"}, {"num_layers": 10}, {"rescale_with_baseline": true}, {"baseline_path": "bertscore_baseline_ita.tsv"}]}]}]}]}
it5/it5-base-formal-to-informal
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "style-transfer", "formality-style-transfer", "it", "dataset:yahoo/xformal_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2203.03759" ]
[ "it" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #style-transfer #formality-style-transfer #it #dataset-yahoo/xformal_it #arxiv-2203.03759 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# IT5 Base for Formal-to-informal Style Transfer This repository contains the checkpoint for the IT5 Base model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. A comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: or loaded using autoclasses: If you use this model in your research, please cite our work as:
[ "# IT5 Base for Formal-to-informal Style Transfer \n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #style-transfer #formality-style-transfer #it #dataset-yahoo/xformal_it #arxiv-2203.03759 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# IT5 Base for Formal-to-informal Style Transfer \n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
text2text-generation
transformers
# IT5 Base for News Headline Generation 🗞️ 🇮🇹 This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipelines hg = pipeline("text2text-generation", model='it5/it5-base-headline-generation') hg("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".") >>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-headline-generation") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-headline-generation") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
{"language": ["it"], "license": "apache-2.0", "tags": ["italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "headline-generation"], "datasets": ["gsarti/change_it"], "metrics": ["rouge", "bertscore"], "widget": [{"text": "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sar\u00e0 formalizzata oggi dal dipartimento di stato e sar\u00e0 accompagnata da nuove e pi\u00f9 severe sanzioni. 'Il livello pi\u00f9 alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilit\u00e0 dell'attuale crisi sull'amministrazione Obama. Poi si \u00e8 scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento \u00e8 all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord \u00e8 gi\u00e0 pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson \u00e8 solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perch\u00e9 gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo \u00e8 un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servir\u00e0 a incrementare la pressione sulla Corea del Nord. 
Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che \u00e8 vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."}, {"text": "ROMA - Una nuova droga killer \u00e8 stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto pi\u00f9 economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle pu\u00f2 provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto pi\u00f9 devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una citt\u00e0 del centro Italia: \u00e8 stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina \u00e8 quasi 'acqua fresca', anzi, proprio per la sua economicit\u00e0, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attivit\u00e0 investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficolt\u00e0 di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicit\u00e0 \u00e8 molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verr\u00e0 ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."}, {"text": "Fragile come il burro. Il nostro territorio \u00e8 precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all\u201982% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta \u00e8 stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l\u2019area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all\u2019anno a otto regioni. 
Nella classifica delle regioni a maggior rischio idrogeologico prima \u00e8 la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l\u2019Umbria, la Valle d\u2019Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c\u2019\u00e8 l\u2019azione dell\u2019uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."}, {"text": "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverr\u00e0 nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si \u00e8 trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perch\u00e9 dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi \u00e8 stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perch\u00e9 rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , Mar\u00eda Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."}], "co2_eq_emissions": {"emissions": "17g", "source": "Google Cloud Platform Carbon Footprint", "training_type": "fine-tuning", "geographical_location": "Eemshaven, Netherlands, Europe", "hardware_used": "1 TPU v3-8 VM"}, "model-index": [{"name": "it5-base-headline-generation", "results": [{"task": {"type": "headline-generation", "name": "Headline generation"}, "dataset": {"name": "HeadGen-IT", "type": "headgen_it"}, "metrics": [{"type": "rouge1", "value": 0.31, "name": "Test Rouge1"}, {"type": "rouge2", "value": 0.112, "name": "Test Rouge2"}, {"type": "rougeL", "value": 0.27, "name": "Test RougeL"}, {"type": "bertscore", "value": 0.433, "name": "Test BERTScore", "args": [{"model_type": "dbmdz/bert-base-italian-xxl-uncased"}, {"lang": "it"}, {"num_layers": 10}, {"rescale_with_baseline": true}, {"baseline_path": "bertscore_baseline_ita.tsv"}]}]}]}]}
it5/it5-base-headline-generation
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "headline-generation", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2203.03759" ]
[ "it" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #newspaper #ilgiornale #repubblica #headline-generation #it #dataset-gsarti/change_it #arxiv-2203.03759 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# IT5 Base for News Headline Generation ️ 🇮🇹 This repository contains the checkpoint for the IT5 Base model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. A comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: or loaded using autoclasses: If you use this model in your research, please cite our work as:
[ "# IT5 Base for News Headline Generation ️ 🇮🇹\n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #newspaper #ilgiornale #repubblica #headline-generation #it #dataset-gsarti/change_it #arxiv-2203.03759 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# IT5 Base for News Headline Generation ️ 🇮🇹\n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
text2text-generation
transformers
# IT5 Base for News Headline Style Transfer (Il Giornale to Repubblica) 🗞️➡️🗞️ 🇮🇹

This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

g2r = pipeline("text2text-generation", model='it5/it5-base-ilgiornale-to-repubblica')
g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-ilgiornale-to-repubblica")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-ilgiornale-to-repubblica")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
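As a rough sketch, the same transfer can also be run without the pipeline wrapper; the preprocessing and decoding settings below are assumptions for illustration, not values reported by the authors:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "it5/it5-base-ilgiornale-to-repubblica"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Input: full body of an Il Giornale-style article; output: a Repubblica-style headline
article = "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati..."  # shortened here for brevity
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```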
{"language": ["it"], "license": "apache-2.0", "tags": ["italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "style-transfer"], "datasets": ["gsarti/change_it"], "metrics": ["rouge", "bertscore", "headline-headline-consistency-classifier", "headline-article-consistency-classifier"], "widget": [{"text": "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sar\u00e0 formalizzata oggi dal dipartimento di stato e sar\u00e0 accompagnata da nuove e pi\u00f9 severe sanzioni. 'Il livello pi\u00f9 alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilit\u00e0 dell'attuale crisi sull'amministrazione Obama. Poi si \u00e8 scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento \u00e8 all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord \u00e8 gi\u00e0 pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson \u00e8 solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perch\u00e9 gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo \u00e8 un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servir\u00e0 a incrementare la pressione sulla Corea del Nord. 
Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che \u00e8 vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."}, {"text": "ROMA - Una nuova droga killer \u00e8 stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto pi\u00f9 economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle pu\u00f2 provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto pi\u00f9 devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una citt\u00e0 del centro Italia: \u00e8 stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina \u00e8 quasi 'acqua fresca', anzi, proprio per la sua economicit\u00e0, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attivit\u00e0 investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficolt\u00e0 di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicit\u00e0 \u00e8 molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verr\u00e0 ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."}, {"text": "Fragile come il burro. Il nostro territorio \u00e8 precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all\u201982% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta \u00e8 stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l\u2019area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all\u2019anno a otto regioni. 
Nella classifica delle regioni a maggior rischio idrogeologico prima \u00e8 la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l\u2019Umbria, la Valle d\u2019Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c\u2019\u00e8 l\u2019azione dell\u2019uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."}, {"text": "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverr\u00e0 nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si \u00e8 trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perch\u00e9 dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi \u00e8 stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perch\u00e9 rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , Mar\u00eda Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."}], "co2_eq_emissions": {"emissions": "17g", "source": "Google Cloud Platform Carbon Footprint", "training_type": "fine-tuning", "geographical_location": "Eemshaven, Netherlands, Europe", "hardware_used": "1 TPU v3-8 VM"}, "thumbnail": "https://gsarti.com/publication/it5/featured.png", "model-index": [{"name": "it5-base-ilgiornale-to-repubblica", "results": [{"task": {"type": "headline-style-transfer-ilgiornale-to-repubblica", "name": "Headline style transfer (Il Giornale to Repubblica)"}, "dataset": {"name": "CHANGE-IT", "type": "gsarti/change_it"}, "metrics": [{"type": "rouge1", "value": 0.297, "name": "Test Rouge1"}, {"type": "rouge2", "value": 0.104, "name": "Test Rouge2"}, {"type": "rougeL", "value": 0.259, "name": "Test RougeL"}, {"type": "bertscore", "value": 0.425, "name": "Test BERTScore", "args": [{"model_type": "dbmdz/bert-base-italian-xxl-uncased"}, {"lang": "it"}, {"num_layers": 10}, {"rescale_with_baseline": true}, {"baseline_path": "bertscore_baseline_ita.tsv"}]}, {"type": "headline-headline-consistency-classifier", "value": 0.925, "name": "Test Headline-Headline Consistency Accuracy"}, {"type": "headline-article-consistency-classifier", "value": 0.852, "name": "Test Headline-Article Consistency Accuracy"}]}]}]}
it5/it5-base-ilgiornale-to-repubblica
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "style-transfer", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2203.03759" ]
[ "it" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #newspaper #ilgiornale #repubblica #style-transfer #it #dataset-gsarti/change_it #arxiv-2203.03759 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# IT5 Base for News Headline Style Transfer (Il Giornale to Repubblica) 🇮🇹 This repository contains the checkpoint for the IT5 Base model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. A comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: or loaded using autoclasses: If you use this model in your research, please cite our work as:
[ "# IT5 Base for News Headline Style Transfer (Il Giornale to Repubblica) ️️️ 🇮🇹\n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nThe model is trained to generate an headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #newspaper #ilgiornale #repubblica #style-transfer #it #dataset-gsarti/change_it #arxiv-2203.03759 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# IT5 Base for News Headline Style Transfer (Il Giornale to Repubblica) ️️️ 🇮🇹\n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nThe model is trained to generate an headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
text2text-generation
transformers
# IT5 Base for Informal-to-formal Style Transfer 🧐

This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

i2f = pipeline("text2text-generation", model='it5/it5-base-informal-to-formal')
i2f("nn capisco xke tt i ragazzi lo fanno")
>>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-informal-to-formal")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
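A minimal autoclass-based sketch for rewriting a single informal sentence; the decoding settings are kept deliberately simple and are illustrative assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-informal-to-formal")

informal = "nn capisco xke tt i ragazzi lo fanno"
inputs = tokenizer(informal, return_tensors="pt")

# Greedy decoding is usually sufficient for short single sentences (an assumption, not a benchmarked claim)
output_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```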
{"language": ["it"], "license": "apache-2.0", "tags": ["italian", "sequence-to-sequence", "style-transfer", "formality-style-transfer"], "datasets": ["yahoo/xformal_it"], "metrics": ["rouge", "bertscore"], "widget": [{"text": "maronn qualcuno mi spieg' CHECCOSA SUCCEDE?!?!"}, {"text": "wellaaaaaaa, ma frat\u00e9 sei proprio troppo simpatiko, grazieeee!!"}, {"text": "nn capisco xke tt i ragazzi lo fanno"}, {"text": "IT5 \u00e8 SUPERMEGA BRAVISSIMO a capire tt il vernacolo italiano!!!"}], "co2_eq_emissions": {"emissions": "17g", "source": "Google Cloud Platform Carbon Footprint", "training_type": "fine-tuning", "geographical_location": "Eemshaven, Netherlands, Europe", "hardware_used": "1 TPU v3-8 VM"}, "model-index": [{"name": "it5-base-informal-to-formal", "results": [{"task": {"type": "formality-style-transfer", "name": "Informal-to-formal Style Transfer"}, "dataset": {"name": "XFORMAL (Italian Subset)", "type": "xformal_it"}, "metrics": [{"type": "rouge1", "value": 0.583, "name": "Avg. Test Rouge1"}, {"type": "rouge2", "value": 0.403, "name": "Avg. Test Rouge2"}, {"type": "rougeL", "value": 0.561, "name": "Avg. Test RougeL"}, {"type": "bertscore", "value": 0.641, "name": "Avg. Test BERTScore", "args": [{"model_type": "dbmdz/bert-base-italian-xxl-uncased"}, {"lang": "it"}, {"num_layers": 10}, {"rescale_with_baseline": true}, {"baseline_path": "bertscore_baseline_ita.tsv"}]}]}]}]}
it5/it5-base-informal-to-formal
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "style-transfer", "formality-style-transfer", "it", "dataset:yahoo/xformal_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2203.03759" ]
[ "it" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #style-transfer #formality-style-transfer #it #dataset-yahoo/xformal_it #arxiv-2203.03759 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# IT5 Base for Informal-to-formal Style Transfer This repository contains the checkpoint for the IT5 Base model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. A comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: or loaded using autoclasses: If you use this model in your research, please cite our work as:
[ "# IT5 Base for Informal-to-formal Style Transfer \n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #style-transfer #formality-style-transfer #it #dataset-yahoo/xformal_it #arxiv-2203.03759 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# IT5 Base for Informal-to-formal Style Transfer \n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
summarization
transformers
# IT5 Base for News Summarization ✂️🗞️ 🇮🇹

This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

newsum = pipeline("summarization", model='it5/it5-base-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-news-summarization")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
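For reference, a sketch of the same summarization step via the autoclasses; the input truncation and generation lengths are assumptions chosen for illustration only:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-news-summarization")

document = "Dal 31 maggio è infine partita la piattaforma ITsART..."  # full article text
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=512)

# Summaries are longer than headlines, so a larger max_length is allowed (illustrative values)
with torch.no_grad():
    output_ids = model.generate(**inputs, num_beams=4, max_length=128, no_repeat_ngram_size=3)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```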
{"language": ["it"], "license": "apache-2.0", "tags": ["italian", "sequence-to-sequence", "fanpage", "ilpost", "summarization"], "datasets": ["ARTeLab/fanpage", "ARTeLab/ilpost"], "metrics": ["rouge"], "widget": [{"text": "Non lo vuole sposare. E\u2019 quanto emerge all\u2019interno dell\u2019ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l\u2019idea del matrimonio per qualche anno ancora. La soubrette, che \u00e8 stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perch\u00e9 \u00e8 sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi pi\u00f9 desiderati al mondo, l\u2019ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l\u2019ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, per\u00f2, si \u00e8 sbagliato. A mettere le cose bene in chiaro \u00e8 la Fico che, intervistata dall\u2019emittente radiofonica Rtl 102.5, dice: \u00c8 presto per sposarsi, siamo ancora molto giovani. \u00c8 giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perch\u00e9 no, ci si pu\u00f2 anche pensare. Quando si \u00e8 giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di pi\u00f9 di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si \u00e8 giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: pi\u00f9 per la sua vita privata che come giocatore. Per me pu\u00f2 anche andare in uno strip club, se non fa niente di male, con gli amici, per\u00f2 devo dire che alla fine torna sempre da me, sono la sua preferita."}, {"text": "Valerio \u00e8 giovanissimo ma gi\u00e0 una star. Fuori dall\u2019Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui \u00e8 forte del suo talento e sicuro. Partecipa in gara tra i \u201cbig\u201d di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu \u00e8 stato eliminato. Ma non \u00e8 detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa \u00e8 successo alla giuria visto che sei stato eliminato anche se l\u2019esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sar\u00e0 il ballottaggio\u2026 Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare\u2026 ho fatto pi\u00f9 di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L\u2019umilt\u00e0? Prima di tutto. 
Senn\u00f2 non sarei qui."}, {"text": "L\u2019azienda statunitense Broadcom, uno dei pi\u00f9 grandi produttori di semiconduttori al mondo, ha presentato un\u2019offerta per acquisire Qualcomm, altra grande societ\u00e0 degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l\u2019operazione dovesse essere approvata, sarebbe una delle pi\u00f9 grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe gi\u00e0 preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all\u2019acquisizione perch\u00e9 il prezzo offerto \u00e8 di poco superiore a quello dell\u2019attuale valore delle azioni dell\u2019azienda. Ci potrebbero essere inoltre complicazioni sul piano dell\u2019antitrust da valutare, prima di un\u2019eventuale acquisizione."}, {"text": "Dal 31 maggio \u00e8 infine partita la piattaforma ITsART, a pi\u00f9 di un anno da quando \u2013 durante il primo lockdown \u2013 il ministro della Cultura Dario Franceschini ne aveva parlato come di \u00abuna sorta di Netflix della cultura\u00bb, pensata per \u00aboffrire a tutto il mondo la cultura italiana a pagamento\u00bb. \u00c8 presto per dare giudizi definitivi sulla piattaforma, e di certo sar\u00e0 difficile farlo anche pi\u00f9 avanti senza numeri precisi. Al momento, l\u2019unica cosa che si pu\u00f2 fare \u00e8 guardare com\u2019\u00e8 fatto il sito, contare quanti contenuti ci sono (circa 700 \u201ctitoli\u201d, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro variet\u00e0. Intanto, una cosa notata da pi\u00f9 parti \u00e8 che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."}], "co2_eq_emissions": {"emissions": 17, "source": "Google Cloud Platform Carbon Footprint", "training_type": "fine-tuning", "geographical_location": "Eemshaven, Netherlands, Europe", "hardware_used": "1 TPU v3-8 VM"}, "thumbnail": "https://gsarti.com/publication/it5/featured.png", "model-index": [{"name": "it5-base-news-summarization", "results": [{"task": {"type": "news-summarization", "name": "News Summarization"}, "dataset": {"name": "NewsSum-IT", "type": "newssum-it"}, "metrics": [{"type": "rouge1", "value": 0.339, "name": "Test Rouge1"}, {"type": "rouge2", "value": 0.16, "name": "Test Rouge2"}, {"type": "rougeL", "value": 0.263, "name": "Test RougeL"}]}]}]}
it5/it5-base-news-summarization
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "fanpage", "ilpost", "summarization", "it", "dataset:ARTeLab/fanpage", "dataset:ARTeLab/ilpost", "arxiv:2203.03759", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2203.03759" ]
[ "it" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #fanpage #ilpost #summarization #it #dataset-ARTeLab/fanpage #dataset-ARTeLab/ilpost #arxiv-2203.03759 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# IT5 Base for News Summarization ️️ 🇮🇹 This repository contains the checkpoint for the IT5 Base model fine-tuned on news summarization on the Fanpage and Il Post corpora as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. A comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: or loaded using autoclasses: If you use this model in your research, please cite our work as:
[ "# IT5 Base for News Summarization ️️ 🇮🇹\n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on news summarization on the Fanpage and Il Post corpora as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #fanpage #ilpost #summarization #it #dataset-ARTeLab/fanpage #dataset-ARTeLab/ilpost #arxiv-2203.03759 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# IT5 Base for News Summarization ️️ 🇮🇹\n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on news summarization on the Fanpage and Il Post corpora as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
text2text-generation
transformers
# IT5 Base for Question Answering ⁉️ 🇮🇹

This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

qa = pipeline("text2text-generation", model='it5/it5-base-question-answering')
qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?")
>>> [{"generated_text": "ultimo massimo glaciale"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-question-answering")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
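A minimal sketch of the same question answering call through the autoclasses. The input concatenates the passage and the question with the "Domanda:" marker, mirroring the pipeline example above; the truncation limit and decoding settings are illustrative assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-question-answering")

context = "In seguito all' evento di estinzione del Cretaceo-Paleogene, ..."  # full passage text
question = "La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?"

# The answer is a short span copied from the context, so a small max_length is used (assumed value)
inputs = tokenizer(f"{context} Domanda: {question}", return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```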
{"language": ["it"], "license": "apache-2.0", "tags": ["italian", "sequence-to-sequence", "squad_it", "text2text-question-answering", "text2text-generation"], "datasets": ["squad_it"], "metrics": ["f1", "exact-match"], "widget": [{"text": "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45\u00b0. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale \u00e8 riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia variet\u00e0 di specie. Domanda: La foresta pluviale amazzonica \u00e8 diventata per lo pi\u00f9 una foresta interna intorno a quale evento globale?"}, {"text": "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunit\u00e0 Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poich\u00e8 si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribalt\u00f2 questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?"}, {"text": "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma pi\u00f9 conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet pi\u00f9 simile. La semplicit\u00e0 del logo ha reso pi\u00f9 facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?"}, {"text": "La fotorespirazione pu\u00f2 verificarsi quando la concentrazione di ossigeno \u00e8 troppo elevata. Rubisco non \u00e8 in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi pu\u00f2 accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Pu\u00f2 sprecare fino alla met\u00e0 del carbonio fissato dal ciclo di Calvin. 
Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa pu\u00f2 fare rubisco per errore?"}], "co2_eq_emissions": {"emissions": "17g", "source": "Google Cloud Platform Carbon Footprint", "training_type": "fine-tuning", "geographical_location": "Eemshaven, Netherlands, Europe", "hardware_used": "1 TPU v3-8 VM"}, "thumbnail": "https://gsarti.com/publication/it5/featured.png", "model-index": [{"name": "it5-base-question-answering", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "SQuAD-IT", "type": "squad_it"}, "metrics": [{"type": "f1", "value": 0.761, "name": "Test F1"}, {"type": "exact-match", "value": 0.663, "name": "Test Exact Match"}]}]}]}
it5/it5-base-question-answering
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "squad_it", "text2text-question-answering", "it", "dataset:squad_it", "arxiv:2203.03759", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2203.03759" ]
[ "it" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #squad_it #text2text-question-answering #it #dataset-squad_it #arxiv-2203.03759 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# IT5 Base for Question Answering ⁉️ 🇮🇹 This repository contains the checkpoint for the IT5 Base model fine-tuned on extractive question answering on the SQuAD-IT corpus as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. A comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: or loaded using autoclasses: If you use this model in your research, please cite our work as:
[ "# IT5 Base for Question Answering ⁉️ 🇮🇹\n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on extractive question answering on the SQuAD-IT corpus as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #t5 #text2text-generation #italian #sequence-to-sequence #squad_it #text2text-question-answering #it #dataset-squad_it #arxiv-2203.03759 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# IT5 Base for Question Answering ⁉️ 🇮🇹\n\nThis repository contains the checkpoint for the IT5 Base model fine-tuned on extractive question answering on the SQuAD-IT corpus as part of the experiments of the paper IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation by Gabriele Sarti and Malvina Nissim. \n\nA comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.", "## Using the model\n\nModel checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:\n\n\n\nor loaded using autoclasses:\n\n\n\nIf you use this model in your research, please cite our work as:" ]