Dataset schema (each record below lists these fields in order, separated by `|`):

| Column | Type | Range / Values |
| --- | --- | --- |
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-01 06:29:04 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 530 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-01 06:28:51 |
| card | string | length 11 – 1.01M |
vionwinnie/t5-reddit
|
vionwinnie
| 2021-07-07T08:15:48Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
This is a T5-small model fine-tuned on Reddit data.
It has two subtasks (a usage sketch follows the list):
1. title generation
2. tag classification
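A minimal text2text-generation sketch. The `summarize:` prefix is an assumption for illustration; the actual task prefixes used during fine-tuning are not documented in this card.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("vionwinnie/t5-reddit")
model = T5ForConditionalGeneration.from_pretrained("vionwinnie/t5-reddit")

# Hypothetical task prefix for the title-generation subtask.
inputs = tokenizer("summarize: <reddit post body>", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```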
|
minsik-oh/dummy-model
|
minsik-oh
| 2021-07-07T05:58:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
# Dummy Model
This is a dummy model.
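Even for a dummy checkpoint, the advertised fill-mask pipeline can be exercised as follows (a sketch; the French example sentence is illustrative, and `<mask>` is the CamemBERT-style mask token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="minsik-oh/dummy-model")
# CamemBERT-style models use "<mask>" as the mask token.
print(unmasker("Le camembert est <mask>."))
```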
|
liam168/c4-zh-distilbert-base-uncased
|
liam168
| 2021-07-07T03:21:34Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"exbert",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: zh
tags:
- exbert
license: apache-2.0
widget:
- text: "女人做得越纯粹,皮肤和身材就越好"
- text: "我喜欢篮球"
---
# liam168/c4-zh-distilbert-base-uncased
## Model description
A text-classification model trained on four categories of data: "Female" (女性), "Sports" (体育), "Literature" (文学), and "Campus" (校园).
## Overview
- **Language model**: DistilBERT
- **Model size**: 280M
- **Language**: Chinese
## Example
```python
>>> from transformers import DistilBertForSequenceClassification, AutoTokenizer, pipeline
>>> model_name = "liam168/c4-zh-distilbert-base-uncased"
>>> class_num = 4
>>> ts_texts = ["女人做得越纯粹,皮肤和身材就越好", "我喜欢篮球"]
>>> model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=class_num)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> classifier(ts_texts[0])
[{'label': 'Female', 'score': 0.9137857556343079}]
>>> classifier(ts_texts[1])
[{'label': 'Sports', 'score': 0.8206522464752197}]
```
|
liam168/gen-gpt2-medium-chinese
|
liam168
| 2021-07-07T02:26:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"zh",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: zh
widget:
- text: "晓日千红"
- text: "长街躞蹀"
---
# gen-gpt2-medium-chinese
## Overview
- **Language model**: GPT2-Medium
- **Model size**: 68M
- **Language**: Chinese
## Example
```python
from transformers import TFGPT2LMHeadModel, AutoTokenizer
from transformers import TextGenerationPipeline

model_name = 'liam168/gen-gpt2-medium-chinese'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFGPT2LMHeadModel.from_pretrained(model_name)
text_generator = TextGenerationPipeline(model, tokenizer)
print(text_generator("晓日千红", max_length=64, do_sample=True))
print(text_generator("加餐小语", max_length=50, do_sample=False))
```
Output:
```text
[{'generated_text': '晓日千红 独 远 客 。 孤 夜 云 云 梦 到 冷 。 著 剩 笑 、 人 远 。 灯 啼 鸦 最 回 吟 。 望 , 枕 付 孤 灯 、 客 。 对 梅 残 照 偏 相 思 , 玉 弦 语 。 翠 台 新 妆 、 沉 、 登 临 水 。 空'}]
[{'generated_text': '加餐小语 有 有 骨 , 有 人 诗 成 自 远 诗 。 死 了 自 喜 乐 , 独 撑 天 下 诗 事 小 诗 柴 。 桃 花 谁 知 何 处 何 处 高 吟 诗 从 今 死 火 , 此 事'}]
```
|
junnyu/wobert_chinese_plus_base
|
junnyu
| 2021-07-07T01:18:40Z | 21,751 | 5 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"wobert",
"zh",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: zh
tags:
- wobert
inference: False
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/WoBERT
### PyTorch version
https://github.com/JunnYu/WoBERT_pytorch
## Installation (mainly to install WoBertTokenizer)
```bash
pip install git+https://github.com/JunnYu/WoBERT_pytorch.git
```
## Usage
```python
import torch
from transformers import BertForMaskedLM as WoBertForMaskedLM
from wobert import WoBertTokenizer
pretrained_model_or_path_list = [
    "junnyu/wobert_chinese_plus_base", "junnyu/wobert_chinese_base"
]
for path in pretrained_model_or_path_list:
    text = "今天[MASK]很好,我[MASK]去公园玩。"
    tokenizer = WoBertTokenizer.from_pretrained(path)
    model = WoBertForMaskedLM.from_pretrained(path)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs).logits[0]
    outputs_sentence = ""
    for i, id in enumerate(tokenizer.encode(text)):
        if id == tokenizer.mask_token_id:
            tokens = tokenizer.convert_ids_to_tokens(outputs[i].topk(k=5)[1])
            outputs_sentence += "[" + "||".join(tokens) + "]"
        else:
            outputs_sentence += "".join(
                tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
    print(outputs_sentence)
# RoFormer 今天[天气||天||心情||阳光||空气]很好,我[想||要||打算||准备||喜欢]去公园玩。
# PLUS WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||打算||准备||就]去公园玩。
# WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||就||准备||也]去公园玩。
```
## Citation
BibTeX:
```tex
@techreport{zhuiyiwobert,
title={WoBERT: Word-based Chinese BERT model - ZhuiyiAI},
author={Jianlin Su},
year={2020},
url="https://github.com/ZhuiyiTechnology/WoBERT",
}
```
|
huggingtweets/alice333ai-alicecweam
|
huggingtweets
| 2021-07-06T22:53:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/alice333ai-alicecweam/1625611976936/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410515009252302852/sah1ksNb_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1393311358293356546/tXc-X9fx_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alice Cream 🐇🍓 Vtuber & 👁️⃤ lison</div>
<div style="text-align: center; font-size: 14px;">@alice333ai-alicecweam</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alice Cream 🐇🍓 Vtuber & 👁️⃤ lison.
| Data | Alice Cream 🐇🍓 Vtuber | 👁️⃤ lison |
| --- | --- | --- |
| Tweets downloaded | 3244 | 3216 |
| Retweets | 359 | 1062 |
| Short tweets | 463 | 200 |
| Tweets kept | 2422 | 1954 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4cfpc23c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alice333ai-alicecweam's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2r62epp4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2r62epp4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alice333ai-alicecweam')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
victor/autonlp-imdb-reviews-sentiment-329982
|
victor
| 2021-07-06T19:26:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:victor/autonlp-data-imdb-reviews-sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- victor/autonlp-data-imdb-reviews-sentiment
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 329982
## Validation Metrics
- Loss: 0.24620144069194794
- Accuracy: 0.9300053431035799
- Precision: 0.9299029425358188
- Recall: 0.9289012003693444
- AUC: 0.9795001637755057
- F1: 0.9294018015243667
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/victor/autonlp-imdb-reviews-sentiment-329982
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("victor/autonlp-imdb-reviews-sentiment-329982", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("victor/autonlp-imdb-reviews-sentiment-329982", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
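To turn the raw outputs into a predicted label, a short follow-up sketch to the snippet above (the label names come from the model's `config.id2label` mapping, which is not shown in this card):
```python
import torch

# Highest-scoring class index, mapped to its label name.
pred_id = torch.argmax(outputs.logits, dim=-1).item()
print(model.config.id2label[pred_id])
```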
|
accelotron/rugpt3-ficbook-bts
|
accelotron
| 2021-07-06T18:08:59Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
ruGPT-3 fine-tuned on Russian fan fiction about Bangtan Boys (BTS).
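A minimal generation sketch (the Russian prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="accelotron/rugpt3-ficbook-bts")
# Illustrative Russian seed text; any prompt works.
print(generator("Чонгук улыбнулся", max_length=50, do_sample=True))
```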
|
huggingtweets/kpnsecurity
|
huggingtweets
| 2021-07-06T14:20:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/kpnsecurity/1625581202096/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1399739976724668425/sU9HGxX7_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">KPN Security</div>
<div style="text-align: center; font-size: 14px;">@kpnsecurity</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from KPN Security.
| Data | KPN Security |
| --- | --- |
| Tweets downloaded | 507 |
| Retweets | 109 |
| Short tweets | 9 |
| Tweets kept | 389 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34p5iycs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kpnsecurity's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1r2x39u7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1r2x39u7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kpnsecurity')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mrshu/wav2vec2-large-xlsr-slovene
|
mrshu
| 2021-07-06T13:25:51Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sl",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: sl
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Slovene
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sl
type: common_voice
args: sl
metrics:
- name: Test WER
type: wer
value: 36.97
---
# Wav2Vec2-Large-XLSR-53-Slovene
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Slovene using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sl", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene")
model = Wav2Vec2ForCTC.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Slovene test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sl", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene")
model = Wav2Vec2ForCTC.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\«\»\)\(\„\'\–\’\—]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.97 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/14uahdilysnFsiYniHxY9fyKjFGuYQe7p)
|
mrm8488/wav2vec2-large-xlsr-53-ukrainian
|
mrm8488
| 2021-07-06T13:20:13Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"uk",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: uk
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Ukrainian Manuel Romero
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice uk
type: common_voice
args: uk
metrics:
- name: Test WER
type: wer
value: 41.82
---
# Wav2Vec2-Large-XLSR-53-ukrainian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "uk", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "uk", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 41.82 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found ???
|
mrm8488/wav2vec2-large-xlsr-53-spanish
|
mrm8488
| 2021-07-06T13:14:39Z | 20 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"es",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: es
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Spanish Manuel Romero
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice es
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: ???
---
# Wav2Vec2-Large-XLSR-53-Spanish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "es", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Spanish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "es", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found ???
|
mrm8488/wav2vec2-large-xlsr-53-esperanto
|
mrm8488
| 2021-07-06T13:02:46Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"eo",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: eo
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Esperanto Manuel Romero
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eo
type: common_voice
args: eo
metrics:
- name: Test WER
type: wer
value: 15.86
---
# Wav2Vec2-Large-XLSR-53-esperanto
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eo", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Esperanto test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eo", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 15.86 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found ???
|
mohammed/arabic-speech-recognition
|
mohammed
| 2021-07-06T12:47:52Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ar",
"dataset:common_voice",
"dataset:arabic_speech_corpus",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- common_voice
- arabic_speech_corpus
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Mohammed XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 36.69
- name: Validation WER
type: wer
value: 36.69
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on Arabic using the `train` splits of [Common Voice](https://huggingface.co/datasets/common_voice)
and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
%%capture
!pip install datasets
!pip install transformers==4.4.0
!pip install torchaudio
!pip install jiwer
!pip install tnkeeh
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mohammed/ar")
model = Wav2Vec2ForCTC.from_pretrained("mohammed/ar")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("The predicted sentence is: ", processor.batch_decode(predicted_ids))
print("The original sentence is:", test_dataset["sentence"][:2])
```
The output is:
```text
The predicted sentence is: ['ألديك قلم', 'ليست نارك مكسافة على هذه الأرض أبعد من يوم أمس']
The original sentence is: ['ألديك قلم ؟', 'ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.']
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# creating a dictionary with all diacritics (and stray punctuation) to remove
replacements = {
    'ِ': '', 'ُ': '', 'ٓ': '', 'ٰ': '', 'ْ': '', 'ٌ': '', 'ٍ': '', 'ً': '',
    'ّ': '', 'َ': '', '~': '', ',': '', 'ـ': '', '—': '', '.': '', '!': '',
    '-': '', ';': '', ':': '', '\'': '', '"': '', '☭': '', '«': '', '»': '',
    '؛': '', '_': '', '،': '', '“': '', '%': '', '‘': '', '”': '', '�': '',
    '?': '', '#': '', 'get': '', '؟': '', ' ': ' ', '\'ۖ ': '', '\'ۚ': '',
    ' \'': '', '31': '', '24': '', '39': ''
}
# replacing multiple diacritics using the dictionary (stackoverflow is amazing)
def remove_special_characters(batch):
    # Create a regular expression from the dictionary keys
    regex = re.compile("(%s)" % "|".join(map(re.escape, replacements.keys())))
    # For each match, look up the corresponding value in the dictionary
    batch["sentence"] = regex.sub(lambda mo: replacements[mo.string[mo.start():mo.end()]], batch["sentence"])
    return batch
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mohammed/ar")
model = Wav2Vec2ForCTC.from_pretrained("mohammed/ar")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
test_dataset = test_dataset.map(remove_special_characters)
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.69%
## Future Work
One can use *data augmentation* (sketched below), *transliteration*, or *attention_mask* to increase the accuracy.
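As one concrete illustration of the data-augmentation idea, a minimal speed-perturbation sketch (a common audio augmentation; the perturbation factors are illustrative, not the author's recipe):
```python
import torch
import torchaudio

def speed_perturb(waveform: torch.Tensor, sample_rate: int, factor: float) -> torch.Tensor:
    # Resample to sample_rate / factor; interpreting the result at the original
    # rate speeds the audio up by `factor` (factor < 1 slows it down).
    new_rate = int(sample_rate / factor)
    return torchaudio.transforms.Resample(sample_rate, new_rate)(waveform)

# e.g. two augmented copies of a 16 kHz training utterance:
# faster = speed_perturb(speech_array, 16_000, 1.1)
# slower = speed_perturb(speech_array, 16_000, 0.9)
```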
|
mbsouksu/wav2vec2-large-xlsr-turkish-large
|
mbsouksu
| 2021-07-06T12:43:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish by Mehmet Berk Souksu
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 29.80
---
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mbsouksu/wav2vec2-large-xlsr-turkish-large")
model = Wav2Vec2ForCTC.from_pretrained("mbsouksu/wav2vec2-large-xlsr-turkish-large")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mbsouksu/wav2vec2-large-xlsr-turkish-large")
model = Wav2Vec2ForCTC.from_pretrained("mbsouksu/wav2vec2-large-xlsr-turkish-large")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\`\…\\]\\[\\&\\’\»«]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.80 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/mbsouksu/wav2vec2-turkish)
|
mbien/fma2vec2popularity
|
mbien
| 2021-07-06T12:36:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Predicting music popularity using DNNs
This is a model fine-tuned for music popularity classification, created as part of the DH-401: Digital Musicology class at EPFL.
## Team
* Elisa (elisa.michelet@epfl.ch)
* Michał (michal.bien@epfl.ch)
* Noé (noe.durandard@epfl.ch)
## Milestone 3
The main notebook presenting our results is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone3.ipynb)
Notebook describing the details of Wav2Vec2.0 pre-training and fine-tuning for the task is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone3-wav2vec2.ipynb)
## Milestone 2
Exploratory data analysis notebook is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone2.ipynb)
## Milestone 1
Refined project proposal is available [here](https://github.com/Glorf/DH-401/blob/main/milestone0.md)
## Milestone 0
Original project proposal is available in git history [here](https://github.com/Glorf/DH-401/blob/bb14813ff2bbbd9cdc6b6eecf34c9e3c160598eb/milestone0.md)
|
marma/wav2vec2-large-xlsr-swedish
|
marma
| 2021-07-06T12:28:48Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sv",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: sv
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Swedish by Marma
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv
metrics:
- name: Test WER
type: wer
value: 23.33
---
# Wav2Vec2-Large-XLSR-53-Swedish
This model has moved [here](https://huggingface.co/KBLab/wav2vec2-large-xlsr-53-swedish)
|
marcel/wav2vec2-large-xlsr-german-demo
|
marcel
| 2021-07-06T12:09:00Z | 24 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"de",
"dataset:common_voice",
"dataset:wer",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: de
datasets:
- common_voice
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 29.35
---
# Wav2Vec2-Large-XLSR-53-German
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using 3% of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "de", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "de", split="test[:10%]")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\�\カ\æ\無\ན\カ\臣\ѹ\…\«\»\ð\ı\„\幺\א\ב\比\ш\ע\)\ứ\в\œ\ч\+\—\ш\‚\נ\м\ń\乡\$\=\ש\ф\支\(\°\и\к\̇]'
substitutions = {
'e' : '[\ə\é\ě\ę\ê\ế\ế\ë\ė\е]',
'o' : '[\ō\ô\ô\ó\ò\ø\ọ\ŏ\õ\ő\о]',
'a' : '[\á\ā\ā\ă\ã\å\â\à\ą\а]',
'c' : '[\č\ć\ç\с]',
'l' : '[\ł]',
'u' : '[\ú\ū\ứ\ů]',
'und' : '[\&]',
'r' : '[\ř]',
'y' : '[\ý]',
's' : '[\ś\š\ș\ş]',
'i' : '[\ī\ǐ\í\ï\î\ï]',
'z' : '[\ź\ž\ź\ż]',
'n' : '[\ñ\ń\ņ]',
'g' : '[\ğ]',
'ss' : '[\ß]',
't' : '[\ț\ť]',
'd' : '[\ď\đ]',
"'": '[\ʿ\་\’\`\´\ʻ\`\‘]',
'p': '\р'
}
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for x in substitutions:
batch["sentence"] = re.sub(substitutions[x], x, batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.35 %
## Training
The first 3% of the Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found TODO
|
marcel/wav2vec2-large-xlsr-53-german
|
marcel
| 2021-07-06T11:55:02Z | 22 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"de",
"dataset:common_voice",
"dataset:wer",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: de
datasets:
- common_voice
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 15.80
---
# Wav2Vec2-Large-XLSR-53-German
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "de", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "de", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\�\カ\æ\無\ན\カ\臣\ѹ\…\«\»\ð\ı\„\幺\א\ב\比\ш\ע\)\ứ\в\œ\ч\+\—\ш\‚\נ\м\ń\乡\$\=\ש\ф\支\(\°\и\к\̇]'
substitutions = {
'e' : '[\ə\é\ě\ę\ê\ế\ế\ë\ė\е]',
'o' : '[\ō\ô\ô\ó\ò\ø\ọ\ŏ\õ\ő\о]',
'a' : '[\á\ā\ā\ă\ã\å\â\à\ą\а]',
'c' : '[\č\ć\ç\с]',
'l' : '[\ł]',
'u' : '[\ú\ū\ứ\ů]',
'und' : '[\&]',
'r' : '[\ř]',
'y' : '[\ý]',
's' : '[\ś\š\ș\ş]',
'i' : '[\ī\ǐ\í\ï\î\ï]',
'z' : '[\ź\ž\ź\ż]',
'n' : '[\ñ\ń\ņ]',
'g' : '[\ğ]',
'ss' : '[\ß]',
't' : '[\ț\ť]',
'd' : '[\ď\đ]',
"'": '[\ʿ\་\’\`\´\ʻ\`\‘]',
'p': '\р'
}
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for x in substitutions:
batch["sentence"] = re.sub(substitutions[x], x, batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
The model can also be evaluated in 10% chunks, which requires fewer resources (to be tested).
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer
lang_id = "de"
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\�\カ\æ\無\ན\カ\臣\ѹ\…\«\»\ð\ı\„\幺\א\ב\比\ш\ע\)\ứ\в\œ\ч\+\—\ш\‚\נ\м\ń\乡\$\=\ש\ф\支\(\°\и\к\̇]'
substitutions = {
'e' : '[\ə\é\ě\ę\ê\ế\ế\ë\ė\е]',
'o' : '[\ō\ô\ô\ó\ò\ø\ọ\ŏ\õ\ő\о]',
'a' : '[\á\ā\ā\ă\ã\å\â\à\ą\а]',
'c' : '[\č\ć\ç\с]',
'l' : '[\ł]',
'u' : '[\ú\ū\ứ\ů]',
'und' : '[\&]',
'r' : '[\ř]',
'y' : '[\ý]',
's' : '[\ś\š\ș\ş]',
'i' : '[\ī\ǐ\í\ï\î\ï]',
'z' : '[\ź\ž\ź\ż]',
'n' : '[\ñ\ń\ņ]',
'g' : '[\ğ]',
'ss' : '[\ß]',
't' : '[\ț\ť]',
'd' : '[\ď\đ]',
"'": '[\ʿ\་\’\`\´\ʻ\`\‘]',
'p': '\р'
}
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for x in substitutions:
batch["sentence"] = re.sub(substitutions[x], x, batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
# Run batched inference over the preprocessed test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
H, S, D, I = 0, 0, 0, 0
for i in range(10):
    print("test[" + str(10 * i) + "%:" + str(10 * (i + 1)) + "%]")
    test_dataset = load_dataset("common_voice", "de", split="test[" + str(10 * i) + "%:" + str(10 * (i + 1)) + "%]")
    test_dataset = test_dataset.map(speech_file_to_array_fn)
    result = test_dataset.map(evaluate, batched=True, batch_size=8)
    predictions = result["pred_strings"]
    targets = result["sentence"]
    chunk_metrics = jiwer.compute_measures(targets, predictions)
    H = H + chunk_metrics["hits"]
    S = S + chunk_metrics["substitutions"]
    D = D + chunk_metrics["deletions"]
    I = I + chunk_metrics["insertions"]
WER = float(S + D + I) / float(H + S + D)
print("WER: {:2f}".format(WER*100))
```
**Test Result**: 15.80 %
## Training
The first 50% of the Common Voice `train` split and 12% of the `validation` split were used for training (30 epochs on the first 12% and 3 epochs on the remainder).
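For reference, the split slicing described above can be expressed with the `datasets` split syntax (a sketch; whether the 12% was the leading slice is an assumption, and the full training pipeline is not included in the card):
```python
from datasets import load_dataset

# First 50% of train and (assumed) first 12% of validation, per the note above.
train_half = load_dataset("common_voice", "de", split="train[:50%]")
val_slice = load_dataset("common_voice", "de", split="validation[:12%]")
```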
|
manandey/wav2vec2-large-xlsr-mongolian
|
manandey
| 2021-07-06T11:37:29Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mn",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 43.08
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 43.08%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
yobi/klue-roberta-base-sts
|
yobi
| 2021-07-06T11:36:08Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
## Usage
```python
from sentence_transformers import SentenceTransformer, models
embedding_model = models.Transformer("yobi/klue-roberta-base-sts")
pooling_model = models.Pooling(
embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[embedding_model, pooling_model])
model.encode("안녕하세요.", convert_to_tensor=True)
```
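A minimal sketch of scoring sentence similarity with the embeddings above; the two example sentences are illustrative and not from the original card:

```python
import torch.nn.functional as F

# Encode two sentences and compare them with cosine similarity.
emb_a = model.encode("안녕하세요.", convert_to_tensor=True)
emb_b = model.encode("반갑습니다.", convert_to_tensor=True)
score = F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()
print(f"cosine similarity: {score:.4f}")
```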
|
manandey/wav2vec2-large-xlsr-estonian
|
manandey
| 2021-07-06T11:32:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"et",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: et
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Estonian by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice et
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 37.36
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-estonian")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-estonian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "et", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-estonian")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-estonian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\|\।\–\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 37.36%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
manandey/wav2vec2-large-xlsr-assamese
|
manandey
| 2021-07-06T11:22:54Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"as",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: as
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Assamese by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice as
type: common_voice
args: as
metrics:
- name: Test WER
type: wer
value: 74.25
---
# Wav2Vec2-Large-XLSR-53-Assamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "as", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-assamese")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-assamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Assamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "as", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-assamese")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-assamese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\।]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 74.25%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition
|
m3hrdadfi
| 2021-07-06T11:11:59Z | 126 | 9 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"audio",
"automatic-speech-recognition",
"speech",
"speech-emotion-recognition",
"el",
"dataset:aesdd",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: el
datasets:
- aesdd
tags:
- audio
- automatic-speech-recognition
- speech
- speech-emotion-recognition
license: apache-2.0
---
# Emotion Recognition in Greek (el) Speech using Wav2Vec 2.0
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
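Note that `Wav2Vec2ForSpeechClassification` is not a built-in `transformers` class; it is defined in the author's [soxan](https://github.com/m3hrdadfi/soxan) repository. A minimal sketch of such a classification head, assuming mean pooling over time (the original implementation may differ):

```python
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2PreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput

class Wav2Vec2ForSpeechClassification(Wav2Vec2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.wav2vec2 = Wav2Vec2Model(config)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_values, attention_mask=None):
        # Mean-pool frame-level hidden states into a single utterance vector.
        hidden = self.wav2vec2(input_values, attention_mask=attention_mask).last_hidden_state
        pooled = hidden.mean(dim=1)
        return SequenceClassifierOutput(logits=self.classifier(pooled))
```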
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Emotion": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "/path/to/disgust.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Emotion': 'anger', 'Score': '0.0%'},
{'Emotion': 'disgust', 'Score': '99.2%'},
{'Emotion': 'fear', 'Score': '0.1%'},
{'Emotion': 'happiness', 'Score': '0.3%'},
{'Emotion': 'sadness', 'Score': '0.5%'}
]
```
## Evaluation
The following table summarizes the scores obtained by the model overall and per class.
| Emotions | precision | recall | f1-score | accuracy |
|-----------|-----------|--------|----------|----------|
| anger | 0.92 | 1.00 | 0.96 | |
| disgust | 0.85 | 0.96 | 0.90 | |
| fear | 0.88 | 0.88 | 0.88 | |
| happiness | 0.94 | 0.71 | 0.81 | |
| sadness | 0.96 | 1.00 | 0.98 | |
| Overall   |           |        |          | 0.91     |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
|
m3hrdadfi/wav2vec2-large-xlsr-turkish
|
m3hrdadfi
| 2021-07-06T11:07:44Z | 211 | 8 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: Common Voice sample 1378
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-turkish/resolve/main/sample1378.flac
- label: Common Voice sample 1589
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-turkish/resolve/main/sample1589.flac
model-index:
- name: XLSR Wav2Vec2 Turkish by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 27.51
---
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Turkish using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import re
import string
import IPython.display as ipd
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"',
"“", "%", "‘", "�", "–", "…", "_", "”", '“', '„'
]
chars_to_mapping = {
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = text.replace("\u0307", " ").strip()
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish").to(device)
dataset = load_dataset("common_voice", "tr", split="test[:1%]")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 10).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: ülke şu anda iki federasyona üye
predicted: ülke şu anda iki federasyona üye
---
reference: foruma dört yüzde fazla kişi katıldı
predicted: soruma dört yüzden fazla kişi katıldı
---
reference: mobi altmış üç çalışanları da mutsuz
predicted: mobia haltmış üç çalışanları da mutsur
---
reference: kentin mali esnekliğinin düşük olduğu bildirildi
predicted: kentin mali esnekleğinin düşük olduğu bildirildi
---
reference: fouere iki ülkeyi sorunu abartmamaya çağırdı
predicted: foor iki ülkeyi soruna abartmamaya çanayordı
---
reference: o ülkeden herhangi bir tepki geldi mi
predicted: o ülkeden herhayın bir tepki geldi mi
---
reference: bunlara asla sırtımızı dönmeyeceğiz
predicted: bunlara asla sırtımızı dönmeyeceğiz
---
reference: sizi ayakta tutan nedir
predicted: sizi ayakta tutan nedir
---
reference: artık insanlar daha bireysel yaşıyor
predicted: artık insanlar daha bir eyselli yaşıyor
---
reference: her ikisi de diyaloga hazır olduğunu söylüyor
predicted: her ikisi de diyaloğa hazır olduğunu söylüyor
---
reference: merkez bankasının başlıca amacı düşük enflasyon
predicted: merkez bankasının başlrıca anatı güşükyen flasyon
---
reference: firefox
predicted: fair foks
---
reference: ülke halkı çok misafirsever ve dışa dönük
predicted: ülke halktı çok isatirtever ve dışa dönük
---
reference: ancak kamuoyu bu durumu pek de affetmiyor
predicted: ancak kamuonyulgukirmu pek deafıf etmiyor
---
reference: i ki madende iki bin beş yüzden fazla kişi çalışıyor
predicted: i ki madende iki bin beş yüzden fazla kişi çalışıyor
---
reference: sunnyside park dışarıdan oldukça iyi görünüyor
predicted: sani sahip park dışarıdan oldukça iyi görünüyor
---
reference: büyük ödül on beş bin avro
predicted: büyük ödül on beş bin avro
---
reference: köyümdeki camiler depoya dönüştürüldü
predicted: küyümdeki camiler depoya dönüştürüldü
---
reference: maç oldukça diplomatik bir sonuçla birbir bitti
predicted: maç oldukça diplomatik bir sonuçla bir birbitti
---
reference: kuşların ikisi de karantinada öldüler
predicted: kuşların ikiste karantinada özdüler
---
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"',
"“", "%", "‘", "�", "–", "…", "_", "”", '“', '„'
]
chars_to_mapping = {
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
"\u0307": " "
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = text.replace("\u0307", " ").strip()
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
text = re.sub(" +", " ", text)
text = text.strip() + " "
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish").to(device)
dataset = load_dataset("common_voice", "tr", split="test")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result**:
- WER: 27.51%
## Training & Report
The Common Voice `train` and `validation` datasets were used for training.
You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_turkish/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Turkish--Vmlldzo1Njc1MDc?accessToken=02vm5cwbi7d342vyt7h9w9859zex0enltdmjoreyjt3bd5qwv0vs0g3u93iv92q0)
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
|
m3hrdadfi/wav2vec2-large-xlsr-persian-v2
|
m3hrdadfi
| 2021-07-06T10:55:39Z | 369 | 6 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fa",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fa
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: Common Voice sample 4024
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2/resolve/main/sample4024.flac
- label: Common Voice sample 4084
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2/resolve/main/sample4084.flac
model-index:
- name: XLSR Wav2Vec2 Persian (Farsi) V2 by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fa
type: common_voice
args: fa
metrics:
- name: Test WER
type: wer
value: 31.92
---
# Wav2Vec2-Large-XLSR-53-Persian V2
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Persian (Farsi) using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
!pip install hazm
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import hazm
import re
import string
import IPython.display as ipd
_normalizer = hazm.Normalizer()
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?",
".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„',
'ā', 'š',
# "ء",
]
# In case of farsi
chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits)
chars_to_mapping = {
'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی',
'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی",
"ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع",
"ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه",
'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش",
'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ",
# "ها": " ها", "ئ": "ی",
"a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ",
"g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ",
"m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ",
"s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ",
"y": " وای ", "z": " زد ",
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = _normalizer.normalize(text)
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
text = re.sub(" +", " ", text)
text = text.strip() + " "
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2").to(device)
dataset = load_dataset("common_voice", "fa", split="test[:1%]")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: عجم زنده کردم بدین پارسی
predicted: عجم زنده کردم بدین پارسی
---
reference: لباس هایم کی آماده خواهند شد
predicted: لباس خایم کی آماده خواهند شد
---
reference: با مهان همنشین شدم
predicted: با مهان همنشین شدم
---
reference: یکی از بهترین فیلم هایی بود که در این سال ها دیدم
predicted: یکی از بهترین فیلمهایی بود که در این سالها دیدم
---
reference: اون خیلی بد ماساژ میده
predicted: اون خیلی بد ماساژ میده
---
reference: هنوزم بزرگترین دستاورد دولت روحانی اینه که رییسی رییسجمهور نشد
predicted: هنوزم بزرگترین دستآوردار دولت روانیاینه که ریسی ریسیومرو نشد
---
reference: واسه بدنسازی آماده ای
predicted: واسه بعدنسافی آماده ای
---
reference: خدای من شماها سالمین
predicted: خدای من شما ها سالمین
---
reference: بهشون ثابت میشه که دروغ نگفتم
predicted: بهشون ثابت میشه که دروغ مگفتم
---
reference: آیا ممکن است یک پتو برای من بیاورید
predicted: سف کمیتخ لظا
---
reference: نزدیک جلو
predicted: رزیک جلو
---
reference: شایعه پراکن دربارهاش دروغ و شایعه می سازد
predicted: شایه پراکن دربارهاش دروغ و شایعه می سازد
---
reference: وقتی نیاز است که یک چهره دوستانه بیابند
predicted: وقتی نیاز است یک چهره دوستانه بیابند
---
reference: ممکنه رادیواکتیوی چیزی باشه
predicted: ممکنه به آدیوتیوی چیزی باشه
---
reference: دهنتون رو ببندید
predicted: دهن جن رو ببندید
---
reference: پاشیم بریم قند و شکر و روغنمون رو بگیریم تا تموم نشده
predicted: پاشین بریم قند و شکر و روغنمون رو بگیریم تا تموم نشده
---
reference: اما قبل از تمام کردن بحث تاریخی باید ذکری هم از ناپیکس بکنیم
predicted: اما قبل از تمام کردن بحث تاریخی باید ذکری هم از نایپکس بکنیم
---
reference: لطفا کپی امضا شده قرارداد را بازگردانید
predicted: لطفا کپی امضال شده قرار داد را باز گردانید
---
reference: خیلی هم چیز مهمی نیست
predicted: خیلی هم چیز مهمی نیست
---
reference: شایعه پراکن دربارهاش دروغ و شایعه می سازد
predicted: شایه پراکن دربارهاش دروغ و شایعه می سازد
---
```
## Evaluation
The model can be evaluated as follows on the Persian (Farsi) test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import hazm
import re
import string
_normalizer = hazm.Normalizer()
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?",
".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„',
'ā', 'š',
# "ء",
]
# In case of farsi
chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits)
chars_to_mapping = {
'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی',
'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی",
"ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع",
"ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه",
'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش",
'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ",
# "ها": " ها", "ئ": "ی",
"a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ",
"g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ",
"m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ",
"s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ",
"y": " وای ", "z": " زد ",
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = _normalizer.normalize(text)
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
text = re.sub(" +", " ", text)
text = text.strip() + " "
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2").to(device)
dataset = load_dataset("common_voice", "fa", split="test")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result:**
- WER: 31.92%
## Training
The Common Voice `train` and `validation` datasets were used for training.
You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_persian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Persian--Vmlldzo1NjY1NjU?accessToken=pspukt0liicopnwe93wo1ipetqk0gzkuv8669g00wc6hcesk1fh0rfkbd0h46unk)
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Persian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
|
m3hrdadfi/wav2vec2-large-xlsr-estonian
|
m3hrdadfi
| 2021-07-06T10:28:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"et",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: et
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: Common Voice sample 1123
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-estonian/resolve/main/sample1123.flac
- label: Common Voice sample 910
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-estonian/resolve/main/sample910.flac
model-index:
- name: XLSR Wav2Vec2 Estonian by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice et
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 33.93
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Estonian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import re
import string
import IPython.display as ipd
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"',
"“", "%", "‘", "�", "–", "…", "_", "”", '“', '„'
]
chars_to_mapping = {
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = text.replace("\u0307", " ").strip()
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device)
dataset = load_dataset("common_voice", "et", split="test[:1%]")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 10).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: õhulossid lagunevad ning ees ootab maapind
predicted: õhulassid lagunevad ning ees ootab maapind
---
reference: milliseks kiievisse pääsemise nimel võistlev muusik soome muusikamaastiku hetkeseisu hindab ning kas ta ka ennast sellel tulevikus tegutsemas näeb kuuled videost
predicted: milliseks gievisse pääsemise nimel võitlev muusiks soome muusikama aastiku hetke seisu hindab ning kas ta ennast selle tulevikus tegutsemast näeb kuulad videost
---
reference: näiteks kui pool seina on tehtud tekib tunne et tahaks tegelikult natuke teistsugust ja hakkame otsast peale
predicted: näiteks kui pool seine on tehtud tekib tunnetahaks tegelikult matuka teistsugust jahappanna otsast peane
---
reference: neuroesteetilised katsed näitavad et just nägude vaatlemine aktiveerib inimese aju esteetilist keskust
predicted: neuroaisteetiliselt katsed näitaval et just nägude vaatlemine aptiveerid inimese aju est eedilist keskust
---
reference: paljud inimesed kindlasti kadestavad teid kuid ei julge samamoodi vabalt võtta
predicted: paljud inimesed kindlasti kadestavadteid kuid ei julge sama moodi vabalt võtta
---
reference: parem on otsida pileteid inkognito veebi kaudu
predicted: parem on otsida pileteid ning kognitu veebikaudu
---
reference: ja vot siin ma jäin vaikseks
predicted: ja vat siisma ja invaikseks
---
reference: mida sa iseendale juubeli puhul soovid
predicted: mida saise endale jubeli puhul soovid
---
reference: kuumuse ja kõrge temperatuuri tõttu kuivas tühjadel karjamaadel rohi mis muutus kergesti süttivaks
predicted: kuumuse ja kõrge temperatuuri tõttu kuivast ühjadal karjamaadel rohi mis muutus kergesti süttivaks
---
reference: ilmselt on inimesi kelle jaoks on see hea lahendus
predicted: ilmselt on inimesi kelle jaoks on see hea lahendus
---
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"',
"“", "%", "‘", "�", "–", "…", "_", "”", '“', '„'
]
chars_to_mapping = {
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = text.replace("\u0307", " ").strip()
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device)
dataset = load_dataset("common_voice", "et", split="test")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result**:
- WER: 33.93%
## Training & Report
The Common Voice `train` and `validation` datasets were used for training.
You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_estonian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Estonian--Vmlldzo1NjA1MTI?accessToken=k2b2g3a2i12m1sdwf13q8b226pplmmyw12joxo6vk38eb4djellfzmn9fp2725fw)
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Estonian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
|
m3hrdadfi/wav2vec2-base-100k-eating-sound-collection
|
m3hrdadfi
| 2021-07-06T10:26:03Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio",
"automatic-speech-recognition",
"audio-classification",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- audio
- automatic-speech-recognition
- audio-classification
---
# Eating Sound Classification using Wav2Vec 2.0
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-base-100k-eating-sound-collection"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
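As in the Greek emotion-recognition card above, `Wav2Vec2ForSpeechClassification` is a custom head from the author's [soxan](https://github.com/m3hrdadfi/soxan) repository rather than a built-in `transformers` class; see that card for a minimal sketch of such a head.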
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "clips_rd/gummies/gummies_6_04.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Label': 'aloe', 'Score': '0.0%'},
{'Label': 'burger', 'Score': '0.0%'},
{'Label': 'cabbage', 'Score': '0.0%'},
{'Label': 'candied_fruits', 'Score': '0.0%'},
{'Label': 'carrots', 'Score': '0.0%'},
{'Label': 'chips', 'Score': '0.0%'},
{'Label': 'chocolate', 'Score': '0.0%'},
{'Label': 'drinks', 'Score': '0.0%'},
{'Label': 'fries', 'Score': '0.0%'},
{'Label': 'grapes', 'Score': '0.0%'},
{'Label': 'gummies', 'Score': '99.8%'},
{'Label': 'ice-cream', 'Score': '0.0%'},
{'Label': 'jelly', 'Score': '0.1%'},
{'Label': 'noodles', 'Score': '0.0%'},
{'Label': 'pickles', 'Score': '0.0%'},
{'Label': 'pizza', 'Score': '0.0%'},
{'Label': 'ribs', 'Score': '0.0%'},
{'Label': 'salmon', 'Score': '0.0%'},
{'Label': 'soup', 'Score': '0.0%'},
{'Label': 'wings', 'Score': '0.0%'}
]
```
## Evaluation
The following table summarizes the scores obtained by the model overall and per class.
| label | precision | recall | f1-score | support |
|:--------------:|:---------:|:------:|:--------:|:-------:|
| aloe | 0.989 | 0.807 | 0.889 | 109 |
| burger | 1.000 | 0.471 | 0.640 | 119 |
| cabbage | 0.907 | 0.970 | 0.937 | 100 |
| candied_fruits | 0.952 | 0.988 | 0.970 | 161 |
| carrots | 0.970 | 0.992 | 0.981 | 132 |
| chips | 0.993 | 0.951 | 0.972 | 144 |
| chocolate | 0.828 | 0.914 | 0.869 | 58 |
| drinks | 0.982 | 0.948 | 0.965 | 58 |
| fries | 0.935 | 0.783 | 0.852 | 129 |
| grapes | 0.965 | 0.940 | 0.952 | 116 |
| gummies | 0.880 | 0.971 | 0.923 | 136 |
| ice-cream | 0.953 | 0.972 | 0.962 | 145 |
| jelly | 0.906 | 0.875 | 0.890 | 88 |
| noodles | 0.817 | 0.817 | 0.817 | 82 |
| pickles | 0.933 | 0.960 | 0.946 | 174 |
| pizza | 0.704 | 0.934 | 0.803 | 122 |
| ribs | 0.796 | 0.755 | 0.775 | 98 |
| salmon | 0.647 | 0.970 | 0.776 | 100 |
| soup | 0.941 | 0.857 | 0.897 | 56 |
| wings | 0.842 | 0.792 | 0.816 | 101 |
| accuracy | 0.890 | 0.890 | 0.890 | 0 |
| macro avg | 0.897 | 0.883 | 0.882 | 2228 |
| weighted avg | 0.903 | 0.890 | 0.888 | 2228 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
|
kmfoda/wav2vec2-large-xlsr-arabic
|
kmfoda
| 2021-07-06T09:45:10Z | 23 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ar",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Arabic by Othmane Rifki
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 46.77
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
test_dataset = test_dataset.map(prepare_example)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
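The `resamplers` dict above covers the three sampling rates present in this test split; a more defensive variant (a sketch, not from the original card) creates resamplers lazily for whatever rate is encountered:

```python
import torchaudio

_resamplers = {}

def get_resampler(orig_freq, new_freq=16_000):
    # Cache one Resample transform per source sampling rate.
    if orig_freq not in _resamplers:
        _resamplers[orig_freq] = torchaudio.transforms.Resample(orig_freq, new_freq)
    return _resamplers[orig_freq]
```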
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\؟\_\؛\ـ\—]'
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
test_dataset = test_dataset.map(prepare_example)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.53%
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://huggingface.co/kmfoda/wav2vec2-large-xlsr-arabic/tree/main)
|
joaoalvarenga/wav2vec2-large-xlsr-portuguese
|
joaoalvarenga
| 2021-07-06T09:30:27Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 13.766801%
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
You need to install Enelvo, an open-source spell corrector trained on Twitter user posts:
`pip install enelvo`
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from enelvo import normaliser
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
norm = normaliser.Normaliser()
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = [norm.normalise(i) for i in processor.batch_decode(pred_ids)]
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 13.766801%
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
|
joaoalvarenga/wav2vec2-large-xlsr-italian
|
joaoalvarenga
| 2021-07-06T09:16:35Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"it",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: it
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- it
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Italian
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice it
type: common_voice
args: it
metrics:
- name: Test WER
type: wer
value: 13.914924%
---
# Wav2Vec2-Large-XLSR-53-Italian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "it", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Italian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 13.914924%
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-italian/blob/main/fine_tuning.py
|
joaoalvarenga/wav2vec2-large-xlsr-53-spanish
|
joaoalvarenga
| 2021-07-06T09:14:19Z | 0 | 0 | null |
[
"audio",
"speech",
"wav2vec2",
"es",
"apache-2.0",
"spanish-speech-corpus",
"automatic-speech-recognition",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: es
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- es
- apache-2.0
- spanish-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Spanish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ES
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: Training
---
# Wav2Vec2-Large-XLSR-53-Spanish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "es", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Spanish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "es", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: Training
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-spanish/blob/main/fine-tuning.py
|
joaoalvarenga/wav2vec2-cv-coral-30ep
|
joaoalvarenga
| 2021-07-06T09:07:11Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 15.037146%
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 15.037146%
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
|
jaimin/wav2vec2-base-gujarati-demo
|
jaimin
| 2021-07-06T06:37:37Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:google",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: gu
datasets:
- google
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Guj by Jaimin
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Google
type: voice
args: guj
metrics:
- name: Test WER
type: wer
value: 28.92
---
# wav2vec2-base-gujarati-demo
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Gujarati.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
common_voice_train, common_voice_test = load_dataset('csv', data_files={'train': 'train.csv', 'test': 'test.csv'}, error_bad_lines=False, encoding='utf-8', split=['train', 'test'])
processor = Wav2Vec2Processor.from_pretrained("jaimin/wav2vec2-base-gujarati-demo")
model = Wav2Vec2ForCTC.from_pretrained("jaimin/wav2vec2-base-gujarati-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = common_voice_test.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][0].lower())
```
## Evaluation
The model can be evaluated as follows on the Gujarati validation data.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
common_voice_validation = load_dataset('csv', data_files={'test': 'validation.csv'},error_bad_lines=False,encoding='utf-8',split='test')
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("jaimin/wav2vec2-base-gujarati-demo")
model = Wav2Vec2ForCTC.from_pretrained("jaimin/wav2vec2-base-gujarati-demo")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = common_voice_validation.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = common_voice_validation.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 28.92 %
## Training
The Google datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing)
|
infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil
|
infinitejoy
| 2021-07-06T06:33:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ta
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Tamil
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 71.29
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
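If your own recordings are not at 16 kHz, here is a minimal resampling sketch with torchaudio (the file name is a placeholder):
```python
import torchaudio

speech_array, sampling_rate = torchaudio.load("example.wav")  # placeholder path
if sampling_rate != 16_000:
    # Resample from whatever rate the file has to the 16 kHz the model expects.
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
```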
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 71.29 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
huggingtweets/atticscientist
|
huggingtweets
| 2021-07-06T06:25:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/atticscientist/1625552752637/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1323206624765448197/eqBniY_E_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AtticScientist</div>
<div style="text-align: center; font-size: 14px;">@atticscientist</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AtticScientist.
| Data | AtticScientist |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 1 |
| Short tweets | 8 |
| Tweets kept | 3241 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pvpbxir/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atticscientist's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ktckeg7n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ktckeg7n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/atticscientist')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese
|
infinitejoy
| 2021-07-06T06:20:06Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"as",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: as
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Assamese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice as
type: common_voice
args: as
metrics:
- name: Test WER
type: wer
value: 69.63
---
# Wav2Vec2-Large-XLSR-53-Assamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "as", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Assamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "as", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 69.63 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
huggingtweets/chrmanning
|
huggingtweets
| 2021-07-06T06:17:55Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/chrmanning/1625552271211/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/512256295542333440/8Jo4w8kV_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Christopher Manning</div>
<div style="text-align: center; font-size: 14px;">@chrmanning</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Christopher Manning.
| Data | Christopher Manning |
| --- | --- |
| Tweets downloaded | 1115 |
| Retweets | 428 |
| Short tweets | 57 |
| Tweets kept | 630 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ik3m24hb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrmanning's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1rlj5183) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1rlj5183/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chrmanning')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
iarfmoose/wav2vec2-large-xlsr-kyrgyz
|
iarfmoose
| 2021-07-06T05:57:02Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ky",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ky
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Kyrgyz by Adam Montgomerie
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ky
type: common_voice
args: ky
metrics:
- name: Test WER
type: wer
value: 34.71
---
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ky", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ky", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\—\¬\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.71 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Kyrgyz/XLSR_Kyrgyz.ipynb)
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Kyrgyz/wav2vec2_ky_eval.ipynb)
|
iarfmoose/wav2vec2-large-xlsr-frisian
|
iarfmoose
| 2021-07-06T05:51:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fy-NL
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Frisian by Adam Montgomerie
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fy-NL
type: common_voice
args: fy-NL
metrics:
- name: Test WER
type: wer
value: 21.72
---
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\–\\—\\¬\\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 21.72 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Frisian/XLSR_Frisian.ipynb)
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Frisian/wav2vec2_fyNL_eval.ipynb)
|
gvs/wav2vec2-large-xlsr-malayalam
|
gvs
| 2021-07-06T05:44:26Z | 81,549 | 5 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ml",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ml
datasets:
- Indic TTS Malayalam Speech Corpus
- Openslr Malayalam Speech Corpus
- SMC Malayalam Speech Corpus
- IIIT-H Indic Speech Databases
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Malayalam XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Test split of combined dataset using all datasets mentioned above
type: custom
args: ml
metrics:
- name: Test WER
type: wer
value: 28.43
---
# Wav2Vec2-Large-XLSR-53-ml
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on ml (Malayalam) using the [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The notebooks used to train model are available [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = <load-test-split-of-combined-dataset> # Details on loading this dataset in the evaluation section
processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"])
```
## Evaluation
The model can be evaluated as follows on the test data of combined custom dataset. For more details on dataset preparation, check the notebooks mentioned at the end of this file.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from datasets import concatenate_datasets
from pathlib import Path
# The custom dataset needs to be created using notebook mentioned at the end of this file
data_dir = Path('<path-to-custom-dataset>')
dataset_folders = {
'iiit': 'iiit_mal_abi',
'openslr': 'openslr',
'indic-tts': 'indic-tts-ml',
'msc-reviewed': 'msc-reviewed-speech-v1.0+20200825',
}
# Set directories for datasets
openslr_male_dir = data_dir / dataset_folders['openslr'] / 'male'
openslr_female_dir = data_dir / dataset_folders['openslr'] / 'female'
iiit_dir = data_dir / dataset_folders['iiit']
indic_tts_male_dir = data_dir / dataset_folders['indic-tts'] / 'male'
indic_tts_female_dir = data_dir / dataset_folders['indic-tts'] / 'female'
msc_reviewed_dir = data_dir / dataset_folders['msc-reviewed']
# Load the datasets
openslr_male = load_dataset("json", data_files=[f"{str(openslr_male_dir.absolute())}/sample_{i}.json" for i in range(2023)], split="train")
openslr_female = load_dataset("json", data_files=[f"{str(openslr_female_dir.absolute())}/sample_{i}.json" for i in range(2103)], split="train")
iiit = load_dataset("json", data_files=[f"{str(iiit_dir.absolute())}/sample_{i}.json" for i in range(1000)], split="train")
indic_tts_male = load_dataset("json", data_files=[f"{str(indic_tts_male_dir.absolute())}/sample_{i}.json" for i in range(5649)], split="train")
indic_tts_female = load_dataset("json", data_files=[f"{str(indic_tts_female_dir.absolute())}/sample_{i}.json" for i in range(2950)], split="train")
msc_reviewed = load_dataset("json", data_files=[f"{str(msc_reviewed_dir.absolute())}/sample_{i}.json" for i in range(1541)], split="train")
# Create test split as 20%, set random seed as well.
test_size = 0.2
random_seed=1
openslr_male_splits = openslr_male.train_test_split(test_size=test_size, seed=random_seed)
openslr_female_splits = openslr_female.train_test_split(test_size=test_size, seed=random_seed)
iiit_splits = iiit.train_test_split(test_size=test_size, seed=random_seed)
indic_tts_male_splits = indic_tts_male.train_test_split(test_size=test_size, seed=random_seed)
indic_tts_female_splits = indic_tts_female.train_test_split(test_size=test_size, seed=random_seed)
msc_reviewed_splits = msc_reviewed.train_test_split(test_size=test_size, seed=random_seed)
# Get combined test dataset
split_list = [openslr_male_splits, openslr_female_splits, indic_tts_male_splits, indic_tts_female_splits, msc_reviewed_splits, iiit_splits]
test_dataset = concatenate_datasets([split['test'] for split in split_list])
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model.to("cuda")
resamplers = {
48000: torchaudio.transforms.Resample(48_000, 16_000),
}
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[\u200e]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
batch["sentence"] = re.sub(unicode_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
# Resample if it's not already at 16 kHz
if sampling_rate != 16000:
batch["speech"] = resamplers[sampling_rate](speech_array).squeeze().numpy()
else:
batch["speech"] = speech_array.squeeze().numpy()
# If more than one dimension is present, pick first one
if batch["speech"].ndim > 1:
batch["speech"] = batch["speech"][0]
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (WER)**: 28.43 %
## Training
A combined dataset was created using [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The datasets were downloaded and was converted to HF Dataset format using [this notebook](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/make_hf_dataset.ipynb)
The notebook used for training and evaluation can be found [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/fine-tune-xlsr-wav2vec2-on-malayalam-asr-with-transformers_v2.ipynb)
|
gchhablani/wav2vec2-large-xlsr-or
|
gchhablani
| 2021-07-06T05:17:20Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"or",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: or
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Odia by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice or
type: common_voice
args: or
metrics:
- name: Test WER
type: wer
value: 52.64
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-or")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-or")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…\'\_\’\।\|]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.64 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The Colab notebook used can be found [here](https://colab.research.google.com/drive/1s8DrwgB5y4Z7xXIrPXo1rQA5_1OZ8WD5?usp=sharing).
|
gchhablani/wav2vec2-large-xlsr-gu
|
gchhablani
| 2021-07-06T04:38:17Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"gu",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: gu
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Gujarati by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR gu
type: openslr
metrics:
- name: Test WER
type: wer
value: 23.55
---
# Wav2Vec2-Large-XLSR-53-Gujarati
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Gujarati using the [OpenSLR SLR78](http://openslr.org/78/) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Gujarati `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset_eval = # TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET.
# For sample see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # The original data was with 48,000 sampling rate. You can change it according to your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset_eval = test_dataset_eval.map(speech_file_to_array_fn)
inputs = processor(test_dataset_eval["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset_eval["sentence"][:2])
```
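One possible way to build `test_dataset_eval` for the snippet above, assuming a CSV export with `path` and `sentence` columns (the file name is hypothetical; see the Colab notebook in the Training section for the exact loading code):
```python
from datasets import load_dataset

# Hypothetical CSV export of the OpenSLR test split with 'path' and 'sentence' columns.
test_dataset_eval = load_dataset("csv", data_files={"test": "test.csv"}, split="test")
```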
## Evaluation
The model can be evaluated as follows on 10% of the Gujarati data on OpenSLR.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…\'\_\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 23.55 %
## Training
90% of the OpenSLR Gujarati Male+Female dataset was used for training, after removing a few examples that contained Roman characters (a sketch of such a filter is shown below).
The colab notebook used for training can be found [here](https://colab.research.google.com/drive/1fRQlgl4EPR4qKGScgza3MpWgbL5BeWtn?usp=sharing).
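A minimal sketch of the Roman-character filter described above (the exact rule used in the notebook may differ):
```python
import re

def contains_roman_chars(example):
    # Flag transcriptions that contain any Latin-script letters.
    return re.search(r"[A-Za-z]", example["sentence"]) is not None

# `dataset` stands for the loaded OpenSLR Gujarati dataset:
# dataset = dataset.filter(lambda ex: not contains_roman_chars(ex))
```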
|
gagan3012/wav2vec2-xlsr-khmer
|
gagan3012
| 2021-07-06T03:58:05Z | 122 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"km",
"dataset:OpenSLR",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: km
datasets:
- OpenSLR
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-xlsr-Khmer by Gagan Bhatia
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR km
type: OpenSLR
args: km
metrics:
- name: Test WER
type: wer
value: 24.96
---
# Wav2Vec2-Large-XLSR-53-khmer
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Khmer using the [Common Voice](https://huggingface.co/datasets/common_voice) and [OpenSLR Khmer](http://www.openslr.org/42/) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
import pandas as pd
from sklearn.model_selection import train_test_split
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
!wget https://www.openslr.org/resources/42/km_kh_male.zip
!unzip km_kh_male.zip
!ls km_kh_male
colnames=['path','sentence']
df = pd.read_csv('/content/km_kh_male/line_index.tsv', sep='\t', header=None, names=colnames)
df['path'] = '/content/km_kh_male/wavs/'+df['path'] +'.wav'
train, test = train_test_split(df, test_size=0.1)
test.to_csv('/content/km_kh_male/line_index_test.csv')
test_dataset = load_dataset('csv', data_files='/content/km_kh_male/line_index_test.csv',split = 'train')
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-khmer")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-khmer")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
#### Result
Prediction: ['पारानाको ब्राजिली राज्यमा रहेको राजधानी', 'देवराज जोशी त्रिभुवन विश्वविद्यालयबाट शिक्षाशास्त्रमा स्नातक हुनुहुन्छ']
Reference: ['पारानाको ब्राजिली राज्यमा रहेको राजधानी', 'देवराज जोशी त्रिभुवन विश्वविद्यालयबाट शिक्षाशास्त्रमा स्नातक हुनुहुन्छ']
## Evaluation
The model can be evaluated as follows on the 10% test split held out from the OpenSLR Khmer data.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from sklearn.model_selection import train_test_split
import pandas as pd
from datasets import load_dataset
!wget https://www.openslr.org/resources/42/km_kh_male.zip
!unzip km_kh_male.zip
!ls km_kh_male
colnames=['path','sentence']
df = pd.read_csv('/content/km_kh_male/line_index.tsv', sep='\t', header=None, names=colnames)
df['path'] = '/content/km_kh_male/wavs/'+df['path'] +'.wav'
train, test = train_test_split(df, test_size=0.1)
test.to_csv('/content/km_kh_male/line_index_test.csv')
test_dataset = load_dataset('csv', data_files='/content/km_kh_male/line_index_test.csv',split = 'train')
wer = load_metric("wer")
cer = load_metric("cer")
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-khmer")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-khmer")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["text"])))
print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["text"])))
```
**Test Result**: 24.96 %
WER: 24.962519
CER: 6.950925
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1yo_OTMH8FHQrAKCkKdQGMqpkj-kFhS_2?usp=sharing)
|
gagan3012/wav2vec2-xlsr-chuvash
|
gagan3012
| 2021-07-06T03:45:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"cv",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: cv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-xlsr-chuvash by Gagan Bhatia
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cv
type: common_voice
args: cv
metrics:
- name: Test WER
type: wer
value: 48.40
---
# Wav2Vec2-Large-XLSR-53-Chuvash
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chuvash using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cv", split="test")
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-chuvash")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-chuvash")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
#### Results:
Prediction: ['проектпа килӗшӳллӗн тӗлӗ мероприяти иртермелле', 'твăра çак планета минтӗ пуяни калленнана']
Reference: ['Проектпа килӗшӳллӗн, тӗрлӗ мероприяти ирттермелле.', 'Çак планета питĕ пуян иккен.']
## Evaluation
The model can be evaluated as follows on the Chuvash test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
!mkdir cer
!wget -O cer/cer.py https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese/raw/main/cer.py
test_dataset = load_dataset("common_voice", "cv", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
cer = load_metric("cer")
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-chuvash")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-chuvash")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 48.40 %
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1A7Y20c1QkSHfdOmLXPMiOEpwlTjDZ7m5?usp=sharing)
|
facebook/wav2vec2-large-xlsr-53-polish
|
facebook
| 2021-07-06T02:58:29Z | 45 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"nl",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pl
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
## Evaluation on Common Voice PL Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "facebook/wav2vec2-large-xlsr-53-polish"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "pl", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 24.6 %
|
facebook/wav2vec2-large-xlsr-53-german
|
facebook
| 2021-07-06T02:46:28Z | 8,487 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"de",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: de
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
## Evaluation on Common Voice DE Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "facebook/wav2vec2-large-xlsr-53-german"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "de", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 18.5 %
|
facebook/wav2vec2-large-sv-voxpopuli
|
facebook
| 2021-07-06T02:30:55Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"sv",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: sv
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the sv unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
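As a minimal sketch of that substitution (an illustration, not part of the original card), loading this checkpoint with a freshly initialized CTC head could look as follows; `vocab_size=32` is a placeholder that depends on the character vocabulary of your fine-tuning dataset:
```python
from transformers import Wav2Vec2ForCTC

# Load this pretrained checkpoint in place of "facebook/wav2vec2-large-xlsr-53".
# The CTC head is newly initialized; vocab_size must match your own tokenizer.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-sv-voxpopuli",
    ctc_loss_reduction="mean",
    vocab_size=32,  # placeholder
)
model.freeze_feature_extractor()  # common first step when fine-tuning wav2vec2
```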
|
facebook/wav2vec2-large-fr-voxpopuli
|
facebook
| 2021-07-06T02:11:48Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"fr",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fr
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the fr unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
facebook/wav2vec2-large-10k-voxpopuli
|
facebook
| 2021-07-06T01:57:22Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"multilingual",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: multilingual
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the 10k unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
facebook/wav2vec2-base-sv-voxpopuli
|
facebook
| 2021-07-06T01:55:30Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"sv",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: sv
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the sv unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
facebook/wav2vec2-base-10k-voxpopuli
|
facebook
| 2021-07-06T01:53:26Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"multilingual",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: multilingual
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10k unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
facebook/wav2vec2-base-10k-voxpopuli-ft-pl
|
facebook
| 2021-07-06T01:52:01Z | 110 | 3 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"pl",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pl
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in pl (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-pl")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-pl")
# load dataset
ds = load_dataset("common_voice", "pl", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
facebook/wav2vec2-base-10k-voxpopuli-ft-nl
|
facebook
| 2021-07-06T01:51:40Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"nl",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: nl
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in nl (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-nl")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-nl")
# load dataset
ds = load_dataset("common_voice", "nl", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
facebook/wav2vec2-base-10k-voxpopuli-ft-it
|
facebook
| 2021-07-06T01:51:18Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"it",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: it
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in it (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-it")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-it")
# load dataset
ds = load_dataset("common_voice", "it", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
facebook/wav2vec2-base-10k-voxpopuli-ft-hu
|
facebook
| 2021-07-06T01:50:56Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"hu",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: hu
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in hu (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-hu")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-hu")
# load dataset
ds = load_dataset("common_voice", "hu", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
facebook/wav2vec2-base-10k-voxpopuli-ft-fi
|
facebook
| 2021-07-06T01:49:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"fi",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fi
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in fi (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-fi")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-fi")
# load dataset
ds = load_dataset("common_voice", "fi", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
facebook/wav2vec2-base-10k-voxpopuli-ft-de
|
facebook
| 2021-07-06T01:48:44Z | 30 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"de",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: de
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in de (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-de")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-de")
# load dataset
ds = load_dataset("common_voice", "de", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
ParkMyungkyu/KLUE-STS-roberta-base
|
ParkMyungkyu
| 2021-07-06T01:47:59Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ParkMyungkyu/KLUE-STS-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ParkMyungkyu/KLUE-STS-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
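Since this model targets semantic textual similarity (KLUE-STS), here is a quick sketch (an illustrative addition, not from the original card) of scoring a sentence pair:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('ParkMyungkyu/KLUE-STS-roberta-base')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"], convert_to_tensor=True)
# Cosine similarity lies in [-1, 1]; higher means the sentences are more similar.
print(util.pytorch_cos_sim(embeddings[0], embeddings[1]).item())
```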
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ParkMyungkyu/KLUE-STS-roberta-base')
model = AutoModel.from_pretrained('ParkMyungkyu/KLUE-STS-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ParkMyungkyu/KLUE-STS-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 146,
"weight_decay": 0.01
}
```
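Put together, a minimal sketch of how a run with these parameters is typically invoked; this is reconstructed from the values above rather than taken from the original training script, and the base checkpoint `klue/roberta-base` and the toy training pair are assumptions:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base checkpoint; SentenceTransformer adds mean pooling automatically,
# matching the architecture shown below.
model = SentenceTransformer('klue/roberta-base')
# Toy stand-in for the KLUE-STS training pairs (label = similarity score in [0, 1]).
train_examples = [InputExample(texts=["문장 A", "문장 B"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    evaluation_steps=1000,
    warmup_steps=146,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```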
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
elgeish/wav2vec2-large-xlsr-53-levantine-arabic
|
elgeish
| 2021-07-06T01:43:32Z | 18 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"ar",
"dataset:arabic_speech_corpus",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- arabic_speech_corpus
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Arabic Speech Corpus dataset](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
dataset = load_dataset("arabic_speech_corpus", split="test") # "test[:n]" for n examples
processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model.eval()
def prepare_example(example):
example["speech"], _ = librosa.load(example["file"], sr=16000)
example["text"] = example["text"].replace("-", " ").replace("^", "v")
example["text"] = " ".join(w for w in example["text"].split() if w != "sil")
return example
dataset = dataset.map(prepare_example, remove_columns=["file", "orthographic", "phonetic"])
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding="longest")
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.tokenizer.batch_decode(predicted)
return batch
dataset = dataset.map(predict, batched=True, batch_size=1, remove_columns=["speech"])
for reference, predicted in zip(dataset["text"], dataset["predicted"]):
print("reference:", reference)
print("predicted:", predicted)
print("reference (untransliterated):", buckwalter.untrans(reference))
print("predicted (untransliterated):", buckwalter.untrans(predicted))
print("--")
```
Here's the output:
```
reference: >atAHat lilbA}iEi lmutajaw~ili >an yakuwna jA*iban lilmuwATini l>aqal~i daxlan
predicted: >ataAHato lilobaA}iEi Alomutajaw~ili >ano yakuwna jaA*ibAF lilomuwaATini Alo>aqal~i daxolAF
reference (untransliterated): أَتاحَت لِلبائِعِ لمُتَجَوِّلِ أَن يَكُونَ جاذِبَن لِلمُواطِنِ لأَقَلِّ دَخلَن
predicted (untransliterated): أَتَاحَتْ لِلْبَائِعِ الْمُتَجَوِّلِ أَنْ يَكُونَ جَاذِباً لِلْمُوَاطِنِ الْأَقَلِّ دَخْلاً
--
reference: >aHrazat muntaxabAtu lbarAziyli wa>lmAnyA waruwsyA fawzan fiy muqAbalAtihim l<iEdAdiy~api l~atiy >uqiymat istiEdAdan linihA}iy~Ati ka>si lEAlam >al~atiy satanTaliqu baEda >aqal~i min >usbuwE
predicted: >aHorazato munotaxabaAtu AlobaraAziyli wa>alomaAnoyaA waruwsoyaA fawozAF fiy muqaAbalaAtihimo >aliEodaAdiy~api Al~atiy >uqiymat AsotiEodaAdAF linahaA}iy~aAti ka>osi AloEaAlamo >al~atiy satanoTaliqu baEoda >aqal~i mino >usobuwEo
reference (untransliterated): أَحرَزَت مُنتَخَباتُ لبَرازِيلِ وَألمانيا وَرُوسيا فَوزَن فِي مُقابَلاتِهِم لإِعدادِيَّةِ لَّتِي أُقِيمَت ِستِعدادَن لِنِهائِيّاتِ كَأسِ لعالَم أَلَّتِي سَتَنطَلِقُ بَعدَ أَقَلِّ مِن أُسبُوع
predicted (untransliterated): أَحْرَزَتْ مُنْتَخَبَاتُ الْبَرَازِيلِ وَأَلْمَانْيَا وَرُوسْيَا فَوْزاً فِي مُقَابَلَاتِهِمْ أَلِعْدَادِيَّةِ الَّتِي أُقِيمَت اسْتِعْدَاداً لِنَهَائِيَّاتِ كَأْسِ الْعَالَمْ أَلَّتِي سَتَنْطَلِقُ بَعْدَ أَقَلِّ مِنْ أُسْبُوعْ
--
reference: >axfaqa majlisu ln~uw~Abi ll~ubnAniy~u fiy xtiyAri ra}iysin jadiydin lilbilAdi xalafan lilr~a}iysi lHAliy~i l~a*iy tantahiy wilAyatuhu fiy lxAmisi wAlEi$riyn min mAyuw >ayAra lmuqbil
predicted: >axofaqa majolisu Aln~uw~aAbi All~ubonaAniy~u fiy AxotiyaAri ra}iysK jadiydK lilobilaAdi xalafAF lilr~a}iysi AloHaAliy~i Al~a*iy tanotahiy wilaAyatuhu fiy AloxaAmisi waAloEi$oriyno mino maAyuw >ay~aAra Alomuqobilo
reference (untransliterated): أَخفَقَ مَجلِسُ لنُّوّابِ للُّبنانِيُّ فِي ختِيارِ رَئِيسِن جَدِيدِن لِلبِلادِ خَلَفَن لِلرَّئِيسِ لحالِيِّ لَّذِي تَنتَهِي وِلايَتُهُ فِي لخامِسِ والعِشرِين مِن مايُو أَيارَ لمُقبِل
predicted (untransliterated): أَخْفَقَ مَجْلِسُ النُّوَّابِ اللُّبْنَانِيُّ فِي اخْتِيَارِ رَئِيسٍ جَدِيدٍ لِلْبِلَادِ خَلَفاً لِلرَّئِيسِ الْحَالِيِّ الَّذِي تَنْتَهِي وِلَايَتُهُ فِي الْخَامِسِ وَالْعِشْرِينْ مِنْ مَايُو أَيَّارَ الْمُقْبِلْ
--
reference: <i* sayaHDuru liqA'a ha*A lEAmi xamsun wavalAvuwna minhum
predicted: <i*o sayaHoDuru riqaA'a ha*aA AloEaAmi xamosN wa valaAvuwna minohumo
reference (untransliterated): إِذ سَيَحضُرُ لِقاءَ هَذا لعامِ خَمسُن وَثَلاثُونَ مِنهُم
predicted (untransliterated): إِذْ سَيَحْضُرُ رِقَاءَ هَذَا الْعَامِ خَمْسٌ وَ ثَلَاثُونَ مِنْهُمْ
--
reference: >aElanati lHukuwmapu lmiSriy~apu Ean waqfi taqdiymi ld~aEmi ln~aqdiy~i limuzAriEiy lquTni <iEtibAran mina lmuwsimi lz~irAEiy~i lmuqbil
predicted: >aEolanati AloHukuwmapu AlomiSoriy~apu Eano waqofi taqodiymi Ald~aEomi Aln~aqodiy~i limuzaAriEiy AloquToni <iEotibaArAF mina Alomuwsimi Alz~iraAEiy~i Alomuqobilo
reference (untransliterated): أَعلَنَتِ لحُكُومَةُ لمِصرِيَّةُ عَن وَقفِ تَقدِيمِ لدَّعمِ لنَّقدِيِّ لِمُزارِعِي لقُطنِ إِعتِبارَن مِنَ لمُوسِمِ لزِّراعِيِّ لمُقبِل
predicted (untransliterated): أَعْلَنَتِ الْحُكُومَةُ الْمِصْرِيَّةُ عَنْ وَقْفِ تَقْدِيمِ الدَّعْمِ النَّقْدِيِّ لِمُزَارِعِي الْقُطْنِ إِعْتِبَاراً مِنَ الْمُوسِمِ الزِّرَاعِيِّ الْمُقْبِلْ
--
reference: >aElanat wizArapu lSi~Ha~pi lsa~Euwdiya~pu lyawma Ean wafAtayni jadiydatayni biAlfayruwsi lta~Ajiyi kuwruwnA nuwfil
predicted: >aEolanato wizaArapu AlS~iH~api Als~aEuwdiy~apu Aloyawoma Eano wafaAtayoni jadiydatayoni biAlofayoruwsi Alt~aAjiy kuwruwnaA nuwfiylo
reference (untransliterated): أَعلَنَت وِزارَةُ لصِّحَّةِ لسَّعُودِيَّةُ ليَومَ عَن وَفاتَينِ جَدِيدَتَينِ بِالفَيرُوسِ لتَّاجِيِ كُورُونا نُوفِل
predicted (untransliterated): أَعْلَنَتْ وِزَارَةُ الصِّحَّةِ السَّعُودِيَّةُ الْيَوْمَ عَنْ وَفَاتَيْنِ جَدِيدَتَيْنِ بِالْفَيْرُوسِ التَّاجِي كُورُونَا نُوفِيلْ
--
reference: <iftutiHati ljumuEapa faE~Aliy~Atu ld~awrapi lr~AbiEapa Ea$rapa mina lmihrajAni ld~awliy~i lilfiylmi bimur~Aki$
predicted: <ifotutiHapi AlojumuwEapa faEaAliyaAtu Ald~aworapi Alr~aAbiEapa Ea$orapa miyna AlomihorajaAni Ald~awoliy~i lilofiylomi bimur~Aki$
reference (untransliterated): إِفتُتِحَتِ لجُمُعَةَ فَعّالِيّاتُ لدَّورَةِ لرّابِعَةَ عَشرَةَ مِنَ لمِهرَجانِ لدَّولِيِّ لِلفِيلمِ بِمُرّاكِش
predicted (untransliterated): إِفْتُتِحَةِ الْجُمُوعَةَ فَعَالِيَاتُ الدَّوْرَةِ الرَّابِعَةَ عَشْرَةَ مِينَ الْمِهْرَجَانِ الدَّوْلِيِّ لِلْفِيلْمِ بِمُرّاكِش
--
reference: >ak~adat Ea$ru duwalin Earabiy~apin $Arakati lxamiysa lmADiya fiy jtimAEi jd~ap muwAfaqatahA EalY l<inDimAmi <ilY Hilfin maEa lwilAyAti lmut~aHidapi li$an~i Hamlapin Easkariy~apin munas~aqapin Did~a tanZiymi >ald~awlapi l<islAmiy~api
predicted: >ak~adato Ea$oru duwalK Earabiy~apK $aArakapiy Aloxamiysa AlomaADiya fiy AjotimaAEi jad~ap muwaAfaqatahaA EalaY Alo<inoDimaAmi <ilaY HilofK maEa AlowilaAyaAti Alomut~aHidapi li$an~i HamolapK Easokariy~apK munas~aqapK id~a tanoZiymi Ald~awolapi Alo<isolaAmiy~api
reference (untransliterated): أَكَّدَت عَشرُ دُوَلِن عَرَبِيَّةِن شارَكَتِ لخَمِيسَ لماضِيَ فِي جتِماعِ جدَّة مُوافَقَتَها عَلى لإِنضِمامِ إِلى حِلفِن مَعَ لوِلاياتِ لمُتَّحِدَةِ لِشَنِّ حَملَةِن عَسكَرِيَّةِن مُنَسَّقَةِن ضِدَّ تَنظِيمِ أَلدَّولَةِ لإِسلامِيَّةِ
predicted (untransliterated): أَكَّدَتْ عَشْرُ دُوَلٍ عَرَبِيَّةٍ شَارَكَةِي الْخَمِيسَ الْمَاضِيَ فِي اجْتِمَاعِ جَدَّة مُوَافَقَتَهَا عَلَى الْإِنْضِمَامِ إِلَى حِلْفٍ مَعَ الْوِلَايَاتِ الْمُتَّحِدَةِ لِشَنِّ حَمْلَةٍ عَسْكَرِيَّةٍ مُنَسَّقَةٍ ِدَّ تَنْظِيمِ الدَّوْلَةِ الْإِسْلَامِيَّةِ
--
reference: <iltaHaqa luwkA ziydAna <ibnu ln~ajmi ld~awliy~i lfaransiy~i ljazA}iriy~i l>Sli zayni ld~iyni ziydAn biAlfariyq
predicted: <ilotaHaqa luwkaA ziydaAna <ibonu Aln~ajomi Ald~awoliy~i Alofaranosiy~i AlojazaA}iriy~i Alo>aSoli zayoni Ald~iyni zayodaAno biAlofariyqo
reference (untransliterated): إِلتَحَقَ لُوكا زِيدانَ إِبنُ لنَّجمِ لدَّولِيِّ لفَرَنسِيِّ لجَزائِرِيِّ لأصلِ زَينِ لدِّينِ زِيدان بِالفَرِيق
predicted (untransliterated): إِلْتَحَقَ لُوكَا زِيدَانَ إِبْنُ النَّجْمِ الدَّوْلِيِّ الْفَرَنْسِيِّ الْجَزَائِرِيِّ الْأَصْلِ زَيْنِ الدِّينِ زَيْدَانْ بِالْفَرِيقْ
--
reference: >alma$Akilu l~atiy yatrukuhA xalfahu dA}iman
predicted: Aloma$aAkilu Al~atiy yatorukuhaA xalofahu daA}imAF
reference (untransliterated): أَلمَشاكِلُ لَّتِي يَترُكُها خَلفَهُ دائِمَن
predicted (untransliterated): الْمَشَاكِلُ الَّتِي يَتْرُكُهَا خَلْفَهُ دَائِماً
--
reference: >al~a*iy yataDam~anu mazAyA barmajiy~apan wabaSariy~apan Eadiydapan tahdifu limuwAkabapi lt~aTaw~uri lHASili fiy lfaDA'i l<ilktruwniy watashiyli stifAdapi lqur~A'i min xadamAti lmawqiE
predicted: >al~a*iy yataDam~anu mazaAyaA baromajiy~apF wabaSariy~apF EadiydapF tahodifu limuwaAkabapi Alt~aTaw~uri AloHaASili fiy AlofaDaA'i Alo<iloktoruwniy watasohiyli AsotifaAdapi Aloqur~aA'i mino xadaAmaAti AlomawoqiEo
reference (untransliterated): أَلَّذِي يَتَضَمَّنُ مَزايا بَرمَجِيَّةَن وَبَصَرِيَّةَن عَدِيدَةَن تَهدِفُ لِمُواكَبَةِ لتَّطَوُّرِ لحاصِلِ فِي لفَضاءِ لإِلكترُونِي وَتَسهِيلِ ستِفادَةِ لقُرّاءِ مِن خَدَماتِ لمَوقِع
predicted (untransliterated): أَلَّذِي يَتَضَمَّنُ مَزَايَا بَرْمَجِيَّةً وَبَصَرِيَّةً عَدِيدَةً تَهْدِفُ لِمُوَاكَبَةِ التَّطَوُّرِ الْحَاصِلِ فِي الْفَضَاءِ الْإِلْكتْرُونِي وَتَسْهِيلِ اسْتِفَادَةِ الْقُرَّاءِ مِنْ خَدَامَاتِ الْمَوْقِعْ
--
reference: >alfikrapu wa<in badat jadiydapan EalY mujtamaEin yaEiy$u wAqiEan sayi}aan lA tu$aj~iEu EalY lD~aHik
predicted: >alofikorapu wa<inobadato jadiydapF EalaY mujotamaEK yaEiy$u waAqi Eano say~i}AF laA tu$aj~iEu EalaY AlD~aHiko
reference (untransliterated): أَلفِكرَةُ وَإِن بَدَت جَدِيدَةَن عَلى مُجتَمَعِن يَعِيشُ واقِعَن سَيِئََن لا تُشَجِّعُ عَلى لضَّحِك
predicted (untransliterated): أَلْفِكْرَةُ وَإِنْبَدَتْ جَدِيدَةً عَلَى مُجْتَمَعٍ يَعِيشُ وَاقِ عَنْ سَيِّئاً لَا تُشَجِّعُ عَلَى الضَّحِكْ
--
reference: mu$iyraan <ilY xidmapi lqur>Ani lkariymi wataEziyzi EalAqapi lmuslimiyna bihi
predicted: mu$iyrAF <ilaY xidomapi Aloquro|ni Alokariymi wataEoziyzi EalaAqapi Alomusolimiyna bihi
reference (untransliterated): مُشِيرََن إِلى خِدمَةِ لقُرأانِ لكَرِيمِ وَتَعزِيزِ عَلاقَةِ لمُسلِمِينَ بِهِ
predicted (untransliterated): مُشِيراً إِلَى خِدْمَةِ الْقُرْآنِ الْكَرِيمِ وَتَعْزِيزِ عَلَاقَةِ الْمُسْلِمِينَ بِهِ
--
reference: <in~ahu EindamA yakuwnu >aHadu lz~awjayni yastaxdimu >aHada >a$kAli lt~iknuwluwjyA >akvara mina l>Axar
predicted: <in~ahu EinodamaA yakuwnu >aHadu Alz~awojayoni yasotaxodimu >aHada >a$okaAli Alt~iykonuwluwjoyaA >akovara mina Alo|xaro
reference (untransliterated): إِنَّهُ عِندَما يَكُونُ أَحَدُ لزَّوجَينِ يَستَخدِمُ أَحَدَ أَشكالِ لتِّكنُولُوجيا أَكثَرَ مِنَ لأاخَر
predicted (untransliterated): إِنَّهُ عِنْدَمَا يَكُونُ أَحَدُ الزَّوْجَيْنِ يَسْتَخْدِمُ أَحَدَ أَشْكَالِ التِّيكْنُولُوجْيَا أَكْثَرَ مِنَ الْآخَرْ
--
reference: wa*alika biHuDuwri ra}yisi lhay}api
predicted: wa*alika biHuDuwri ra}iysi Alohayo>api
reference (untransliterated): وَذَلِكَ بِحُضُورِ رَئيِسِ لهَيئَةِ
predicted (untransliterated): وَذَلِكَ بِحُضُورِ رَئِيسِ الْهَيْأَةِ
--
reference: wa*alika fiy buTuwlapa ka>si lEAlami lil>andiyapi baEda nusxapin tAriyxiy~apin >alEAma lmADiya <intahat bitatwiyji bAyrin miyuwniyxa l>almAniy~a EalY HisAbi lr~ajA'i lmagribiy~i fiy >aw~ali ta>ah~ulin lifariyqin Earabiy~in <ilY nihA}iy~i lmusAbaqapi
predicted: wa*alika fiy buTuwlapi ka>osiy AloEaAlami lilo>anodiyapi baEoda nusoxapK taAriyxiy~apK >aloEaAma AlomaADiya <inotahato bitatowiyji bAyorinmoyuwnixa Alo>alomaAniy~a EalaY HisaAbi Alr~ajaA'i Alomagoribiy~ifiy >aw~ali ta>ah~ulK lifariyqKEarabiy~K <ilaY nihaA}iy~i AlomusaAbaqapi
reference (untransliterated): وَذَلِكَ فِي بُطُولَةَ كَأسِ لعالَمِ لِلأَندِيَةِ بَعدَ نُسخَةِن تارِيخِيَّةِن أَلعامَ لماضِيَ إِنتَهَت بِتَتوِيجِ بايرِن مِيُونِيخَ لأَلمانِيَّ عَلى حِسابِ لرَّجاءِ لمَغرِبِيِّ فِي أَوَّلِ تَأَهُّلِن لِفَرِيقِن عَرَبِيِّن إِلى نِهائِيِّ لمُسابَقَةِ
predicted (untransliterated): وَذَلِكَ فِي بُطُولَةِ كَأْسِي الْعَالَمِ لِلْأَنْدِيَةِ بَعْدَ نُسْخَةٍ تَارِيخِيَّةٍ أَلْعَامَ الْمَاضِيَ إِنْتَهَتْ بِتَتْوِيجِ بايْرِنمْيُونِخَ الْأَلْمَانِيَّ عَلَى حِسَابِ الرَّجَاءِ الْمَغْرِبِيِّفِي أَوَّلِ تَأَهُّلٍ لِفَرِيقٍعَرَبِيٍّ إِلَى نِهَائِيِّ الْمُسَابَقَةِ
--
reference: bal yajibu lbaHvu fiymA tumav~iluhu min <iDAfapin Haqiyqiy~apin lil<iqtiSAdi lmaSriy~i fiy majAlAti lt~awZiyf biAEtibAri >an~a mu$kilapa lbiTAlapi mina lmu$kilAti lr~a}iysiy~api fiy miSr
predicted: balo yajibu AlobaHovu fiymaA tumav~iluhu mino <iDaAfapK Haqiyqiy~apK lilo<iqotiSaAdi AlomaSoriy~i fiy majaAlaAti Alt~awoZiyfo biAEotibaAri >an~a mu$okilapa AlobiTaAlapi mina Alomu$okilaAti Alr~a}iysiy~api fiy miSori
reference (untransliterated): بَل يَجِبُ لبَحثُ فِيما تُمَثِّلُهُ مِن إِضافَةِن حَقِيقِيَّةِن لِلإِقتِصادِ لمَصرِيِّ فِي مَجالاتِ لتَّوظِيف بِاعتِبارِ أَنَّ مُشكِلَةَ لبِطالَةِ مِنَ لمُشكِلاتِ لرَّئِيسِيَّةِ فِي مِصر
predicted (untransliterated): بَلْ يَجِبُ الْبَحْثُ فِيمَا تُمَثِّلُهُ مِنْ إِضَافَةٍ حَقِيقِيَّةٍ لِلْإِقْتِصَادِ الْمَصْرِيِّ فِي مَجَالَاتِ التَّوْظِيفْ بِاعْتِبَارِ أَنَّ مُشْكِلَةَ الْبِطَالَةِ مِنَ الْمُشْكِلَاتِ الرَّئِيسِيَّةِ فِي مِصْرِ
--
reference: taHtaDinu qAEapu *A fiynyuw wasaTa bayruwta maEriDa lfan~i l<istivnA}iy~i
predicted: taHotaDinu qaAEapu *aAfiynoyw wasaTa bayoruwta maEoriDa Alofan~i Alo<isotivonaA}iy~i
reference (untransliterated): تَحتَضِنُ قاعَةُ ذا فِينيُو وَسَطَ بَيرُوتَ مَعرِضَ لفَنِّ لإِستِثنائِيِّ
predicted (untransliterated): تَحْتَضِنُ قَاعَةُ ذَافِينْيو وَسَطَ بَيْرُوتَ مَعْرِضَ الْفَنِّ الْإِسْتِثْنَائِيِّ
--
reference: tarbiyapu lHamAmi hiwAyapun wamihnapun libaEDi ln~As
predicted: tarobiy~apu AloHamaAmi hiwaAyapN wamihonapN libaEoDi Aln~aAs
reference (untransliterated): تَربِيَةُ لحَمامِ هِوايَةُن وَمِهنَةُن لِبَعضِ لنّاس
predicted (untransliterated): تَرْبِيَّةُ الْحَمَامِ هِوَايَةٌ وَمِهْنَةٌ لِبَعْضِ النَّاس
--
reference: tasEY $abakapu lt~awASuli l<ijtimAEiy~i lS~AEidapu <iylw <ilY munAfasapi $abakapi fysbuwk Eabra lt~axal~iy Eani l<iElAnAti wAlHifAZi EalY lxuSuwSiy~api waHimAyapi lbayAnAt
predicted: tasoEap $abakapu Alt~awaASuli Alo<ijotimaAEiy~i AlS~aAEidapu <iylw <ilaY munaAfasapi $abakapi fysobuwko Eabora Alt~axal~iy Eani Alo<iEolaAnaAti waAloHifaAZi EalaY AloxuSuwSiy~api waHimaAyapi AlobayaAnaAt
reference (untransliterated): تَسعى شَبَكَةُ لتَّواصُلِ لإِجتِماعِيِّ لصّاعِدَةُ إِيلو إِلى مُنافَسَةِ شَبَكَةِ فيسبُوك عَبرَ لتَّخَلِّي عَنِ لإِعلاناتِ والحِفاظِ عَلى لخُصُوصِيَّةِ وَحِمايَةِ لبَيانات
predicted (untransliterated): تَسْعَة شَبَكَةُ التَّوَاصُلِ الْإِجْتِمَاعِيِّ الصَّاعِدَةُ إِيلو إِلَى مُنَافَسَةِ شَبَكَةِ فيسْبُوكْ عَبْرَ التَّخَلِّي عَنِ الْإِعْلَانَاتِ وَالْحِفَاظِ عَلَى الْخُصُوصِيَّةِ وَحِمَايَةِ الْبَيَانَات
--
reference: jamEu lmu&ana~vi lsa~Alimi mivla fAzat <iHdY lTa~AlibAti fiy musAbaqapi lqirA'Ati lqur>Aniya~pi
predicted: jamoEu Alomu&an~avi Als~aAlimi mivola faAzato <iHodaY AlT~aAlibaAti fiy musaAbaqapi AloqiraA'aAti Aloquro|niy~api
reference (untransliterated): جَمعُ لمُؤَنَّثِ لسَّالِمِ مِثلَ فازَت إِحدى لطَّالِباتِ فِي مُسابَقَةِ لقِراءاتِ لقُرأانِيَّةِ
predicted (untransliterated): جَمْعُ الْمُؤَنَّثِ السَّالِمِ مِثْلَ فَازَتْ إِحْدَى الطَّالِبَاتِ فِي مُسَابَقَةِ الْقِرَاءَاتِ الْقُرْآنِيَّةِ
--
reference: Hat~Y l>amsi lqariyb kAna lkaviyru mina l>uwkrAniy~iyn yu$ak~ikuwna fiy ntimA'i tatAri $ibhi jaziyrapi lqarm
predicted: Hat~aY Alo>amosi Aloqariybo kaAna Alokaviyru mina Alo>uwkoraAniy~iyno yu$ak~ikuwna fiy AnotimaA'i tataAri $ibohi jaziyrapi Aloqaromo
reference (untransliterated): حَتّى لأَمسِ لقَرِيب كانَ لكَثِيرُ مِنَ لأُوكرانِيِّين يُشَكِّكُونَ فِي نتِماءِ تَتارِ شِبهِ جَزِيرَةِ لقَرم
predicted (untransliterated): حَتَّى الْأَمْسِ الْقَرِيبْ كَانَ الْكَثِيرُ مِنَ الْأُوكْرَانِيِّينْ يُشَكِّكُونَ فِي انْتِمَاءِ تَتَارِ شِبْهِ جَزِيرَةِ الْقَرْمْ
--
reference: Ha*~arati l>umamu lmut~aHidapu min >an~a lEAlama sayuwAjihu xilAla lEuquwdi lmuqbilapi tafAquma >azmapin muzdawijapin fiy lmiyAh wAlkahrabA'
predicted: Ha*~arapi Alo>umamu Alomut~aHidapu mino >an~a AloEaAlama sayuwaAjihu xilaAla AloEuquwdi Alomuqobilapi tafaAq~uma >azomapK muzodawyijapK fiy AlomiyaA waAlokahorabaA'o
reference (untransliterated): حَذَّرَتِ لأُمَمُ لمُتَّحِدَةُ مِن أَنَّ لعالَمَ سَيُواجِهُ خِلالَ لعُقُودِ لمُقبِلَةِ تَفاقُمَ أَزمَةِن مُزدَوِجَةِن فِي لمِياه والكَهرَباء
predicted (untransliterated): حَذَّرَةِ الْأُمَمُ الْمُتَّحِدَةُ مِنْ أَنَّ الْعَالَمَ سَيُوَاجِهُ خِلَالَ الْعُقُودِ الْمُقْبِلَةِ تَفَاقُّمَ أَزْمَةٍ مُزْدَويِجَةٍ فِي الْمِيَا وَالْكَهْرَبَاءْ
--
reference: HuDuwru baEDi lz~uEamA'i fiy >almasiyrapi ljumhuwriy~api bibAriys
predicted: HuDuwru baEoDi Alz~aEamaA'ifiy >alomasiyrapi Alojumohuwriy~api bibaArys
reference (untransliterated): حُضُورُ بَعضِ لزُّعَماءِ فِي أَلمَسِيرَةِ لجُمهُورِيَّةِ بِبارِيس
predicted (untransliterated): حُضُورُ بَعْضِ الزَّعَمَاءِفِي أَلْمَسِيرَةِ الْجُمْهُورِيَّةِ بِبَاريس
--
reference: Hayvu kAna lEarabu >w~ala man Earafa qiymatahA lEilAjiy~apa fiy lqarni lEA$iri qabla lmiylAd fiy mamlakapi saba>
predicted: Hayovu kaAna AloEarabu >aw~ala mano Earafa qiymatahaA AloEilaAjiy~apa fiy Aloqaroni AloEaA$iri qabola AlomiylaAd fiy mamolakapi saba>o
reference (untransliterated): حَيثُ كانَ لعَرَبُ أوَّلَ مَن عَرَفَ قِيمَتَها لعِلاجِيَّةَ فِي لقَرنِ لعاشِرِ قَبلَ لمِيلاد فِي مَملَكَةِ سَبَأ
predicted (untransliterated): حَيْثُ كَانَ الْعَرَبُ أَوَّلَ مَنْ عَرَفَ قِيمَتَهَا الْعِلَاجِيَّةَ فِي الْقَرْنِ الْعَاشِرِ قَبْلَ الْمِيلَاد فِي مَمْلَكَةِ سَبَأْ
--
reference: daxalati lt~iknuwluwjyA fiy kul~i baytin wa>usrapin wa>aSbaHat tu$ak~ilu ljuz'a lkabiyra min HayAtinA
predicted: daxalati Alt~ikonuwluwjoyaA fiy kul~i bayotK wa>usorapK wa>aSobaHaAtlotu$ak~ilu Alojuzo'a Alokabiyra mino HayaAtina
reference (untransliterated): دَخَلَتِ لتِّكنُولُوجيا فِي كُلِّ بَيتِن وَأُسرَةِن وَأَصبَحَت تُشَكِّلُ لجُزءَ لكَبِيرَ مِن حَياتِنا
predicted (untransliterated): دَخَلَتِ التِّكْنُولُوجْيَا فِي كُلِّ بَيْتٍ وَأُسْرَةٍ وَأَصْبَحَاتلْتُشَكِّلُ الْجُزْءَ الْكَبِيرَ مِنْ حَيَاتِنَ
--
reference: duwna taHmiyli ljismi juhdan kabiyran fiy lbidAyapi qad yatasaba~bu fiy nufuwri l$a~xSi mina l<istimrAr
predicted: duwna taHomiyli Alojisomi juhodAF kabiyrAF fiy AlobidaAyapi qado yatasab~abu fiy nufuwri Al$~axoSi mina Al<isotimoraAro
reference (untransliterated): دُونَ تَحمِيلِ لجِسمِ جُهدَن كَبِيرَن فِي لبِدايَةِ قَد يَتَسَبَّبُ فِي نُفُورِ لشَّخصِ مِنَ لإِستِمرار
predicted (untransliterated): دُونَ تَحْمِيلِ الْجِسْمِ جُهْداً كَبِيراً فِي الْبِدَايَةِ قَدْ يَتَسَبَّبُ فِي نُفُورِ الشَّخْصِ مِنَ الإِسْتِمْرَارْ
--
reference: ragma ln~izAEi ld~Amiy >al~a*iy yaESifu biAlbilAd mun*u val>avi sanawAt
predicted: ragoma Aln~izaAEi Ald~aAmiy >al~a*iy yaEoSifu biAlobilAd muno*u valAvi sanawAt
reference (untransliterated): رَغمَ لنِّزاعِ لدّامِي أَلَّذِي يَعصِفُ بِالبِلاد مُنذُ ثَلأَثِ سَنَوات
predicted (untransliterated): رَغْمَ النِّزَاعِ الدَّامِي أَلَّذِي يَعْصِفُ بِالْبِلاد مُنْذُ ثَلاثِ سَنَوات
--
reference: rafaDa majlisu l>amni ld~awliy~u ma$ruwEa lqarAri lfilisTiyniy~i lr~Amiy <ilY <inhA'i l<iHtilAli l<isrA}iyliy~i fiy EAmayn
predicted: rafaDa majolisu Alo>amoni Ald~awoliy~u ma$oruwEa AloqaraAri AlofilisoTiyniy~i Alr~aAmi <ilaY <inohaA'i Alo<iHotilaAli Alo<isoraA}iyliy~i fiy EaAmayno
reference (untransliterated): رَفَضَ مَجلِسُ لأَمنِ لدَّولِيُّ مَشرُوعَ لقَرارِ لفِلِسطِينِيِّ لرّامِي إِلى إِنهاءِ لإِحتِلالِ لإِسرائِيلِيِّ فِي عامَين
predicted (untransliterated): رَفَضَ مَجْلِسُ الْأَمْنِ الدَّوْلِيُّ مَشْرُوعَ الْقَرَارِ الْفِلِسْطِينِيِّ الرَّامِ إِلَى إِنْهَاءِ الْإِحْتِلَالِ الْإِسْرَائِيلِيِّ فِي عَامَينْ
--
reference: ramzu ld~awlapi lt~urkiy~api lEilmAniy~api al~atiy ta>as~asat Eaqiba nhiyAri ld~awlapi lEuvmAniy~api
predicted: ramozu Ald~awolapi Alt~urokiy~api AloEilomaAniy~api Al~atiy ta>as~asato EaqibaAF hiyaAri Ald~awolapi AloEuvomaAniy~api
reference (untransliterated): رَمزُ لدَّولَةِ لتُّركِيَّةِ لعِلمانِيَّةِ َلَّتِي تَأَسَّسَت عَقِبَ نهِيارِ لدَّولَةِ لعُثمانِيَّةِ
predicted (untransliterated): رَمْزُ الدَّوْلَةِ التُّرْكِيَّةِ الْعِلْمَانِيَّةِ الَّتِي تَأَسَّسَتْ عَقِبَاً هِيَارِ الدَّوْلَةِ الْعُثْمَانِيَّةِ
--
reference: $Araka mawqiEu >aljaziyrapi litaEal~umi lEarabiy~api fiy lmu&tamari ld~awliy~i lv~Aniy lil~ugapi lEarabiy~api >al~a*iy naZ~amathu jAmiEapu mawlAnA mAlik <ibrAhiym >al<islAmiy~apu lHukuwmiyapu bimadiynapi mAlAnq biAlt~aEAwuni maEa jAmiEapi dAri ls~alAm bimadiynapi kuwntuwr fiy >anduwniysyA
predicted: $aAraka mawoqiEu >alojaziyrapi litaEal~umi AloEarabiy~api fiy Alomu&otamari Ald~awoliy~i Alv~aAniy lill~ugapi AloEarabiy~api >al~a*iy naZ~amatohu jaAmiEapu mawolaAnaA maAlik <iboraAhiymo >alo<isolaAmiy~apu AloHukuwmiy~apu bimadiynapi maA laAnoqo biAlt~aEaAwuni maEa jaAmiEapi daAri Als~alaAmo bimadiynapi kuwnotuwro fiy >anoduwniysoyaA
reference (untransliterated): شارَكَ مَوقِعُ أَلجَزِيرَةِ لِتَعَلُّمِ لعَرَبِيَّةِ فِي لمُؤتَمَرِ لدَّولِيِّ لثّانِي لِلُّغَةِ لعَرَبِيَّةِ أَلَّذِي نَظَّمَتهُ جامِعَةُ مَولانا مالِك إِبراهِيم أَلإِسلامِيَّةُ لحُكُومِيَةُ بِمَدِينَةِ مالانق بِالتَّعاوُنِ مَعَ جامِعَةِ دارِ لسَّلام بِمَدِينَةِ كُونتُور فِي أَندُونِيسيا
predicted (untransliterated): شَارَكَ مَوْقِعُ أَلْجَزِيرَةِ لِتَعَلُّمِ الْعَرَبِيَّةِ فِي الْمُؤْتَمَرِ الدَّوْلِيِّ الثَّانِي لِللُّغَةِ الْعَرَبِيَّةِ أَلَّذِي نَظَّمَتْهُ جَامِعَةُ مَوْلَانَا مَالِك إِبْرَاهِيمْ أَلْإِسْلَامِيَّةُ الْحُكُومِيَّةُ بِمَدِينَةِ مَا لَانْقْ بِالتَّعَاوُنِ مَعَ جَامِعَةِ دَارِ السَّلَامْ بِمَدِينَةِ كُونْتُورْ فِي أَنْدُونِيسْيَا
--
reference: $araEa l<it~iHAdu lt~uwnusiy~u lilfuruwsiy~api fiy tanfiy* xuT~apin tarnuw <ilY lmuDiy~i biha*ihi lr~iyADapi naHwa buluwgi lEAlamiy~api
predicted: $aAraEa Alo<it~iHaAdu Alt~uwnusiy~u lilofuruwsiy~api fiy tanofiy*o xuT~apK taronuwA <ilaY AlomuDiy~i biha*ihi Alr~iy~aADapi naHowa buluwgi AloEaAlamiy~api
reference (untransliterated): شَرَعَ لإِتِّحادُ لتُّونُسِيُّ لِلفُرُوسِيَّةِ فِي تَنفِيذ خُطَّةِن تَرنُو إِلى لمُضِيِّ بِهَذِهِ لرِّياضَةِ نَحوَ بُلُوغِ لعالَمِيَّةِ
predicted (untransliterated): شَارَعَ الْإِتِّحَادُ التُّونُسِيُّ لِلْفُرُوسِيَّةِ فِي تَنْفِيذْ خُطَّةٍ تَرْنُوا إِلَى الْمُضِيِّ بِهَذِهِ الرِّيَّاضَةِ نَحْوَ بُلُوغِ الْعَالَمِيَّةِ
--
reference: $ahida EAmu >alfayni wa>arbaEapa Ea$rapa Eid~apa <injAzAtin Tib~iy~apin
predicted: $ahida EaAmu >alfayni wa>arobaEapa Ea$orapa Eid~apa <inojaAzaAtK Tib~iy~apK
reference (untransliterated): شَهِدَ عامُ أَلفَينِ وَأَربَعَةَ عَشرَةَ عِدَّةَ إِنجازاتِن طِبِّيَّةِن
predicted (untransliterated): شَهِدَ عَامُ أَلفَينِ وَأَرْبَعَةَ عَشْرَةَ عِدَّةَ إِنْجَازَاتٍ طِبِّيَّةٍ
--
reference: EAda <irtifAEu >asEAri l>dwiyapi wa$uH~u lmunqi*i lilHayApi minhA liyuTil~a bira>sihi fiy ls~uwdAni min jadiydin
predicted: EaAda <irotifaAEu >asoEaAri Alo>adowiyapi wa$uH~u Alomunoqi*i liloHayaAti minohaA liyuTil~a bira>osihi fiy Als~uwdaAni mino jadiydK
reference (untransliterated): عادَ إِرتِفاعُ أَسعارِ لأدوِيَةِ وَشُحُّ لمُنقِذِ لِلحَياةِ مِنها لِيُطِلَّ بِرَأسِهِ فِي لسُّودانِ مِن جَدِيدِن
predicted (untransliterated): عَادَ إِرْتِفَاعُ أَسْعَارِ الْأَدْوِيَةِ وَشُحُّ الْمُنْقِذِ لِلْحَيَاتِ مِنْهَا لِيُطِلَّ بِرَأْسِهِ فِي السُّودَانِ مِنْ جَدِيدٍ
--
reference: EalY EtibArihA tusAEidu EalY tawsiyEi madAriki l>aTfAl watajEalu minhum >unAsan muvaq~afiyna mustaqbalan wamuwAkibiyna liEaSri tiknuwluwjyA lmaEluwmAt
predicted: EalaY AEotibaArihaA tusaAEidu EalaY tawosiyEi ma*ariki Alo>aTofaAl watajoEalu minohumo >unaAsAF muvaq~afiyna musotaqobalAF wamuwaAkibiyna liEaSori Alt~ikonuwluwjoyaA AlomaEoluwmaAt
reference (untransliterated): عَلى عتِبارِها تُساعِدُ عَلى تَوسِيعِ مَدارِكِ لأَطفال وَتَجعَلُ مِنهُم أُناسَن مُثَقَّفِينَ مُستَقبَلَن وَمُواكِبِينَ لِعَصرِ تِكنُولُوجيا لمَعلُومات
predicted (untransliterated): عَلَى اعْتِبَارِهَا تُسَاعِدُ عَلَى تَوْسِيعِ مَذَرِكِ الْأَطْفَال وَتَجْعَلُ مِنْهُمْ أُنَاساً مُثَقَّفِينَ مُسْتَقْبَلاً وَمُوَاكِبِينَ لِعَصْرِ التِّكْنُولُوجْيَا الْمَعْلُومَات
--
reference: wa*alika EalY xilAfi nuZarA}ihi ls~Abiqiyn
predicted: wa*alika EalaY xilaAfi nuZaraA}ihi Als~aAbiqiyno
reference (untransliterated): وَذَلِكَ عَلى خِلافِ نُظَرائِهِ لسّابِقِين
predicted (untransliterated): وَذَلِكَ عَلَى خِلَافِ نُظَرَائِهِ السَّابِقِينْ
--
reference: fataHat >akAdiymiy~apu lmuwsiyqY lEarabiy~api rasmiy~an yawma ls~abt >abwAbahA fiy bruwksil biHuDuwri majmuwEapin mina lwuzarA' warijAli lfan~i lbaljiykiy~iyna wAlEarab
predicted: fataHato >akaAdiymiy~apu AlomuwsiyqaY AloEarabiy~api rasomiy~AF yawoma Als~abot >abowaAbahaA fiy boruwkosil biHuDuwri majomuwEapK mina AlowuzaraYA warijaAli Alofan~i Alobalojiykiy~iyna waAloEarabo
reference (untransliterated): فَتَحَت أَكادِيمِيَّةُ لمُوسِيقى لعَرَبِيَّةِ رَسمِيَّن يَومَ لسَّبت أَبوابَها فِي برُوكسِل بِحُضُورِ مَجمُوعَةِن مِنَ لوُزَراء وَرِجالِ لفَنِّ لبَلجِيكِيِّينَ والعَرَب
predicted (untransliterated): فَتَحَتْ أَكَادِيمِيَّةُ الْمُوسِيقَى الْعَرَبِيَّةِ رَسْمِيّاً يَوْمَ السَّبْت أَبْوَابَهَا فِي بْرُوكْسِل بِحُضُورِ مَجْمُوعَةٍ مِنَ الْوُزَرَىا وَرِجَالِ الْفَنِّ الْبَلْجِيكِيِّينَ وَالْعَرَبْ
--
reference: fataHZY bitaEal~umin yamHuw >um~iy~atahA wayuDiy'u lahA Tariyqa lmaErifapi wAlt~iknuwluwjyA
predicted: fataHoZaY bitaEal~umK yamoHu >um~iy~atahaA wayuDiy'u lahaA Tariyqa AlomaEorifapi waAlt~iykonuwluwjoyaA
reference (untransliterated): فَتَحظى بِتَعَلُّمِن يَمحُو أُمِّيَّتَها وَيُضِيءُ لَها طَرِيقَ لمَعرِفَةِ والتِّكنُولُوجيا
predicted (untransliterated): فَتَحْظَى بِتَعَلُّمٍ يَمْحُ أُمِّيَّتَهَا وَيُضِيءُ لَهَا طَرِيقَ الْمَعْرِفَةِ وَالتِّيكْنُولُوجْيَا
--
reference: faha*A lmanzilu lmutawADiE >aSbaHa maHaj~aan liEadadin kabiyrin mina ln~isA'i lmariyDAti biAls~araTAn
predicted: faha*aA Alomanozilu AlomutawaADiEi >aSobaHa maHaj~AF liEadadK kabiyrK mina Aln~isaA'i AlomariyDaAti biAls~araTaAno
reference (untransliterated): فَهَذا لمَنزِلُ لمُتَواضِع أَصبَحَ مَحَجََّن لِعَدَدِن كَبِيرِن مِنَ لنِّساءِ لمَرِيضاتِ بِالسَّرَطان
predicted (untransliterated): فَهَذَا الْمَنْزِلُ الْمُتَوَاضِعِ أَصْبَحَ مَحَجّاً لِعَدَدٍ كَبِيرٍ مِنَ النِّسَاءِ الْمَرِيضَاتِ بِالسَّرَطَانْ
--
reference: Hadava *alika fiy Hay yaEquwba lmanSuwr l$~aEbiy~i
predicted: Hadava *alika fiy Hay yaEoquwba AlomanoSuwro >al$~aEobiy~i
reference (untransliterated): حَدَثَ ذَلِكَ فِي حَي يَعقُوبَ لمَنصُور لشَّعبِيِّ
predicted (untransliterated): حَدَثَ ذَلِكَ فِي حَي يَعْقُوبَ الْمَنْصُورْ أَلشَّعْبِيِّ
--
reference: fiy Hiyni kAna lmarkazu l>aw~alu fiy lwavbi lEAliy min naSiybi lkuruwAtiy~api >AnA siymiyt$
predicted: fiy Hiyni kaAna Alomarokazu Alo>aw~alu fiy Alowavobi AloEaAli mino naSiybi AlokuruwaAtiy~api |naA siymito$
reference (untransliterated): فِي حِينِ كانَ لمَركَزُ لأَوَّلُ فِي لوَثبِ لعالِي مِن نَصِيبِ لكُرُواتِيَّةِ أانا سِيمِيتش
predicted (untransliterated): فِي حِينِ كَانَ الْمَرْكَزُ الْأَوَّلُ فِي الْوَثْبِ الْعَالِ مِنْ نَصِيبِ الْكُرُوَاتِيَّةِ آنَا سِيمِتْش
--
reference: qAla bAHivuwna <in~a riyAHan >aqwY mina lmuEtAd xaf~afat min HarArapi saTHi lmuHiyTi lhAdiy hiya sababu lt~abATu}i lmu&aq~at fiy rtifAEi darajapi HarArapi l>arD mun*u bidAyapi lqarni lHAdiy wAlEi$riyn
predicted: qaAla baAHivuwna <in~a riyaAHAF >aqowaY mina AlomuEotaAd xaf~afato mino HaraArapi saToHi AlomuHiyTi AlohaAdiy hiya sababu Alt~abaATu&i Alomu&aq~aTi fiy ArotifaAEi darajapi HaraArapi Alo>aroD muno*u bidaAyapi Aloqaroni AloHaAdiy waAloEi$oriyno
reference (untransliterated): قالَ باحِثُونَ إِنَّ رِياحَن أَقوى مِنَ لمُعتاد خَفَّفَت مِن حَرارَةِ سَطحِ لمُحِيطِ لهادِي هِيَ سَبَبُ لتَّباطُئِ لمُؤَقَّت فِي رتِفاعِ دَرَجَةِ حَرارَةِ لأَرض مُنذُ بِدايَةِ لقَرنِ لحادِي والعِشرِين
predicted (untransliterated): قَالَ بَاحِثُونَ إِنَّ رِيَاحاً أَقْوَى مِنَ الْمُعْتَاد خَفَّفَتْ مِنْ حَرَارَةِ سَطْحِ الْمُحِيطِ الْهَادِي هِيَ سَبَبُ التَّبَاطُؤِ الْمُؤَقَّطِ فِي ارْتِفَاعِ دَرَجَةِ حَرَارَةِ الْأَرْض مُنْذُ بِدَايَةِ الْقَرْنِ الْحَادِي وَالْعِشْرِينْ
--
reference: qabla >an yuslima liyudAfiEa Ean diynih muHib~aan wamuHtariman li>aSlihi wamADiyh
predicted: qabola >ano yusolima liyudaAfiEa Eano diyni muHib~AF wamuHotarimAF li>aSolihi wamaADiyh
reference (untransliterated): قَبلَ أَن يُسلِمَ لِيُدافِعَ عَن دِينِه مُحِبََّن وَمُحتَرِمَن لِأَصلِهِ وَماضِيه
predicted (untransliterated): قَبْلَ أَنْ يُسْلِمَ لِيُدَافِعَ عَنْ دِينِ مُحِبّاً وَمُحْتَرِماً لِأَصْلِهِ وَمَاضِيه
--
reference: kamA tam~a taHsiynu wAjihAti lt~anaq~ul wAxtiyAri wasA}ili ln~aqli lmunAsibapi bi$aklin kabiyr
predicted: kamaA tam~a taHosiynu waAjihaAti Alt~anaq~ulo waAxotiyaAri wasaA}ili Aln~aqoli AlomunaAsibapi bi$akolK kabiyro
reference (untransliterated): كَما تَمَّ تَحسِينُ واجِهاتِ لتَّنَقُّل واختِيارِ وَسائِلِ لنَّقلِ لمُناسِبَةِ بِشَكلِن كَبِير
predicted (untransliterated): كَمَا تَمَّ تَحْسِينُ وَاجِهَاتِ التَّنَقُّلْ وَاخْتِيَارِ وَسَائِلِ النَّقْلِ الْمُنَاسِبَةِ بِشَكْلٍ كَبِيرْ
--
reference: kamA tuwuf~iyati lr~iwA}iy~apu lbArizapu wAl>ustA*apu ljAmiEiy~apu lmiSriy~apu raDwY EA$uwr Ean vamAniy wasit~iyna EAman
predicted: kamaA tuwuf~iyapi Alr~iwaA}iy~apu AlobaArizapu waAlo>usotaA*apu Alj~aAmiEiy~apu AlomiSoriy~apu raDowaY EaA$uwro Eano vamaAniy wasit~iyna EaAmAF
reference (untransliterated): كَما تُوُفِّيَتِ لرِّوائِيَّةُ لبارِزَةُ والأُستاذَةُ لجامِعِيَّةُ لمِصرِيَّةُ رَضوى عاشُور عَن ثَمانِي وَسِتِّينَ عامَن
predicted (untransliterated): كَمَا تُوُفِّيَةِ الرِّوَائِيَّةُ الْبَارِزَةُ وَالْأُسْتَاذَةُ الجَّامِعِيَّةُ الْمِصْرِيَّةُ رَضْوَى عَاشُورْ عَنْ ثَمَانِي وَسِتِّينَ عَاماً
--
reference: kamA $Arakat TAlibAtun min madArisa filasTiyniy~apin >alfan~Anapa lt~urkiy~apa fiy Eamali lawHAt
predicted: kamaA $aArakato TaAlibaAtN mino madaArisa fiylasoTiydiy~apK >alofan~aAnapa Alt~urokiy~apa fiy Eamali lawoHaAt
reference (untransliterated): كَما شارَكَت طالِباتُن مِن مَدارِسَ فِلَسطِينِيَّةِن أَلفَنّانَةَ لتُّركِيَّةَ فِي عَمَلِ لَوحات
predicted (untransliterated): كَمَا شَارَكَتْ طَالِبَاتٌ مِنْ مَدَارِسَ فِيلَسْطِيدِيَّةٍ أَلْفَنَّانَةَ التُّرْكِيَّةَ فِي عَمَلِ لَوْحَات
--
reference: lAmasa mu*an~abun yuTlaqu Ealayhi <ismu sAydiyng sbriyng kawkaba lmir~iyxi Einda muruwrihi bimuHA*Atih
predicted: laAmasa mu*an~abN yuTolaqu Ealayohi <isomu saAyodynosoboriynogo kawokaba Alomar~iyxi Einoda muruwrihi bimuHaA*aAti
reference (untransliterated): لامَسَ مُذَنَّبُن يُطلَقُ عَلَيهِ إِسمُ سايدِينغ سبرِينغ كَوكَبَ لمِرِّيخِ عِندَ مُرُورِهِ بِمُحاذاتِه
predicted (untransliterated): لَامَسَ مُذَنَّبٌ يُطْلَقُ عَلَيْهِ إِسْمُ سَايْدينْسْبْرِينْغْ كَوْكَبَ الْمَرِّيخِ عِنْدَ مُرُورِهِ بِمُحَاذَاتِ
--
reference: laqad sAhamati lt~iknuluwjyA fiy taqliyli ln~izAEAti l>usariy~api wa>aETat likul~i fardin nawEan mina l<istiqlAliy~api
predicted: laqado saAhamapi Alt~iykonuwluwjoyaA fiy taqoliyli Aln~izaAEaAti Alo>usariy~api wa>aEoTaTo likul~i farodK nawoEAF mina Alo<isotiqolaAliy~api
reference (untransliterated): لَقَد ساهَمَتِ لتِّكنُلُوجيا فِي تَقلِيلِ لنِّزاعاتِ لأُسَرِيَّةِ وَأَعطَت لِكُلِّ فَردِن نَوعَن مِنَ لإِستِقلالِيَّةِ
predicted (untransliterated): لَقَدْ سَاهَمَةِ التِّيكْنُولُوجْيَا فِي تَقْلِيلِ النِّزَاعَاتِ الْأُسَرِيَّةِ وَأَعْطَطْ لِكُلِّ فَرْدٍ نَوْعاً مِنَ الْإِسْتِقْلَالِيَّةِ
--
reference: lakin~a maSdaran fiy lwafdi qAl <in~a ls~iEra sayanxafiDu baEda nxifADi >asEAri ln~afTi fiy lEAlam
predicted: lakin~a maSodarAF fiy Alowafodi qaAl <in~a Als~iEoara sayanoxafiDu baEoda AnoxifaADi >asoEaAri Aln~afoTi fiy AloEaAlamo
reference (untransliterated): لَكِنَّ مَصدَرَن فِي لوَفدِ قال إِنَّ لسِّعرَ سَيَنخَفِضُ بَعدَ نخِفاضِ أَسعارِ لنَّفطِ فِي لعالَم
predicted (untransliterated): لَكِنَّ مَصْدَراً فِي الْوَفْدِ قَال إِنَّ السِّعَْرَ سَيَنْخَفِضُ بَعْدَ انْخِفَاضِ أَسْعَارِ النَّفْطِ فِي الْعَالَمْ
--
reference: lam yamnaE DaEfu mawAridi lt~amwiyl wArtifAEu kulfapi lmu$ArakAti ld~awliy~api riyADapa lfuruwsiy~api fiy tuwnusa min >an tastaqTiba lmi}At min Eu$~AqihA fiy baladin yakAdu l<ihtimAmu fiyhi yaqtaSir EalY riyADAtin $aEbiy~apin muEay~anapin
predicted: lamo yamonaEoDaEaofu mawaAridi Alt~amowiylo waArotifaAEu kulofapi Alomu$aArakaAti Ald~awoliy~api riyaADapa Alofuruwsiy~api fiy tuwnusa mino >ano tasotaqoTiba Almi}At mino Eu$~aAqihaA fiy baladK yakaAdu Al<ihotimaAmu fiy hiyaqotaSir EalaY riy~aADaAtK $aEobiy~apK muEay~inapK
reference (untransliterated): لَم يَمنَع ضَعفُ مَوارِدِ لتَّموِيل وارتِفاعُ كُلفَةِ لمُشارَكاتِ لدَّولِيَّةِ رِياضَةَ لفُرُوسِيَّةِ فِي تُونُسَ مِن أَن تَستَقطِبَ لمِئات مِن عُشّاقِها فِي بَلَدِن يَكادُ لإِهتِمامُ فِيهِ يَقتَصِر عَلى رِياضاتِن شَعبِيَّةِن مُعَيَّنَةِن
predicted (untransliterated): لَمْ يَمْنَعْضَعَْفُ مَوَارِدِ التَّمْوِيلْ وَارْتِفَاعُ كُلْفَةِ الْمُشَارَكَاتِ الدَّوْلِيَّةِ رِيَاضَةَ الْفُرُوسِيَّةِ فِي تُونُسَ مِنْ أَنْ تَسْتَقْطِبَ المِئات مِنْ عُشَّاقِهَا فِي بَلَدٍ يَكَادُ الإِهْتِمَامُ فِي هِيَقْتَصِر عَلَى رِيَّاضَاتٍ شَعْبِيَّةٍ مُعَيِّنَةٍ
--
reference: liyaDaEA bi*alika Hadaan lilEadiydi mina lt~aqAriyr >al~atiy >ak~adat <imkAniy~apa raHiyli ll~AEibi lmu$Agibi qariybaan
predicted: liyaDaEaAbi *alika Had~AF liloEadiydi mina Alt~aqaAriyro >al~atiy >ak~adat <imokaAniy~apa raHiyli All~aAEibi Alomu$aAgibi qariybAF
reference (untransliterated): لِيَضَعا بِذَلِكَ حَدََن لِلعَدِيدِ مِنَ لتَّقارِير أَلَّتِي أَكَّدَت إِمكانِيَّةَ رَحِيلِ للّاعِبِ لمُشاغِبِ قَرِيبََن
predicted (untransliterated): لِيَضَعَابِ ذَلِكَ حَدّاً لِلْعَدِيدِ مِنَ التَّقَارِيرْ أَلَّتِي أَكَّدَت إِمْكَانِيَّةَ رَحِيلِ اللَّاعِبِ الْمُشَاغِبِ قَرِيباً
--
reference: muDiyfan nuHAwilu xalqa furaSi Eamalin bi>aydiynA
predicted: muDiyfAF nuHaAwilu xaloqa furaSi EamalK bi>ayodiyna
reference (untransliterated): مُضِيفَن نُحاوِلُ خَلقَ فُرَصِ عَمَلِن بِأَيدِينا
predicted (untransliterated): مُضِيفاً نُحَاوِلُ خَلْقَ فُرَصِ عَمَلٍ بِأَيْدِينَ
--
reference: wa*alika muqAranapan maEa lmaHASiyli lz~irAEiy~api l>uxrY
predicted: wa*alika muqaAranapF maEa AlomaHaASiyli Alz~iraAEiy~api Alo>uxoraY
reference (untransliterated): وَذَلِكَ مُقارَنَةَن مَعَ لمَحاصِيلِ لزِّراعِيَّةِ لأُخرى
predicted (untransliterated): وَذَلِكَ مُقَارَنَةً مَعَ الْمَحَاصِيلِ الزِّرَاعِيَّةِ الْأُخْرَى
--
reference: mulqiyan lD~aw'a EalY qaDiy~api lfitnapi lT~A}ifiy~api fiy lmujtamaEi lmiSriy~i bi>usluwbin basiyTin min xilAli EalAqAti l>aTfAl fiy lmadrasapi bizamiylihimu lmasiyHiy~i
predicted: muloqiyani AlD~awo'a EalaY qadiy~api Alofitonapi AlT~aA}ifiy~api fiy AlomujotamaEi AlomiSoriy~i bi>usoluwbK basiyTK mino xilaAli EalaAqaAti Alo>aTofaAlo fiy Alomadorasapi bizamiylihimu AlomasiyHiy~i
reference (untransliterated): مُلقِيَن لضَّوءَ عَلى قَضِيَّةِ لفِتنَةِ لطّائِفِيَّةِ فِي لمُجتَمَعِ لمِصرِيِّ بِأُسلُوبِن بَسِيطِن مِن خِلالِ عَلاقاتِ لأَطفال فِي لمَدرَسَةِ بِزَمِيلِهِمُ لمَسِيحِيِّ
predicted (untransliterated): مُلْقِيَنِ الضَّوْءَ عَلَى قَدِيَّةِ الْفِتْنَةِ الطَّائِفِيَّةِ فِي الْمُجْتَمَعِ الْمِصْرِيِّ بِأُسْلُوبٍ بَسِيطٍ مِنْ خِلَالِ عَلَاقَاتِ الْأَطْفَالْ فِي الْمَدْرَسَةِ بِزَمِيلِهِمُ الْمَسِيحِيِّ
--
reference: mim~A yadEamu natA}ija dirAsAtin sAbiqapin tuHa*~iru min maxATiri l<ifrATi fiy stiEmAli ljaw~Al
predicted: mim~aA yadoEamu nataA}ija diraAsaAtK saAbiqapK tuHa*~iru mino maxaATiri Alo<iforaATi fiy AsotiEomaAli Alj~aw~aAl
reference (untransliterated): مِمّا يَدعَمُ نَتائِجَ دِراساتِن سابِقَةِن تُحَذِّرُ مِن مَخاطِرِ لإِفراطِ فِي ستِعمالِ لجَوّال
predicted (untransliterated): مِمَّا يَدْعَمُ نَتَائِجَ دِرَاسَاتٍ سَابِقَةٍ تُحَذِّرُ مِنْ مَخَاطِرِ الْإِفْرَاطِ فِي اسْتِعْمَالِ الجَّوَّال
--
reference: min baynihA >al<istiqrAru wanawEiy~apu lr~iEAyapi lS~iH~iy~api wAlv~aqAfapi wAlbiy}api wAlt~aEliymi wAlbinyapi lt~aHtiy~api
predicted: mino bayonihaA >alo<isotiqoraAru wanawoEiy~apu Alr~iEaAyapi AlS~iH~iy~api waAlv~aqaAfapi waAlobiy}api waAlt~aEoliymi waAlobinoyapi Alt~aHotiy~api
reference (untransliterated): مِن بَينِها أَلإِستِقرارُ وَنَوعِيَّةُ لرِّعايَةِ لصِّحِّيَّةِ والثَّقافَةِ والبِيئَةِ والتَّعلِيمِ والبِنيَةِ لتَّحتِيَّةِ
predicted (untransliterated): مِنْ بَيْنِهَا أَلْإِسْتِقْرَارُ وَنَوْعِيَّةُ الرِّعَايَةِ الصِّحِّيَّةِ وَالثَّقَافَةِ وَالْبِيئَةِ وَالتَّعْلِيمِ وَالْبِنْيَةِ التَّحْتِيَّةِ
--
reference: minhA >aqmi$apun wa>adawAtun maEdaniy~apun waxa$abiy~apun waqinAnun blAstiykiy~apun wazujAjiy~apun wa>awrAqu SuHuf
predicted: minohaA >aqomi$apN wa>adawaAtN maEodaniy~apN waxa$abiy~apN waqinAnN bolaAsotiykiy~apN wazujaAjiy~atN wa>aworaAqu SuHafo
reference (untransliterated): مِنها أَقمِشَةُن وَأَدَواتُن مَعدَنِيَّةُن وَخَشَبِيَّةُن وَقِنانُن بلاستِيكِيَّةُن وَزُجاجِيَّةُن وَأَوراقُ صُحُف
predicted (untransliterated): مِنْهَا أَقْمِشَةٌ وَأَدَوَاتٌ مَعْدَنِيَّةٌ وَخَشَبِيَّةٌ وَقِنانٌ بْلَاسْتِيكِيَّةٌ وَزُجَاجِيَّتٌ وَأَوْرَاقُ صُحَفْ
--
reference: hal lilS~iyAmi ta>viyrun EalY Eamali lmuslimiyna fiy l$~arikAti bi>uwruwb~A
predicted: hal~i AlS~iyaAmi ta>oviyrN EalaY Eamali Alomusolimiyna fiy Al$~arikaAti bi>uwruwb~aA
reference (untransliterated): هَل لِلصِّيامِ تَأثِيرُن عَلى عَمَلِ لمُسلِمِينَ فِي لشَّرِكاتِ بِأُورُوبّا
predicted (untransliterated): هَلِّ الصِّيَامِ تَأْثِيرٌ عَلَى عَمَلِ الْمُسْلِمِينَ فِي الشَّرِكَاتِ بِأُورُوبَّا
--
reference: hunAka fikrapun TuriHat bAdi}a l>amr biEaqdi qim~apin >uwruwbiy~apin fiy sarayiyfuw biha*ihi lmunAsabapi
predicted: hunaAka fikorapN TuriHato baAdi >alo>amor biEaqoDi qim~apK >uwruwbiy~apK fiy sarayiyfuw biha*ihi AlomunaAsabapi
reference (untransliterated): هُناكَ فِكرَةُن طُرِحَت بادِئَ لأَمر بِعَقدِ قِمَّةِن أُورُوبِيَّةِن فِي سَرَيِيفُو بِهَذِهِ لمُناسَبَةِ
predicted (untransliterated): هُنَاكَ فِكْرَةٌ طُرِحَتْ بَادِ أَلْأَمْر بِعَقْضِ قِمَّةٍ أُورُوبِيَّةٍ فِي سَرَيِيفُو بِهَذِهِ الْمُنَاسَبَةِ
--
reference: wa yumkinu >an tuHSada lv~imAr EalY madY fatrapin zamaniy~apin Tawiylapin
predicted: wayumokinu >ano tuHoSada Alv~imaAr EalaY madaY fatorapK zamaniy~apK TawiylapK
reference (untransliterated): وَ يُمكِنُ أَن تُحصَدَ لثِّمار عَلى مَدى فَترَةِن زَمَنِيَّةِن طَوِيلَةِن
predicted (untransliterated): وَيُمْكِنُ أَنْ تُحْصَدَ الثِّمَار عَلَى مَدَى فَتْرَةٍ زَمَنِيَّةٍ طَوِيلَةٍ
--
reference: wa>Hraza lmarkaza lv~Aliv >alr~iwA}iy~u ljazA}iriy~u >aHmadu TiybAwiy Ean riwAyatihi mawtun nAEim
predicted: wa>aHoraza Alomarokaza Alv~aAlivo >alr~iwaA}iy~u AlojazaA}iriy~u >aHomadu TiybaAwi Eano riwaAyatihi mawotunnaAEimo
reference (untransliterated): وَأحرَزَ لمَركَزَ لثّالِث أَلرِّوائِيُّ لجَزائِرِيُّ أَحمَدُ طِيباوِي عَن رِوايَتِهِ مَوتُن ناعِم
predicted (untransliterated): وَأَحْرَزَ الْمَرْكَزَ الثَّالِثْ أَلرِّوَائِيُّ الْجَزَائِرِيُّ أَحْمَدُ طِيبَاوِ عَنْ رِوَايَتِهِ مَوْتُننَاعِمْ
--
reference: wAxtatama lbarAziyliy~uwna mubArAyAtihimi l<iEdAdiy~apa biAlfawzi EalY SirbyA bihadafin waHiydin saj~alahu lmuhAjimu farydun fiy l$~awTi lv~Aniy mina lmubArApi >al~atiy >uqiymat fiy sAwbAwluw
predicted: waAxotatama AlobaraAziyliy~uwna mubaArayaAtihimi Alo<iEodaAdiy~api biAlofawozi EalaY Sirobiya bihadafK waHiydK saj~alahu AlomuhaAjimu fariydN fiy Al$~awoTi Alv~aAniy mina AlomubaAraApi >al~atiy >uqiymato fiy saAwobaAluw
reference (untransliterated): واختَتَمَ لبَرازِيلِيُّونَ مُباراياتِهِمِ لإِعدادِيَّةَ بِالفَوزِ عَلى صِربيا بِهَدَفِن وَحِيدِن سَجَّلَهُ لمُهاجِمُ فَريدُن فِي لشَّوطِ لثّانِي مِنَ لمُباراةِ أَلَّتِي أُقِيمَت فِي ساوباولُو
predicted (untransliterated): وَاخْتَتَمَ الْبَرَازِيلِيُّونَ مُبَارَيَاتِهِمِ الْإِعْدَادِيَّةِ بِالْفَوْزِ عَلَى صِرْبِيَ بِهَدَفٍ وَحِيدٍ سَجَّلَهُ الْمُهَاجِمُ فَرِيدٌ فِي الشَّوْطِ الثَّانِي مِنَ الْمُبَارَاةِ أَلَّتِي أُقِيمَتْ فِي سَاوْبَالُو
--
reference: wA$tahara lr~AHilu bimaqAlAtihi wakutubihi lr~aSiynapi >al~atiy taDam~anat qirA'Atin mustaqbaliy~apan lil>AfAqi ls~iyAsiy~api wAl<ijtimAEiy~api fiy lEAlami lEarabiy~i l<islAmiy~i
predicted: waA$otahara Alr~aAHilu bimaqaAlaAtihi wakutubihi Alr~aSiynapi >al~atiy taDam~anato qiraA'aAtK musotaqobaliy~apF lilo|faAqi Als~iyaAsiy~api waAlo<ijotimaAEiy~api fiy AloEaAlami AloEarabiy~i Alo<isolaAmiy~i
reference (untransliterated): واشتَهَرَ لرّاحِلُ بِمَقالاتِهِ وَكُتُبِهِ لرَّصِينَةِ أَلَّتِي تَضَمَّنَت قِراءاتِن مُستَقبَلِيَّةَن لِلأافاقِ لسِّياسِيَّةِ والإِجتِماعِيَّةِ فِي لعالَمِ لعَرَبِيِّ لإِسلامِيِّ
predicted (untransliterated): وَاشْتَهَرَ الرَّاحِلُ بِمَقَالَاتِهِ وَكُتُبِهِ الرَّصِينَةِ أَلَّتِي تَضَمَّنَتْ قِرَاءَاتٍ مُسْتَقْبَلِيَّةً لِلْآفَاقِ السِّيَاسِيَّةِ وَالْإِجْتِمَاعِيَّةِ فِي الْعَالَمِ الْعَرَبِيِّ الْإِسْلَامِيِّ
--
reference: wa>aSbaHa ha*A lS~arHu matHafan rasmiy~an
predicted: wa>aSobaHa ha*aA AlS~aroHu matoHafAF rasomiy~AF
reference (untransliterated): وَأَصبَحَ هَذا لصَّرحُ مَتحَفَن رَسمِيَّن
predicted (untransliterated): وَأَصْبَحَ هَذَا الصَّرْحُ مَتْحَفاً رَسْمِيّاً
--
reference: w>aDAfa lbayAnu an~a fariyqaan min l>aTib~A'i wAlmumar~iDAt w<ixtiSASiy~iyna >Axariyna fiy majAli lS~iH~api yaEtanuwna bimAndiyl~A EalY madAri ls~AEapi
predicted: wa>aDaAfa AlobayaAnu >an~a fariyqAF mina Alo>aTib~aA'i waAlomumar~iDaAt waAxotiSaASiy~iyna |xariyna fiy majaAli AlS~iH~api yaEotanuwna bimaAnodil~aA EalaY madaAri Als~aAEapi
reference (untransliterated): وأَضافَ لبَيانُ َنَّ فَرِيقََن مِن لأَطِبّاءِ والمُمَرِّضات وإِختِصاصِيِّينَ أاخَرِينَ فِي مَجالِ لصِّحَّةِ يَعتَنُونَ بِماندِيلّا عَلى مَدارِ لسّاعَةِ
predicted (untransliterated): وَأَضَافَ الْبَيَانُ أَنَّ فَرِيقاً مِنَ الْأَطِبَّاءِ وَالْمُمَرِّضَات وَاخْتِصَاصِيِّينَ آخَرِينَ فِي مَجَالِ الصِّحَّةِ يَعْتَنُونَ بِمَانْدِلَّا عَلَى مَدَارِ السَّاعَةِ
--
reference: wAEtabaruwhA falsafapan ruwHiy~apan mutakAmilapan litaHriyri ljismi wAlfikr
predicted: waAEotabaruwhaA falosafapF ruwHiy~apF mutakaAmilapF litaHoriyri Alojisomi waAlofikor
reference (untransliterated): واعتَبَرُوها فَلسَفَةَن رُوحِيَّةَن مُتَكامِلَةَن لِتَحرِيرِ لجِسمِ والفِكر
predicted (untransliterated): وَاعْتَبَرُوهَا فَلْسَفَةً رُوحِيَّةً مُتَكَامِلَةً لِتَحْرِيرِ الْجِسْمِ وَالْفِكْر
--
reference: >alt~awaH~udu huwa majmuwEapu DTirAbAtin EaSabiy~apin fiy lt~aTaw~ur ta$malu >aErADuhA wujuwda ma$Akila fiy ls~uluwki lAjtimAEiy~i lil$~axSi lmuSAb
predicted: >alt~awaH~udu huwa majomuwEapu AlT~iraAbaAtK EaSabiy~apK fiy Alt~aTaw~uro ta$omalu >aEoraADuhaA bujuwda ma$aAkila fiy Als~uluwki Alo<ijotimaAEiy~i lil$~axoSi AlomuSaAbo
reference (untransliterated): أَلتَّوَحُّدُ هُوَ مَجمُوعَةُ ضطِراباتِن عَصَبِيَّةِن فِي لتَّطَوُّر تَشمَلُ أَعراضُها وُجُودَ مَشاكِلَ فِي لسُّلُوكِ لاجتِماعِيِّ لِلشَّخصِ لمُصاب
predicted (untransliterated): أَلتَّوَحُّدُ هُوَ مَجْمُوعَةُ الطِّرَابَاتٍ عَصَبِيَّةٍ فِي التَّطَوُّرْ تَشْمَلُ أَعْرَاضُهَا بُجُودَ مَشَاكِلَ فِي السُّلُوكِ الْإِجْتِمَاعِيِّ لِلشَّخْصِ الْمُصَابْ
--
reference: wAlEamalu lr~a}iysiy~u lahu huwa riwAyatahu lmalHamiy~apu mA}apu EAmin mina lEuzlapi >al~atiy nAla EanhA jA}izapa nuwbila fiy l>adab EAma >alfin watisEimi}apin wa<ivnAni wavamAnuwn
predicted: waAloEamalu Alr~a}iysiy~u lahu huwa riwaAyatahu AlomaloHamiy~apu ma>apu EaAmK mina AloEuzolapi >al~atiy naAla EanohaA jaA}izapa nuwbila fiy Alo>adabo EaAma >alofK watisoEi ma}apK wa<ivnaAni wavamAnuwna
reference (untransliterated): والعَمَلُ لرَّئِيسِيُّ لَهُ هُوَ رِوايَتَهُ لمَلحَمِيَّةُ مائَةُ عامِن مِنَ لعُزلَةِ أَلَّتِي نالَ عَنها جائِزَةَ نُوبِلَ فِي لأَدَب عامَ أَلفِن وَتِسعِمِئَةِن وَإِثنانِ وَثَمانُون
predicted (untransliterated): وَالْعَمَلُ الرَّئِيسِيُّ لَهُ هُوَ رِوَايَتَهُ الْمَلْحَمِيَّةُ مَأَةُ عَامٍ مِنَ الْعُزْلَةِ أَلَّتِي نَالَ عَنْهَا جَائِزَةَ نُوبِلَ فِي الْأَدَبْ عَامَ أَلْفٍ وَتِسْعِ مَئَةٍ وَإِثنَانِ وَثَمانُونَ
--
reference: wAlmiykuwng was>aluwyn fiy januwbi $arqi >AsyA
predicted: waAlomiykuwnogo wasaAluwiyno fiy januwbi $aroqi |soyaA
reference (untransliterated): والمِيكُونغ وَسأَلُوين فِي جَنُوبِ شَرقِ أاسيا
predicted (untransliterated): وَالْمِيكُونْغْ وَسَالُوِينْ فِي جَنُوبِ شَرْقِ آسْيَا
--
reference: wa>n~a >aham~a muEaw~iqAti najAHihA takmunu fiy Eadami tafar~ugi >aSHAbihA li<idAratihA
predicted: wa>an~a >aham~a muEaw~iqaAti najaAHihaA takomunu fiy Eadami tafar~ugi >aSoHaAbihaA li<idaAratihaA
reference (untransliterated): وَأنَّ أَهَمَّ مُعَوِّقاتِ نَجاحِها تَكمُنُ فِي عَدَمِ تَفَرُّغِ أَصحابِها لِإِدارَتِها
predicted (untransliterated): وَأَنَّ أَهَمَّ مُعَوِّقَاتِ نَجَاحِهَا تَكْمُنُ فِي عَدَمِ تَفَرُّغِ أَصْحَابِهَا لِإِدَارَتِهَا
--
reference: wa>awDaHa lbAHivuwna >an~a suw'a lt~ag*iyapi huwa ls~ababu lr~a}iysiy~u litawaq~ufi ln~umuw Einda l>aTfAl
predicted: wa>awoDaHa AlobaAHivuwna >an~a suw'a Alt~ago*iyapi huwa Als~ababu Alr~a}iysiy~u litawaq~ufi Aln~umuw Einoda Alo>aTofaAlo
reference (untransliterated): وَأَوضَحَ لباحِثُونَ أَنَّ سُوءَ لتَّغذِيَةِ هُوَ لسَّبَبُ لرَّئِيسِيُّ لِتَوَقُّفِ لنُّمُو عِندَ لأَطفال
predicted (untransliterated): وَأَوْضَحَ الْبَاحِثُونَ أَنَّ سُوءَ التَّغْذِيَةِ هُوَ السَّبَبُ الرَّئِيسِيُّ لِتَوَقُّفِ النُّمُو عِنْدَ الْأَطْفَالْ
--
reference: wa>awDaHati lmajal~apu >an~a ls~ababa fiy *alika yarjiEu <ilY taDay~uqi l$~uEabi lhawA}iy~api wata$an~ujihA bifiEli lhawA'i lbArid
predicted: wa>awoDaHati Alomajal~apu >an~a Als~ababa fiy *alika yarojiEu <ilaY taDay~uqi Al$~uEabi AlohawaA}iy~api wata$an~ujihaA bifiEoli AlohawaA'i AlobaArid
reference (untransliterated): وَأَوضَحَتِ لمَجَلَّةُ أَنَّ لسَّبَبَ فِي ذَلِكَ يَرجِعُ إِلى تَضَيُّقِ لشُّعَبِ لهَوائِيَّةِ وَتَشَنُّجِها بِفِعلِ لهَواءِ لبارِد
predicted (untransliterated): وَأَوْضَحَتِ الْمَجَلَّةُ أَنَّ السَّبَبَ فِي ذَلِكَ يَرْجِعُ إِلَى تَضَيُّقِ الشُّعَبِ الْهَوَائِيَّةِ وَتَشَنُّجِهَا بِفِعْلِ الْهَوَاءِ الْبَارِد
--
reference: wabAta >atlitiykuw madriyd fiy SadArapi lt~artiybi lEAm~i bi>arbaEi niqAT
predicted: wabaAta >atolitiykuw madoriydo fiy SadaArapi Alt~arotiybi AloEaAm~i bi>arobaEi niqaAT
reference (untransliterated): وَباتَ أَتلِتِيكُو مَدرِيد فِي صَدارَةِ لتَّرتِيبِ لعامِّ بِأَربَعِ نِقاط
predicted (untransliterated): وَبَاتَ أَتْلِتِيكُو مَدْرِيدْ فِي صَدَارَةِ التَّرْتِيبِ الْعَامِّ بِأَرْبَعِ نِقَاط
--
reference: wabiAlt~Aliy tusAEidu EalY lwiqAyapi mina l<imsAk
predicted: wabiAt~aAliy tusaAEidu EalaY AlowiyqaAyapi mina Alo<imosaAko
reference (untransliterated): وَبِالتّالِي تُساعِدُ عَلى لوِقايَةِ مِنَ لإِمساك
predicted (untransliterated): وَبِاتَّالِي تُسَاعِدُ عَلَى الْوِيقَايَةِ مِنَ الْإِمْسَاكْ
--
reference: wa*alika biziyArapi jumhuwrin xAS~in jid~an sanawiy~an
predicted: wa*alika biziyaArapi jumohuwrK xaAS~K jid~AF sanawiy~AF
reference (untransliterated): وَذَلِكَ بِزِيارَةِ جُمهُورِن خاصِّن جِدَّن سَنَوِيَّن
predicted (untransliterated): وَذَلِكَ بِزِيَارَةِ جُمْهُورٍ خَاصٍّ جِدّاً سَنَوِيّاً
--
reference: wabisababi $ukuwkin bi>an~a lT~A}irapa kAnat tuqil~u idwArd snuwdun >al~a*iy tat~ahimuhu wA$inTun biAlt~ajas~us
predicted: wabisababi $ukuwkK bi>an~a AlT~aA}irapa kaAna Alt~uqil~u <idowaAbo snuwduno >al~a*iy tat~ahimuhu wa $inoTun biAlt~ajas~us
reference (untransliterated): وَبِسَبَبِ شُكُوكِن بِأَنَّ لطّائِرَةَ كانَت تُقِلُّ ِدوارد سنُودُن أَلَّذِي تَتَّهِمُهُ واشِنطُن بِالتَّجَسُّس
predicted (untransliterated): وَبِسَبَبِ شُكُوكٍ بِأَنَّ الطَّائِرَةَ كَانَ التُّقِلُّ إِدْوَابْ سنُودُنْ أَلَّذِي تَتَّهِمُهُ وَ شِنْطُن بِالتَّجَسُّس
--
reference: wabaEavuwA risAlapan <ilY lra~}iysi tataDama~nu maTAliba liEawdatihim
predicted: wabaEavuwA risaAlapF <ilaY Alr~a}iysi tataDam~anu maTaAliba liEawodatihimo
reference (untransliterated): وَبَعَثُوا رِسالَةَن إِلى لرَّئِيسِ تَتَضَمَّنُ مَطالِبَ لِعَودَتِهِم
predicted (untransliterated): وَبَعَثُوا رِسَالَةً إِلَى الرَّئِيسِ تَتَضَمَّنُ مَطَالِبَ لِعَوْدَتِهِمْ
--
reference: wabaEda $uhuwrin mina lHayrapi wAlqalaq taEara~fa kuwmAr EalY markazi Eabdi llhi bni zaydi lva~qAfiy~i lilta~Eriyfi biAl<islAm
predicted: wabaEoda $uhuwrK mina AloHayorapi waAloqalaqo taEar~afa kuwmaAra EalaY marokazi Eabodi All~aAhi bonizayodi Alv~aqaAfiy~i lilt~aEoriyfi biAlo<isolaAmo
reference (untransliterated): وَبَعدَ شُهُورِن مِنَ لحَيرَةِ والقَلَق تَعَرَّفَ كُومار عَلى مَركَزِ عَبدِ للهِ بنِ زَيدِ لثَّقافِيِّ لِلتَّعرِيفِ بِالإِسلام
predicted (untransliterated): وَبَعْدَ شُهُورٍ مِنَ الْحَيْرَةِ وَالْقَلَقْ تَعَرَّفَ كُومَارَ عَلَى مَرْكَزِ عَبْدِ اللَّاهِ بْنِزَيْدِ الثَّقَافِيِّ لِلتَّعْرِيفِ بِالْإِسْلَامْ
--
reference: wabiha*A yabqY mi}apun wasit~apun wav~l>avuwna muHtajazan fiy lmuEtaqali lmuviyri liljadal
predicted: wabiha*A yaboqaY mi}apN wasit~apN wavalaAvuwna muHotajazAF fiy AlomuEotaqali Alomuviyri lilojadaYlo
reference (untransliterated): وَبِهَذا يَبقى مِئَةُن وَسِتَّةُن وَثّلأَثُونَ مُحتَجَزَن فِي لمُعتَقَلِ لمُثِيرِ لِلجَدَل
predicted (untransliterated): وَبِهَذا يَبْقَى مِئَةٌ وَسِتَّةٌ وَثَلَاثُونَ مُحْتَجَزاً فِي الْمُعْتَقَلِ الْمُثِيرِ لِلْجَدَىلْ
--
reference: watustaxdamu fiy baEDi ld~uwal wasA}ilu EilAjin muxtalifapun
predicted: watusotaxodamu fiy baEoDi Ald~uwalo wasaA}ilu EilaAjK muxotalifapN
reference (untransliterated): وَتُستَخدَمُ فِي بَعضِ لدُّوَل وَسائِلُ عِلاجِن مُختَلِفَةُن
predicted (untransliterated): وَتُسْتَخْدَمُ فِي بَعْضِ الدُّوَلْ وَسَائِلُ عِلَاجٍ مُخْتَلِفَةٌ
--
reference: wataTaw~ara stixdAmu lT~A}irAti lEAmilapi biduwni Tay~Ar wabada>ati ls~AEAtu l*~akiy~apu al<inti$Ara waka*alika lT~ibAEapu lv~ulAviy~apu l>abEAd
predicted: wataTaw~ara AsotixodaAmu AlT~aA}iraAti AloEaAmilapi biduwni Tay~aAr wabada>ati Als~aAEaAtu Al*~akiy~apu Alo<inoti$aAra waka*alika AlT~ibaAEapu Alv~ulAviy~apu Al>aboEAd
reference (untransliterated): وَتَطَوَّرَ ستِخدامُ لطّائِراتِ لعامِلَةِ بِدُونِ طَيّار وَبَدَأَتِ لسّاعاتُ لذَّكِيَّةُ َلإِنتِشارَ وَكَذَلِكَ لطِّباعَةُ لثُّلاثِيَّةُ لأَبعاد
predicted (untransliterated): وَتَطَوَّرَ اسْتِخْدَامُ الطَّائِرَاتِ الْعَامِلَةِ بِدُونِ طَيَّار وَبَدَأَتِ السَّاعَاتُ الذَّكِيَّةُ الْإِنْتِشَارَ وَكَذَلِكَ الطِّبَاعَةُ الثُّلاثِيَّةُ الأَبْعاد
--
reference: wajA'a ha*A lqarAr baEda <iElAni lsa~Euwdiya~pi taxfiyDa >aEdAdi lHuja~Aji ha*A lEAm
predicted: wajaA'a ha*aA AloqaraAro baEoda <iEolaAni Als~uEuwdiy~api taxofiyDa >aEodaAdi AloHuj~aAji ha*aA AloEaAmo
reference (untransliterated): وَجاءَ هَذا لقَرار بَعدَ إِعلانِ لسَّعُودِيَّةِ تَخفِيضَ أَعدادِ لحُجَّاجِ هَذا لعام
predicted (untransliterated): وَجَاءَ هَذَا الْقَرَارْ بَعْدَ إِعْلَانِ السُّعُودِيَّةِ تَخْفِيضَ أَعْدَادِ الْحُجَّاجِ هَذَا الْعَامْ
--
reference: wajA'ati l>arqAmu SAdimapan fiy mA yaxuS~u l$~arqa l>awsaT
predicted: wajaA'api Alo>aroqaAmu SaAdimapF fiymaA yaxuS~u Al$~aroqa Alo>awoSaTo
reference (untransliterated): وَجاءَتِ لأَرقامُ صادِمَةَن فِي ما يَخُصُّ لشَّرقَ لأَوسَط
predicted (untransliterated): وَجَاءَةِ الْأَرْقَامُ صَادِمَةً فِيمَا يَخُصُّ الشَّرْقَ الْأَوْصَطْ
--
reference: waSadarati lr~asA}il bi<ismi mubdiEiy wafan~Aniy miSra
predicted: wasaDarati Alr~asaA'ilo bi<isomi mubodiEi wafan~aAniy miSora
reference (untransliterated): وَصَدَرَتِ لرَّسائِل بِإِسمِ مُبدِعِي وَفَنّانِي مِصرَ
predicted (untransliterated): وَسَضَرَتِ الرَّسَاءِلْ بِإِسْمِ مُبْدِعِ وَفَنَّانِي مِصْرَ
--
reference: wafiy ftitAHi lmu&tamari qAlati l$~AEirapu $ariyfapa ls~ay~id <in~a lEaq~Ada it~axa*a mina lqirA'api wAl<iT~ilAEi EalY kul~i lEuluwm wamuxtalafi lHaDArAt silAHan yuHaT~imu bihi lS~anamiy~apa wayaksiru lmuHar~amAt
predicted: wafiy AfotitaAHi Alomu&otamari qaAlati Al$~aAEirapu $ariyfapa Als~ay~ido <in~a AloEaq~aAda Alt~axa*a mina AloqiraA'api waliADoTilaAEi EalaY kul~i AloEuluwmo wamuxotalifi AloHaDaAraAt silaAHAF yuHaT~i mgubihi AlS~anamiy~apa wayakosiru AlomuHar~amaAt
reference (untransliterated): وَفِي فتِتاحِ لمُؤتَمَرِ قالَتِ لشّاعِرَةُ شَرِيفَةَ لسَّيِّد إِنَّ لعَقّادَ ِتَّخَذَ مِنَ لقِراءَةِ والإِطِّلاعِ عَلى كُلِّ لعُلُوم وَمُختَلَفِ لحَضارات سِلاحَن يُحَطِّمُ بِهِ لصَّنَمِيَّةَ وَيَكسِرُ لمُحَرَّمات
predicted (untransliterated): وَفِي افْتِتَاحِ الْمُؤْتَمَرِ قَالَتِ الشَّاعِرَةُ شَرِيفَةَ السَّيِّدْ إِنَّ الْعَقَّادَ التَّخَذَ مِنَ الْقِرَاءَةِ وَلِاضْطِلَاعِ عَلَى كُلِّ الْعُلُومْ وَمُخْتَلِفِ الْحَضَارَات سِلَاحاً يُحَطِّ مغُبِهِ الصَّنَمِيَّةَ وَيَكْسِرُ الْمُحَرَّمَات
--
reference: wafiy kuwryA ljanuwbiy~api taquwmu lHukuwmapu bitamwiyli musta$fayAtin liEilAji ha*A l<idmAni l~a*iy yuEtabaru mu$kilapan qawmiy~apan
predicted: wafiy kuwriyaA Alojanuwbiy~api taquwmu AloHukuwmapu bitamowiyli musota$ofayaAtK liEilaAji ha*aA Alo<idomaAni Al~a*iy yuEotabaru mu$okilapF qawomiy~apF
reference (untransliterated): وَفِي كُوريا لجَنُوبِيَّةِ تَقُومُ لحُكُومَةُ بِتَموِيلِ مُستَشفَياتِن لِعِلاجِ هَذا لإِدمانِ لَّذِي يُعتَبَرُ مُشكِلَةَن قَومِيَّةَن
predicted (untransliterated): وَفِي كُورِيَا الْجَنُوبِيَّةِ تَقُومُ الْحُكُومَةُ بِتَمْوِيلِ مُسْتَشْفَيَاتٍ لِعِلَاجِ هَذَا الْإِدْمَانِ الَّذِي يُعْتَبَرُ مُشْكِلَةً قَوْمِيَّةً
--
reference: wakAna l>amalu >an takuwna ha*ihi ld~iymuqrATiy~Atu maSHuwbapan bi>adA'in tanmawiy~in muxtalif
predicted: wakAna Alo>amalu >ano takuwna ha*ihi Ald~iymuwqoraATiy~aAtu maSoHuwbapF bi>adaA'K tF mawiy~K muxotalifo
reference (untransliterated): وَكانَ لأَمَلُ أَن تَكُونَ هَذِهِ لدِّيمُقراطِيّاتُ مَصحُوبَةَن بِأَداءِن تَنمَوِيِّن مُختَلِف
predicted (untransliterated): وَكانَ الْأَمَلُ أَنْ تَكُونَ هَذِهِ الدِّيمُوقْرَاطِيَّاتُ مَصْحُوبَةً بِأَدَاءٍ تً مَوِيٍّ مُخْتَلِفْ
--
reference: wakatabuwA fiy dawriy~api lkul~iy~api l>amiyrikiy~api li>amrADi lqalb >an~a ls~umnapa tartabiTu biHuduwvi tagayiyrAt fiy lqalbi ladY lbAligiyn
predicted: wakatabuwA fiy daworiy~api Alokul~iy~api Alo>amiyriykiy~api li>amoraADi Aloqalo >an~a Als~umonapa tarotabiTu biHuduwvi tagoyiyraAt fiy Aloqalobi ladaY AlobaAligiyno
reference (untransliterated): وَكَتَبُوا فِي دَورِيَّةِ لكُلِّيَّةِ لأَمِيرِكِيَّةِ لِأَمراضِ لقَلب أَنَّ لسُّمنَةَ تَرتَبِطُ بِحُدُوثِ تَغَيِيرات فِي لقَلبِ لَدى لبالِغِين
predicted (untransliterated): وَكَتَبُوا فِي دَوْرِيَّةِ الْكُلِّيَّةِ الْأَمِيرِيكِيَّةِ لِأَمْرَاضِ الْقَلْ أَنَّ السُّمْنَةَ تَرْتَبِطُ بِحُدُوثِ تَغْيِيرَات فِي الْقَلْبِ لَدَى الْبَالِغِينْ
--
reference: wakul~u *alika bimuHtawYan munxafiDin lilgAyapi mina ls~uErAti lHarAriy~api
predicted: wakul~u *alika bimuHotawAF munoxafiDK lilogaAyapi mina Als~uEoraAti AloHaraAriy~api
reference (untransliterated): وَكُلُّ ذَلِكَ بِمُحتَوىَن مُنخَفِضِن لِلغايَةِ مِنَ لسُّعراتِ لحَرارِيَّةِ
predicted (untransliterated): وَكُلُّ ذَلِكَ بِمُحْتَواً مُنْخَفِضٍ لِلْغَايَةِ مِنَ السُّعْرَاتِ الْحَرَارِيَّةِ
--
reference: wakul~amA zAdat kamiy~apu ls~uk~ari lmutanAwalapi maEa lt~amri taqil~u fA}idatuhu lgi*A}iy~apu
predicted: wakul~amaA zaAdato kam~ay~apu Als~uk~ari AlomutanaAwalapi maEa Alotamori taqil~u faA}idatuhu Alogi*aA}iy~apu
reference (untransliterated): وَكُلَّما زادَت كَمِيَّةُ لسُّكَّرِ لمُتَناوَلَةِ مَعَ لتَّمرِ تَقِلُّ فائِدَتُهُ لغِذائِيَّةُ
predicted (untransliterated): وَكُلَّمَا زَادَتْ كَمَّيَّةُ السُّكَّرِ الْمُتَنَاوَلَةِ مَعَ الْتَمْرِ تَقِلُّ فَائِدَتُهُ الْغِذَائِيَّةُ
--
reference: walA yazAlu ha*A lbaladu mutamas~ikan bitaqwiymi lkaniysapi lqibTiy~api >almaEruwfi maHal~iy~an biAlt~aqwiymi l<ivyuwbiy~i
predicted: walaA yazaAlu ha*aA Alobaladu mutamas~ikAF bitaqowiymi Alokaniysapi AloqiboTiy~api >alomaEoruwfi maHal~iy~AF biAlt~aqowiymi Alo<ivoyuwbiy~i
reference (untransliterated): وَلا يَزالُ هَذا لبَلَدُ مُتَمَسِّكَن بِتَقوِيمِ لكَنِيسَةِ لقِبطِيَّةِ أَلمَعرُوفِ مَحَلِّيَّن بِالتَّقوِيمِ لإِثيُوبِيِّ
predicted (untransliterated): وَلَا يَزَالُ هَذَا الْبَلَدُ مُتَمَسِّكاً بِتَقْوِيمِ الْكَنِيسَةِ الْقِبْطِيَّةِ أَلْمَعْرُوفِ مَحَلِّيّاً بِالتَّقْوِيمِ الْإِثْيُوبِيِّ
--
reference: walaEibati lxibrapu dawrahA fiy tatwiyji EA$uwra lxAmisi EAlamiy~an
predicted: walaEibapi Aloxiborapu daworahaA fiy tatowiyji EaA$uwra AloxaAmisi EaAlamiy~AF
reference (untransliterated): وَلَعِبَتِ لخِبرَةُ دَورَها فِي تَتوِيجِ عاشُورَ لخامِسِ عالَمِيَّن
predicted (untransliterated): وَلَعِبَةِ الْخِبْرَةُ دَوْرَهَا فِي تَتْوِيجِ عَاشُورَ الْخَامِسِ عَالَمِيّاً
--
reference: tatawAlY lEamalyAtu ls~ir~iyapa biAlHuduwv
predicted: tatawaAlaY AloEamaliy~aAtu Als~ir~iy~apu biAloHuduwv
reference (untransliterated): تَتَوالى لعَمَلياتُ لسِّرِّيَةَ بِالحُدُوث
predicted (untransliterated): تَتَوَالَى الْعَمَلِيَّاتُ السِّرِّيَّةُ بِالْحُدُوث
--
reference: wamin tilka ls~ilaE >al$~Ayu lS~iyniy~u wAlwaraqu wAlbAruwdu wAlbuwSilapu
predicted: wamino tiloka Als~ilaE >al$~aAyu AlS~iyniy~u waAlowaraqu waAlobaAruwdu waAlobuwSilapu
reference (untransliterated): وَمِن تِلكَ لسِّلَع أَلشّايُ لصِّينِيُّ والوَرَقُ والبارُودُ والبُوصِلَةُ
predicted (untransliterated): وَمِنْ تِلْكَ السِّلَع أَلشَّايُ الصِّينِيُّ وَالْوَرَقُ وَالْبَارُودُ وَالْبُوصِلَةُ
--
reference: wamanaHa >AbA}uhumu lqudrapa EalY lt~aHak~umi fiy kayfiy~api stixdAmi ha*ihi lxidmapi
predicted: wamanaHa |baA&uhumu Aloqudorapa EalaY Alt~aHak~umi fiy kayofiy~api AsotixodaAmi ha*ihi Aloxidomapi
reference (untransliterated): وَمَنَحَ أابائُهُمُ لقُدرَةَ عَلى لتَّحَكُّمِ فِي كَيفِيَّةِ ستِخدامِ هَذِهِ لخِدمَةِ
predicted (untransliterated): وَمَنَحَ آبَاؤُهُمُ الْقُدْرَةَ عَلَى التَّحَكُّمِ فِي كَيْفِيَّةِ اسْتِخْدَامِ هَذِهِ الْخِدْمَةِ
--
reference: waya>mulu lbAHivuwna taTwiyra Hubuwbin >aw nusxapin mina ld~awA' qAbilapan lilHaqni xilAla xamsi sanawAt
predicted: waya>omulu AlobaAHivuwna taTowiyra HuwuwbK >awo nusoxapK mina Ald~awaA qaAbilapF liloHaqoni xilaAla xamosi sanawaAt
reference (untransliterated): وَيَأمُلُ لباحِثُونَ تَطوِيرَ حُبُوبِن أَو نُسخَةِن مِنَ لدَّواء قابِلَةَن لِلحَقنِ خِلالَ خَمسِ سَنَوات
predicted (untransliterated): وَيَأْمُلُ الْبَاحِثُونَ تَطْوِيرَ حُوُوبٍ أَوْ نُسْخَةٍ مِنَ الدَّوَا قَابِلَةً لِلْحَقْنِ خِلَالَ خَمْسِ سَنَوَات
--
reference: wayastaxdimu lbarnAmaju niZAman saHAbiy~an lil*~akA'i lS~unEiy~i yasmaHu lahu bitaHliyli l<iymA'Ati wAlt~aEAbiyr
predicted: wayasotaxodimu AlobaronaAmaju niZaAmAF saHaAbiy~AF lil*~akaA'i AlS~unoEiy~i yasomaHu lahu bitaHoliyli Alo<iymaA'aAti waAlt~aEaAbiyro
reference (untransliterated): وَيَستَخدِمُ لبَرنامَجُ نِظامَن سَحابِيَّن لِلذَّكاءِ لصُّنعِيِّ يَسمَحُ لَهُ بِتَحلِيلِ لإِيماءاتِ والتَّعابِير
predicted (untransliterated): وَيَسْتَخْدِمُ الْبَرْنَامَجُ نِظَاماً سَحَابِيّاً لِلذَّكَاءِ الصُّنْعِيِّ يَسْمَحُ لَهُ بِتَحْلِيلِ الْإِيمَاءَاتِ وَالتَّعَابِيرْ
--
reference: wayuEtabaru mihrajAnu qarTAja ls~iynamA}iy~u min >aEraqi mihrajAnAti >afriyqyA
predicted: wayuEotabaru mihorajaAnu qaroTaAja Als~iynamaA}iy~u mino >aEoraqi mihorajaAnaAti >afriyqoyaA
reference (untransliterated): وَيُعتَبَرُ مِهرَجانُ قَرطاجَ لسِّينَمائِيُّ مِن أَعرَقِ مِهرَجاناتِ أَفرِيقيا
predicted (untransliterated): وَيُعْتَبَرُ مِهْرَجَانُ قَرْطَاجَ السِّينَمَائِيُّ مِنْ أَعْرَقِ مِهْرَجَانَاتِ أَفرِيقْيَا
--
reference: wayaquwlu lEulamA'u <in~ahu min gayri lmuraj~aHi >an tuTaw~ira lbaktiyryA lmuEdiyapu muqAwamapan Did~a lEilAji ljadiyd >al~a*iy >aSbaHa mutAHan biAlfiEl fiy $akli marhamin lil>amrADi ljildiy~api
predicted: wayaquwlu AloEulamaA'u <in~ahu mino gayori Alomuraj~aHi >ano tuTaw~ira AlobakotiyroyaA AlomuEodiyapu muqaAwamapF Did~a AloEilaAji lojadiyd >al~a*iy >aSobaHa mutaAHAF biAlofiEol fiy $akoli marohamK lilo>amoraADi Alojiylodiy~api
reference (untransliterated): وَيَقُولُ لعُلَماءُ إِنَّهُ مِن غَيرِ لمُرَجَّحِ أَن تُطَوِّرَ لبَكتِيريا لمُعدِيَةُ مُقاوَمَةَن ضِدَّ لعِلاجِ لجَدِيد أَلَّذِي أَصبَحَ مُتاحَن بِالفِعل فِي شَكلِ مَرهَمِن لِلأَمراضِ لجِلدِيَّةِ
predicted (untransliterated): وَيَقُولُ الْعُلَمَاءُ إِنَّهُ مِنْ غَيْرِ الْمُرَجَّحِ أَنْ تُطَوِّرَ الْبَكْتِيرْيَا الْمُعْدِيَةُ مُقَاوَمَةً ضِدَّ الْعِلَاجِ لْجَدِيد أَلَّذِي أَصْبَحَ مُتَاحاً بِالْفِعْل فِي شَكْلِ مَرْهَمٍ لِلْأَمْرَاضِ الْجِيلْدِيَّةِ
--
reference: wayumkinuka lHuSuwlu EalY taTbiyqAtin lilt~adriybAti l>asAsiy~api maj~Anan
predicted: wayumokinuka AloHuSuwlu EalaY taTobiyqaAtK liltadoriybaAti Alo>asaAsiy~api maj~aAnAF
reference (untransliterated): وَيُمكِنُكَ لحُصُولُ عَلى تَطبِيقاتِن لِلتَّدرِيباتِ لأَساسِيَّةِ مَجّانَن
predicted (untransliterated): وَيُمْكِنُكَ الْحُصُولُ عَلَى تَطْبِيقَاتٍ لِلتَدْرِيبَاتِ الْأَسَاسِيَّةِ مَجَّاناً
--
```
## Fine-Tuning Script
You can find the script used to produce this model
[here](https://github.com/elgeish/transformers/blob/cfc0bd01f2ac2ea3a5acc578ef2e204bf4304de7/examples/research_projects/wav2vec2/finetune_base_arabic_speech_corpus.sh).
|
elgeish/wav2vec2-large-lv60-timit-asr
|
elgeish
| 2021-07-06T01:39:41Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"en",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- timit_asr
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
---
# Wav2Vec2-Large-LV60-TIMIT
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60)
on the [timit_asr dataset](https://huggingface.co/datasets/timit_asr).
When using this model, make sure that your speech input is sampled at 16kHz.
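If your audio is stored at a different sampling rate, resample it before calling the processor. Below is a minimal sketch using torchaudio; the file name and its 48kHz source rate are placeholders:
```python
import torchaudio

speech_array, source_rate = torchaudio.load("audio.wav")  # e.g. a 48 kHz recording
resampler = torchaudio.transforms.Resample(orig_freq=source_rate, new_freq=16_000)
speech_16k = resampler(speech_array).squeeze().numpy()  # ready for Wav2Vec2Processor
```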
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model_name = "elgeish/wav2vec2-large-lv60-timit-asr"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.eval()
dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
char_translations = str.maketrans({"-": " ", ",": "", ".": "", "?": ""})
def prepare_example(example):
example["speech"], _ = sf.read(example["file"])
example["text"] = example["text"].translate(char_translations)
example["text"] = " ".join(example["text"].split()) # clean up whitespaces
example["text"] = example["text"].lower()
return example
dataset = dataset.map(prepare_example, remove_columns=["file"])
inputs = processor(dataset["speech"], sampling_rate=16000, return_tensors="pt", padding="longest")
with torch.no_grad():
predicted_ids = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted_ids[predicted_ids == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
predicted_transcripts = processor.tokenizer.batch_decode(predicted_ids)
for reference, predicted in zip(dataset["text"], predicted_transcripts):
print("reference:", reference)
print("predicted:", predicted)
print("--")
```
Here's the output:
```
reference: the emblem depicts the acropolis all aglow
predicted: the amblum depicts the acropolis all a glo
--
reference: don't ask me to carry an oily rag like that
predicted: don't ask me to carry an oily rag like that
--
reference: they enjoy it when i audition
predicted: they enjoy it when i addition
--
reference: set aside to dry with lid on sugar bowl
predicted: set aside to dry with a litt on shoogerbowl
--
reference: a boring novel is a superb sleeping pill
predicted: a bor and novel is a suberb sleeping peel
--
reference: only the most accomplished artists obtain popularity
predicted: only the most accomplished artists obtain popularity
--
reference: he has never himself done anything for which to be hated which of us has
predicted: he has never himself done anything for which to be hated which of us has
--
reference: the fish began to leap frantically on the surface of the small lake
predicted: the fish began to leap frantically on the surface of the small lake
--
reference: or certain words or rituals that child and adult go through may do the trick
predicted: or certain words or rituals that child an adult go through may do the trick
--
reference: are your grades higher or lower than nancy's
predicted: are your grades higher or lower than nancies
--
```
## Fine-Tuning Script
You can find the script used to produce this model
[here](https://github.com/elgeish/transformers/blob/8ee49e09c91ffd5d23034ce32ed630d988c50ddf/examples/research_projects/wav2vec2/finetune_large_lv60_timit_asr.sh).
**Note:** This model can be fine-tuned further;
[trainer_state.json](https://huggingface.co/elgeish/wav2vec2-large-lv60-timit-asr/blob/main/trainer_state.json)
shows useful details, namely the last state (this checkpoint):
```json
{
"epoch": 29.51,
"eval_loss": 25.424150466918945,
"eval_runtime": 182.9499,
"eval_samples_per_second": 9.183,
"eval_wer": 0.1351704233095107,
"step": 8500
}
```
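For instance, the logged state can be inspected programmatically. The following is a minimal sketch that assumes the standard Trainer state layout and fetches the file with `huggingface_hub`:
```python
import json

from huggingface_hub import hf_hub_download

# fetch trainer_state.json from the model repo and print the last logged entry
path = hf_hub_download("elgeish/wav2vec2-large-lv60-timit-asr", "trainer_state.json")
with open(path) as f:
    state = json.load(f)
print(state["log_history"][-1])  # matches the checkpoint state shown above
```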
|
dundar/wav2vec2-large-xlsr-53-lithuanian
|
dundar
| 2021-07-06T01:34:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"lt",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: lt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Lithuanian by Enes Burak Dundar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lt
type: common_voice
args: lt
metrics:
- name: Test WER
type: wer
value: 35.87
---
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("dundar/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("dundar/wav2vec2-large-xlsr-53-lithuanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("dundar/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("dundar/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on batches of preprocessed speech.
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 35.87 %
## Training
All Common Voice splits except the `test` set were used for training.
The script used for training can be found [here](https://github.com/ebdundar/).
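As a rough sketch, such a training set could be assembled with the `datasets` library as shown below; combining only the `train` and `validation` splits here is an assumption, and the exact recipe is in the linked script:
```python
from datasets import load_dataset

# all training data came from splits other than `test`
train_dataset = load_dataset("common_voice", "lt", split="train+validation")
```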
|
distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency
|
distractedm1nd
| 2021-07-06T01:32:06Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- audio
- automatic-speech-recognition
metrics:
- wer
license: mit
---
We took `facebook/wav2vec2-large-960h` and fine-tuned it using 1400 audio clips (around 10-15 seconds each) from various cryptocurrency-related podcasts. To label the data, we downloaded cryptocurrency podcasts from YouTube with their subtitle data and split the clips up by sentence. We then compared the YouTube transcriptions with the output of `facebook/wav2vec2-large-960h` to correct many mistakes in the YouTube transcriptions. We can probably achieve better results with more data cleanup.
On our data, this model achieved a WER of 13.1%, while `facebook/wav2vec2-large-960h` only reached a WER of 27%.
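For reference, WER figures like these can be computed with the `wer` metric from the `datasets` library. The following is a minimal sketch in which `references` and `predictions` are hypothetical transcript lists, not our evaluation data:
```python
from datasets import load_metric

wer = load_metric("wer")
references = ["bitcoin hit a new all time high today"]    # ground-truth transcripts
predictions = ["bitcoin hit a new all time high to day"]  # model outputs
print("WER: {:.1f}%".format(100 * wer.compute(predictions=predictions, references=references)))
```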
## Usage
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency")
model = Wav2Vec2ForCTC.from_pretrained("distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency")
filename = "INSERT_FILENAME"
audio, sampling_rate = sf.read(filename)
input_values = processor(audio, return_tensors="pt", padding="longest", sampling_rate=sampling_rate).input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```
|
danurahul/wav2vec2-large-xlsr-or
|
danurahul
| 2021-07-06T01:22:42Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"or",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: or
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: odia XLSR Wav2Vec2 Large 2000
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice or
type: common_voice
args: or
metrics:
- name: Test WER
type: wer
value: 54.6
---
# Wav2Vec2-Large-XLSR-53-or
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on batches of preprocessed speech.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 54.6 %
## Training
The Common Voice `train`, `validation`, and `test` datasets were used for training as well as for prediction and testing.
The script used for training can be found [here](https://github.com/rahul-art/wav2vec2_or).
|
crang/wav2vec2-large-xlsr-53-tatar
|
crang
| 2021-07-06T00:58:16Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tt",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Tatar XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tt
type: common_voice
args: tt
metrics:
- name: Test WER
type: wer
value: 30.93
---
# Wav2Vec2-Large-XLSR-53-Tatar
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tatar using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tatar test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\\%\\\]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 30.93 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
chompk/wav2vec2-large-xlsr-thai-tokenized
|
chompk
| 2021-07-06T00:36:51Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning",
"th",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: th
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning
license: apache-2.0
---
# Wav2Vec2-Large-XLSR-53 in Thai Language (Train with deepcut tokenizer)
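## Usage
The card stops short of a usage example; below is a minimal transcription sketch in the same style as the other XLSR cards in this collection. It is an untested assumption that the checkpoint exposes the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` interface; `example.wav` is a placeholder path, and because the training targets were tokenized with deepcut, decoded output may contain spaces at token boundaries.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("chompk/wav2vec2-large-xlsr-thai-tokenized")
model = Wav2Vec2ForCTC.from_pretrained("chompk/wav2vec2-large-xlsr-thai-tokenized")

# Load any mono audio file and resample it to the expected 16 kHz
speech_array, sampling_rate = torchaudio.load("example.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```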
|
ceyda/wav2vec2-base-760
|
ceyda
| 2021-07-06T00:16:35Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
Pretrained on ~720h of Turkish speech data
TBA
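Until the card is filled in, here is a minimal feature-extraction sketch; it assumes (untested) that the checkpoint loads with the standard base Wav2Vec2 classes, and `example.wav` is a placeholder path:
```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("ceyda/wav2vec2-base-760")
model = Wav2Vec2Model.from_pretrained("ceyda/wav2vec2-base-760")

# Load audio and resample to the 16 kHz expected by wav2vec2
speech_array, sampling_rate = torchaudio.load("example.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(inputs.input_values).last_hidden_state  # (batch, time, hidden)
print(hidden_states.shape)
```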
|
cahya/wav2vec2-large-xlsr-turkish-artificial-cv
|
cahya
| 2021-07-06T00:02:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 14.61
---
# Wav2Vec2-Large-XLSR-Turkish
This is the Wav2Vec2-Large-XLSR-Turkish-Artificial-CV model:
[cahya/wav2vec2-large-xlsr-turkish-artificial](https://huggingface.co/cahya/wav2vec2-large-xlsr-turkish-artificial)
fine-tuned on the [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 14.61 %
## Training
The Common Voice `train`, `validation`, `other`, and `invalidated` datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
cahya/wav2vec2-large-xlsr-sundanese
|
cahya
| 2021-07-06T00:00:07Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"su",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: su
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Sundanese by cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR High quality TTS data for Sundanese
type: OpenSLR
args: su
metrics:
- name: Test WER
type: wer
value: 6.19
---
# Wav2Vec2-Large-XLSR-Sundanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [OpenSLR High quality TTS data for Sundanese](https://openslr.org/44/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from datasets.utils.download_manager import DownloadManager
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pathlib import Path
import pandas as pd
def load_dataset_sundanese():
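    # Download the female/male OpenSLR SLR44 archives and build a Dataset from their line_index transcription TSVs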
urls = [
"https://www.openslr.org/resources/44/su_id_female.zip",
"https://www.openslr.org/resources/44/su_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"su_id_female/wavs",
Path(download_dirs[1])/"su_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"su_id_female/line_index.tsv",
Path(download_dirs[1])/"su_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t4?\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t\t', names=["path", "sentence"]))
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i].apply(lambda row: str(data_dirs[i]) + "/" + row + ".wav", axis=1)
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_sundanese()
test_dataset = dataset['test']
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows or using the [notebook](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb).
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets.utils.download_manager import DownloadManager
import re
from pathlib import Path
import pandas as pd
def load_dataset_sundanese():
urls = [
"https://www.openslr.org/resources/44/su_id_female.zip",
"https://www.openslr.org/resources/44/su_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"su_id_female/wavs",
Path(download_dirs[1])/"su_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"su_id_female/line_index.tsv",
Path(download_dirs[1])/"su_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t4?\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t\t', names=["path", "sentence"]))
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i].apply(lambda row: str(data_dirs[i]) + "/" + row + ".wav", axis=1)
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_sundanese()
test_dataset = dataset['test']
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”_\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 6.19 %
## Training
[OpenSLR High quality TTS data for Sundanese](https://openslr.org/44/) was used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb)
and the one used to [evaluate it](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb).
|
cahya/wav2vec2-large-xlsr-javanese
|
cahya
| 2021-07-05T23:57:54Z | 225 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"jv",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: jv
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Javanese by cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR High quality TTS data for Javanese
type: OpenSLR
args: jv
metrics:
- name: Test WER
type: wer
value: 17.61
---
# Wav2Vec2-Large-XLSR-Javanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [OpenSLR High quality TTS data for Javanese](https://openslr.org/41/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets.utils.download_manager import DownloadManager
from pathlib import Path
import pandas as pd
def load_dataset_javanese():
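    # Download the female/male OpenSLR SLR41 archives and build a Dataset from their line_index transcription TSVs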
urls = [
"https://www.openslr.org/resources/41/jv_id_female.zip",
"https://www.openslr.org/resources/41/jv_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"jv_id_female/wavs",
Path(download_dirs[1])/"jv_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"jv_id_female/line_index.tsv",
Path(download_dirs[1])/"jv_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t', names=["path", "client_id", "sentence"]))
dfs[1] = dfs[1].drop(["client_id"], axis=1)
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i].apply(lambda row: str(data_dirs[i]) + "/" + row + ".wav", axis=1)
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_javanese()
test_dataset = dataset['test']
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows or using this
[notebook](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb)
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from datasets.utils.download_manager import DownloadManager
from pathlib import Path
import pandas as pd
def load_dataset_javanese():
urls = [
"https://www.openslr.org/resources/41/jv_id_female.zip",
"https://www.openslr.org/resources/41/jv_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"jv_id_female/wavs",
Path(download_dirs[1])/"jv_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"jv_id_female/line_index.tsv",
Path(download_dirs[1])/"jv_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t', names=["path", "client_id", "sentence"]))
dfs[1] = dfs[1].drop(["client_id"], axis=1)
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i].apply(lambda row: str(data_dirs[i]) + "/" + row + ".wav", axis=1)
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_javanese()
test_dataset = dataset['test']
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”_\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.61 %
## Training
[OpenSLR High quality TTS data for Javanese](https://openslr.org/41/) was used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb)
and the one used to [evaluate it](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb).
|
cahya/wav2vec2-large-xlsr-indonesian
|
cahya
| 2021-07-05T23:55:41Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian by cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 25.86
---
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.86 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
cahya/wav2vec2-large-xlsr-indonesian-artificial
|
cahya
| 2021-07-05T23:51:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian with Artificial Voice by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 51.69
---
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Indonesian Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 51.69 %
## Training
The Artificial Common Voice `train`, `validation`, and ... datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
eduardofv/stsb-m-mt-es-distilbert-base-uncased
|
eduardofv
| 2021-07-05T23:29:52Z | 27 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"sentence-similarity",
"es",
"dataset:stsb_multi_mt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language: es
datasets:
- stsb_multi_mt
tags:
- sentence-similarity
- sentence-transformers
---
# distilbert-base-uncased trained for Semantic Textual Similarity in Spanish
This is a test model that was fine-tuned using the Spanish datasets from [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt) in order to understand and benchmark STS models.
## Model and training data description
This model was built taking `distilbert-base-uncased` and training it on a Semantic Textual Similarity task using a modified version of the training script for STS from Sentence Transformers (the modified script is included in the repo). It was trained using the Spanish datasets from [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt), which are the STSBenchmark datasets automatically translated to other languages using deepl.com. Refer to the dataset repository for more details.
## Intended uses & limitations
This model was built just as a proof-of-concept on STS fine-tuning using Spanish data and no specific use other than getting a sense on how this training works.
## How to use
You may use it like any other STS-trained model to extract sentence embeddings; see the Sentence Transformers documentation.
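For instance, a minimal sketch (assuming the repository loads directly with `sentence-transformers`; the example sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("eduardofv/stsb-m-mt-es-distilbert-base-uncased")
sentences = ["El gato duerme en el sofá.", "Un felino descansa sobre el sillón."]
embeddings = model.encode(sentences, convert_to_tensor=True)
# Cosine similarity between the two sentence embeddings
print(util.pytorch_cos_sim(embeddings[0], embeddings[1]))
```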
## Training procedure
Use the included script to train the base model in Spanish. You can also try to train another model by passing its reference as the first argument, or train in any other language included in the training dataset.
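For reference, a minimal sketch of the standard Sentence Transformers STS recipe on the Spanish split; the hyperparameters below are placeholders, not the values used by the included script:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses, models
from torch.utils.data import DataLoader

# Wrap the plain transformer checkpoint with a mean-pooling layer
word_embedding = models.Transformer("distilbert-base-uncased")
pooling = models.Pooling(word_embedding.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_embedding, pooling])

# stsb_multi_mt scores pairs from 0 to 5; CosineSimilarityLoss expects labels in [0, 1]
train = load_dataset("stsb_multi_mt", "es", split="train")
examples = [
    InputExample(texts=[row["sentence1"], row["sentence2"]], label=row["similarity_score"] / 5.0)
    for row in train
]
train_dataloader = DataLoader(examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```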
## Evaluation results
Evaluating `distilbert-base-uncased` on the Spanish test dataset before training results in:
```
Cosine-Similarity : Pearson: 0.2980 Spearman: 0.4008
```
While the fine-tuned version with the defaults of the training script and the Spanish training dataset results in:
```
Cosine-Similarity : Pearson: 0.7451 Spearman: 0.7364
```
In our [STS Evaluation repository](https://github.com/eduardofv/sts_eval) we compare the performance of this model with other models from Sentence Transformers and Tensorflow Hub using the standard STSBenchmark and the 2017 STSBenchmark Task 3 for Spanish.
## Resources
- Training dataset [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt)
- Sentence Transformers [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
- Check [sts_eval](https://github.com/eduardofv/sts_eval) for a comparison with Tensorflow and Sentence-Transformers models
- Check the [development environment to run the scripts and evaluation](https://github.com/eduardofv/ai-denv)
|
birgermoell/wav2vec2-large-xlsr-hungarian
|
birgermoell
| 2021-07-05T23:16:31Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hu",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: hu
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Hungarian by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hu
type: common_voice
args: hu
metrics:
- name: Test WER
type: wer
value: 46.97
---
# Wav2Vec2-Large-XLSR-53-Hungarian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hu", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Hungarian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 46.97 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1c8LS-RP-RMukvXkpqJ9kLXRWmRKFjevs?usp=sharing)
|
birgermoell/wav2vec2-large-xlrs-estonian
|
birgermoell
| 2021-07-05T23:07:04Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"et",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: et
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Estonian by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Estonian
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 36.951816
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.951816 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1VcWT92vBCwVn-5d-mkYxhgILPr11OHfR?usp=sharing).
|
birgermoell/swedish-common-voice-vox-voxpopuli
|
birgermoell
| 2021-07-05T23:02:25Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"et",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: et
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: common-voice-vox-populi-swedish by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Vox Populi Swedish
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 36.951816
---
# common-voice-vox-populi-swedish
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) on Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("birgermoell/birgermoell/common-voice-vox-populi-swedish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 22.684600 %
|
arampacha/wav2vec2-large-xlsr-ukrainian
|
arampacha
| 2021-07-05T22:02:32Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"uk",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: uk
dataset: common_voice
metrics: wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Ukrainian XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice uk
type: common_voice
args: uk
metrics:
- name: Test WER
type: wer
value: 29.89
---
# Wav2Vec2-Large-XLSR-53-Ukrainian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and a sample of the [M-AILABS Ukrainian Corpus](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "uk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "uk", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", '«', '»', '—', '…', '(', ')', '*', '”', '“']
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize characters
def speech_file_to_array_fn(batch):
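    # Normalize apostrophes, map Latin i/o/a and Russian "ы" to their Ukrainian Cyrillic lookalikes, and clean stray dashes and double spaces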
batch["sentence"] = re.sub(re.compile("['`]"), '’', batch['sentence'])
batch["sentence"] = re.sub(re.compile(chars_to_ignore_regex), '', batch["sentence"]).lower().strip()
batch["sentence"] = re.sub(re.compile('i'), 'і', batch['sentence'])
batch["sentence"] = re.sub(re.compile('o'), 'о', batch['sentence'])
batch["sentence"] = re.sub(re.compile('a'), 'а', batch['sentence'])
batch["sentence"] = re.sub(re.compile('ы'), 'и', batch['sentence'])
batch["sentence"] = re.sub(re.compile("–"), '', batch['sentence'])
batch['sentence'] = re.sub(' ', ' ', batch['sentence'])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.89 %
## Training
The Common Voice `train` and `validation` datasets and a sample of the M-AILABS Ukrainian corpus were used for training.
The script used for training will be available [here](https://github.com/arampacha/hf-sprint-xlsr) soon.
|
anuragshas/wav2vec2-xlsr-53-pa-in
|
anuragshas
| 2021-07-05T21:47:48Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pa-IN
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Punjabi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pa-IN
type: common_voice
args: pa-IN
metrics:
- name: Test WER
type: wer
value: 58.05
---
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pa-IN", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pa-IN", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\।\’\'\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 58.05 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anuragshas/wav2vec2-large-xlsr-53-ia
|
anuragshas
| 2021-07-05T21:04:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ia",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ia
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Interlingua
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ia
type: common_voice
args: ia
metrics:
- name: Test WER
type: wer
value: 22.08
---
# Wav2Vec2-Large-XLSR-53-Interlingua
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Interlingua using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ia", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Interlingua test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ia", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model.to("cuda")
chars_to_ignore_regex = '[\.\,\!\?\-\"\:\;\'\“\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 22.08 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anton-l/wav2vec2-large-xlsr-53-tatar
|
anton-l
| 2021-07-05T20:40:41Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tt",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Tatar XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tt
type: common_voice
args: tt
metrics:
- name: Test WER
type: wer
value: 26.76
---
# Wav2Vec2-Large-XLSR-53-Tatar
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tatar using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tatar test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/tt.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/tt/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/tt/clips/"
def clean_sentence(sent):
sent = sent.lower()
# 'ё' is equivalent to 'е'
sent = sent.replace('ё', 'е')
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 26.76 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anton-l/wav2vec2-large-xlsr-53-romanian
|
anton-l
| 2021-07-05T20:20:21Z | 34,663 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ro",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ro
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Romanian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ro
type: common_voice
args: ro
metrics:
- name: Test WER
type: wer
value: 24.84
---
# Wav2Vec2-Large-XLSR-53-Romanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ro", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romanian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ro.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ro/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ro/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 24.84 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anton-l/wav2vec2-large-xlsr-53-mongolian
|
anton-l
| 2021-07-05T20:13:41Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mn",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: mn
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Mongolian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 38.53
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/mn.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/mn/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/mn/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 38.53 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anton-l/wav2vec2-large-xlsr-53-kyrgyz
|
anton-l
| 2021-07-05T19:53:54Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ky",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ky
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Kyrgyz XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ky
type: common_voice
args: ky
metrics:
- name: Test WER
type: wer
value: 31.88
---
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ky", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ky.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ky/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ky/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 31.88 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
aniltrkkn/wav2vec2-large-xlsr-53-turkish
|
aniltrkkn
| 2021-07-05T19:34:22Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-Large-XLSR-53-Turkish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 17.46
---
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from unicode_tr import unicode_tr
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = str(unicode_tr(re.sub(chars_to_ignore_regex, "", batch["sentence"])).lower())
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.46 %
## Training
The unicode_tr package is used to lower-case sentences, since Python's regular lower() does not handle Turkish dotted/dotless I correctly (a quick illustration follows).
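For example, assuming the `unicode_tr` package is installed:
```python
from unicode_tr import unicode_tr

word = "KADIN"
print(word.lower())              # 'kadin' — the dotless ı is lost
print(unicode_tr(word).lower())  # 'kadın' — Turkish-aware lower-casing
```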
Since training data is very limited for Turkish, all data was employed with a K-Fold (k=5) training approach (see the sketch after the argument list below). The best model out of the 5 trainings was uploaded. Training arguments:
--num_train_epochs="30" \\
--per_device_train_batch_size="32" \\
--evaluation_strategy="steps" \\
--activation_dropout="0.055" \\
--attention_dropout="0.094" \\
--feat_proj_dropout="0.04" \\
--hidden_dropout="0.047" \\
--layerdrop="0.041" \\
--learning_rate="2.34e-4" \\
--mask_time_prob="0.082" \\
--warmup_steps="250" \\
All trainings took ~20 hours on a GeForce RTX 3090 graphics card.
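A minimal sketch of the K-Fold split described above, assuming `dataset` is a 🤗 Datasets `Dataset` holding all Turkish examples (the shuffle and seed are illustrative assumptions):
```python
import numpy as np
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(np.arange(dataset.num_rows))):
    train_ds = dataset.select(train_idx)  # fine-tune one model on this fold
    val_ds = dataset.select(val_idx)      # validate on the held-out fold
    # the best of the five resulting checkpoints was uploaded
```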
|
anas/wav2vec2-large-xlsr-arabic
|
anas
| 2021-07-05T19:27:53Z | 147 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ar",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hasni XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 52.18
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice Corpus 4](https://commonvoice.mozilla.org/en/datasets) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic/")
model.to("cuda")
chars_to_ignore_regex = '[\,\؟\.\!\-\;\\:\'\"\☭\«\»\؛\—\ـ\_\،\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
batch["sentence"] = re.sub('[a-z]','',batch["sentence"])
batch["sentence"] = re.sub("[إأٱآا]", "ا", batch["sentence"])
noise = re.compile(""" ّ | # Tashdid
َ | # Fatha
ً | # Tanwin Fath
ُ | # Damma
ٌ | # Tanwin Damm
ِ | # Kasra
ٍ | # Tanwin Kasr
ْ | # Sukun
ـ # Tatwil/Kashida
""", re.VERBOSE)
batch["sentence"] = re.sub(noise, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.18 %
## Training
The Common Voice Corpus 4 `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/anashas/Fine-Tuning-of-XLSR-Wav2Vec2-on-Arabic)
Twitter: [here](https://twitter.com/hasnii_anas)
Email: anashasni146@gmail.com
|
amoghsgopadi/wav2vec2-large-xlsr-kn
|
amoghsgopadi
| 2021-07-05T19:21:53Z | 41 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"kn",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: kn
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Kannada by Amogh Gopadi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR kn
type: openslr
metrics:
- name: Test WER
type: wer
value: 27.08
---
# Wav2Vec2-Large-XLSR-53-Kannada
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kannada using the [OpenSLR SLR79](http://openslr.org/79/) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in Training Section.
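# A minimal sketch (an assumption about the layout): any dataset that exposes
# "path" and "sentence" columns works here, e.g. built from your own files.
from datasets import Dataset
test_dataset = Dataset.from_dict({
    "path": ["/path/to/audio_0001.wav"],  # hypothetical audio file
    "sentence": ["ಒಂದು ಉದಾಹರಣೆ ವಾಕ್ಯ"],   # hypothetical transcript
})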
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # The original data has a 48,000 Hz sampling rate; change this to match your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on 10% of the Kannada data on OpenSLR.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.08 %
## Training
90% of the OpenSLR Kannada dataset was used for training.
The colab notebook used for training can be found [here](https://colab.research.google.com/github/amoghgopadi/wav2vec2-xlsr-kannada/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Kannada_ASR.ipynb).
|
alokmatta/wav2vec2-large-xlsr-53-sw
|
alokmatta
| 2021-07-05T19:12:57Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sw",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: sw
datasets:
- ALFFA
- Gamayun
- IWSLT
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Swahili XLSR-53 Wav2Vec2.0 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: ALFFA sw
args: sw
metrics:
- name: Test WER
type: wer
value: WIP
---
# Wav2Vec2-Large-XLSR-53-Swahili
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swahili using the following datasets:
- [ALFFA](http://www.openslr.org/25/),
- [Gamayun](https://gamayun.translatorswb.org/download/gamayun-5k-english-swahili/)
- [IWSLT](https://iwslt.org/2021/low-resource)
When using this model, make sure that your speech input is sampled at 16kHz.
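Because these corpora ship in different formats, a typical first step is merging them into a single manifest of audio paths and transcripts. A minimal sketch, with hypothetical file names:
```python
# Hedged sketch: merging several corpora into one (path, sentence) manifest.
# The TSV paths below are assumptions about the local directory layout.
import pandas as pd

manifests = ["alffa/train.tsv", "gamayun/train.tsv", "iwslt/train.tsv"]
train_df = pd.concat([pd.read_csv(p, sep="\t") for p in manifests], ignore_index=True)
```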
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("alokmatta/wav2vec2-large-xlsr-53-sw")
model = Wav2Vec2ForCTC.from_pretrained("alokmatta/wav2vec2-large-xlsr-53-sw").to("cuda")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def load_file_to_data(file):
batch = {}
speech, _ = torchaudio.load(file)
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
return batch
def predict(data):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to("cuda")
attention_mask = features.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
return processor.batch_decode(pred_ids)
predict(load_file_to_data('./demo.wav'))
```
**Test Result**: 40 %
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1_RL6TQv_Yiu_xbWXu4ycbzdCdXCqEQYU?usp=sharing)
|
adresgezgini/wav2vec-tr-lite-AG
|
adresgezgini
| 2021-07-05T18:56:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish by Davut Emre TASAR
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
---
# wav2vec-tr-lite-AG
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec-tr-lite-AG")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec-tr-lite-AG")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
**Test Result**: 27.30 %
More details can be found [here](https://adresgezgini.com).
|
Tommi/wav2vec2-large-xlsr-53-finnish
|
Tommi
| 2021-07-05T17:57:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fi
datasets:
- common_voice
- CSS10
- Finnish parliament session 2
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Finnish XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 35.43
---
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) and [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import numpy as np
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Tommi/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("Tommi/wav2vec2-large-xlsr-53-finnish")
resampler = lambda sr, y: librosa.resample(y.squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array.numpy()).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Tommi/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("Tommi/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\"\%\'\"\�\'\...\…\–\é]'
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 35.43 %
## Training
The Common Voice `train`, `validation`, and `other` datasets were used for training, as well as CSS10 and Finnish parliament session 2.
The script used for training can be found [here](...).
|
Thanish/wav2vec2-large-xlsr-tamil
|
Thanish
| 2021-07-05T17:50:46Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ta
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: thanish wav2vec2-large-xlsr-tamil
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 100.00
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 100.00 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1PC2SjxpcWMQ2qmRw21NbP38wtQQUa5os#scrollTo=YKBZdqqJG9Tv).
|
Srulikbdd/Wav2Vec2-large-xlsr-welsh
|
Srulikbdd
| 2021-07-05T17:38:11Z | 212 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sv",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: cy
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Welsh by Srulik Ben David
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cy
type: common_voice
args: cy
metrics:
- name: Test WER
type: wer
value: 29.4
---
# Wav2Vec2-Large-XLSR-Welsh
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Welsh using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
The data was augmented using a standard augmentation approach (a sketch is shown below).
When using this model, make sure that your speech input is sampled at 16kHz.
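The exact augmentation pipeline is not documented. A common recipe for this kind of fine-tuning perturbs speed and pitch; the sketch below shows one such approach with torchaudio's SoX effects (an assumption, not necessarily what was used here):
```python
# Hedged sketch: speed/pitch perturbation as a stand-in for the unspecified
# "standard augmentation approach" mentioned above.
import torchaudio

def augment(waveform, sample_rate):
    effects = [
        ["speed", "0.9"],            # slow down by 10%
        ["pitch", "50"],             # shift pitch up by 50 cents
        ["rate", str(sample_rate)],  # restore the original sampling rate
    ]
    augmented, _ = torchaudio.sox_effects.apply_effects_tensor(waveform, sample_rate, effects)
    return augmented
```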
**Test Result**: 29.4 %
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cy", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Srulikbdd/Wav2vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("Srulikbdd/Wav2vec2-large-xlsr-welsh")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Welsh test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cy", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Srulikbdd/Wav2Vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("Srulikbdd/Wav2Vec2-large-xlsr-welsh")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
|
RuudVelo/XLSR-Wav2Vec2-Maltese-1
|
RuudVelo
| 2021-07-05T17:21:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mt",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: mt
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Maltese by RuudVelo
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mt
type: common_voice
args: mt
metrics:
- name: Test WER
type: wer
value: 30.0
---
# XLSR-Wav2Vec2-Maltese-1
## Evaluation on Common Voice Maltese Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "RuudVelo/XLSR-Wav2Vec2-Maltese-1"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "mt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 30.0 %
|
Rubens/Wav2Vec2-Large-XLSR-53-Portuguese
|
Rubens
| 2021-07-05T17:09:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: Rubens XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 20.41
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Rubens/Wav2Vec2-Large-XLSR-53-Portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Rubens/Wav2Vec2-Large-XLSR-53-Portuguese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
\tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Rubens/Wav2Vec2-Large-XLSR-53-Portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Rubens/Wav2Vec2-Large-XLSR-53-Portuguese")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]' # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
\tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def evaluate(batch):
\tinputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
\twith torch.no_grad():
\t\tlogits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
\tbatch["pred_strings"] = processor.batch_decode(pred_ids)
\treturn batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 20.41 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/RubensZimbres/wav2vec2/blob/main/fine-tuning.py
|
Nhut/wav2vec2-large-xlsr-french
|
Nhut
| 2021-07-05T16:25:03Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-French by Nhut DOAN NGUYEN
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fr
type: common_voice
args: fr
metrics:
- name: Test WER
type: wer
value: xx.xx
---
# wav2vec2-large-xlsr-53-french
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in French using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fr", split="test[:20%]")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the French test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fr")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\â€\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.31 %
## Training
V1 of the Common Voice `train`, `validation` datasets were used for training.
## Testing
20% of V6.1 of the Common Voice `Test` dataset were used for training.
|
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish
|
MehdiHosseiniMoghadam
| 2021-07-05T16:13:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: sv-SE
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-Swedish by Mehdi Hosseini Moghadam
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 41.388337
---
# wav2vec2-large-xlsr-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 41.388337 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
|
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian
|
MehdiHosseiniMoghadam
| 2021-07-05T16:05:44Z | 49 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ka",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: ka
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-Georgian by Mehdi Hosseini Moghadam
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ka
type: common_voice
args: ka
metrics:
- name: Test WER
type: wer
value: 60.504024
---
# wav2vec2-large-xlsr-53-Georgian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 60.504024 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
|
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French
|
MehdiHosseiniMoghadam
| 2021-07-05T15:56:43Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-French by Mehdi Hosseini Moghadam
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fr
type: common_voice
args: fr
metrics:
- name: Test WER
type: wer
value: 34.856015
---
# wav2vec2-large-xlsr-53-French
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in French using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the French test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fr", split="test[:10%]")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.856015 %
## Training
10% of the Common Voice `train`, `validation` datasets were used for training.
## Testing
10% of the Common Voice `Test` dataset were used for training.
|
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech
|
MehdiHosseiniMoghadam
| 2021-07-05T15:42:45Z | 73 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"cs",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: cs
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-Czech by Mehdi Hosseini Moghadam
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cs
type: common_voice
args: cs
metrics:
- name: Test WER
type: wer
value: 27.047806
---
# wav2vec2-large-xlsr-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Czech using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.047806 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
|
edixo/road_good_damaged_condition
|
edixo
| 2021-07-05T14:43:15Z | 84 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: road_good_damaged_condition
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9583333134651184
---
# road_good_damaged_condition
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### damaged road

#### good road

|
KBLab/wav2vec2-base-voxpopuli-sv-swedish
|
KBLab
| 2021-07-05T14:29:11Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"voxpopuli",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
#- cer
tags:
- audio
- automatic-speech-recognition
- speech
- voxpopuli
license: cc-by-nc-4.0
model-index:
- name: Wav2vec 2.0 base VoxPopuli-sv swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: NST Swedish ASR Database
metrics:
- name: Test WER
type: wer
value: 5.619804368919309
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 19.145252414798616
---
# Wav2vec 2.0 base-voxpopuli-sv-swedish
Finetuned version of Facebooks [VoxPopuli-sv base](https://huggingface.co/facebook/wav2vec2-base-sv-voxpopuli) model using NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is **5.62%**, WER for Common Voice test set is **19.15%**.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-base-voxpopuli-sv-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-base-voxpopuli-sv-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.