pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (listlengths 0-201) | languages (listlengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (listlengths 0-722) | processed_texts (listlengths 1-723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
image-segmentation
|
transformers
|
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-large-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
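The `result` dict also carries per-segment metadata. The following is a minimal illustrative sketch (not part of the original model card; it assumes `matplotlib` is installed) that maps each predicted segment to a class name via the model's `id2label` config and displays the panoptic map:
```python
import matplotlib.pyplot as plt  # assumed extra dependency, only needed for display

# each entry in result["segments_info"] describes one predicted segment
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"segment {segment['id']}: {label} (score {segment['score']:.3f})")

# the panoptic map is a 2D tensor of segment ids with the same size as the input image
plt.imshow(predicted_panoptic_map.numpy())
plt.axis("off")
plt.show()
```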
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
|
{"license": "other", "tags": ["vision", "image-segmentation"], "datasets": ["coco"], "widget": [{"src": "http://images.cocodataset.org/val2017/000000039769.jpg", "example_title": "Cats"}, {"src": "http://images.cocodataset.org/val2017/000000039770.jpg", "example_title": "Castle"}]}
|
facebook/maskformer-swin-large-coco
| null |
[
"transformers",
"pytorch",
"safetensors",
"maskformer",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.06278"
] |
[] |
TAGS
#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-coco #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us
|
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository.
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.
!model image
## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the model hub to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
For more code examples, we refer to the documentation.
|
[
"# MaskFormer\n\nMaskFormer model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
[
"TAGS\n#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-coco #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us \n",
"# MaskFormer\n\nMaskFormer model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
image-segmentation
|
transformers
|
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
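As an optional follow-up (not part of the original card), you can list which ADE20k classes occur in the predicted map using the model's `id2label` mapping:
```python
import torch

# predicted_semantic_map is a 2D tensor of ADE20k class indices
present_ids = torch.unique(predicted_semantic_map).tolist()
print([model.config.id2label[int(i)] for i in present_ids])
```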
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
|
{"license": "other", "tags": ["vision", "image-segmentation"], "datasets": ["scene_parse_150"], "widget": [{"src": "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg", "example_title": "House"}, {"src": "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg", "example_title": "Castle"}]}
|
facebook/maskformer-swin-small-ade
| null |
[
"transformers",
"pytorch",
"safetensors",
"maskformer",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.06278"
] |
[] |
TAGS
#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-scene_parse_150 #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us
|
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository.
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.
!model image
## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the model hub to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
For more code examples, we refer to the documentation.
|
[
"# MaskFormer\n\nMaskFormer model trained on ADE20k semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
[
"TAGS\n#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-scene_parse_150 #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us \n",
"# MaskFormer\n\nMaskFormer model trained on ADE20k semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
image-segmentation
|
transformers
|
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
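As a small optional addition (not from the original card), the forward pass can be wrapped in `torch.no_grad()` to avoid tracking gradients at inference time:
```python
import torch

# disable gradient tracking for inference to reduce memory usage
with torch.no_grad():
    outputs = model(**inputs)
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
predicted_panoptic_map = result["segmentation"]
```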
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
|
{"license": "other", "tags": ["vision", "image-segmentation"], "datasets": ["coco"], "widget": [{"src": "http://images.cocodataset.org/val2017/000000039769.jpg", "example_title": "Cats"}, {"src": "http://images.cocodataset.org/val2017/000000039770.jpg", "example_title": "Castle"}]}
|
facebook/maskformer-swin-small-coco
| null |
[
"transformers",
"pytorch",
"safetensors",
"maskformer",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.06278"
] |
[] |
TAGS
#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-coco #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us
|
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository.
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.
!model image
## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the model hub to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
For more code examples, we refer to the documentation.
|
[
"# MaskFormer\n\nMaskFormer model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
[
"TAGS\n#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-coco #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us \n",
"# MaskFormer\n\nMaskFormer model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
image-segmentation
|
transformers
|
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
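As an illustrative extra step (not in the original card), the predicted label map can be saved as a PNG; ADE20k has 150 classes, so the class indices fit into 8 bits. The file name below is arbitrary:
```python
import numpy as np

# ADE20k class indices lie in [0, 149], so uint8 is sufficient
label_map = predicted_semantic_map.numpy().astype(np.uint8)
Image.fromarray(label_map).save("ade20k_prediction.png")
```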
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
|
{"license": "other", "tags": ["vision", "image-segmentation"], "datasets": ["scene_parse_150"], "widget": [{"src": "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg", "example_title": "House"}, {"src": "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg", "example_title": "Castle"}]}
|
facebook/maskformer-swin-tiny-ade
| null |
[
"transformers",
"pytorch",
"safetensors",
"maskformer",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.06278"
] |
[] |
TAGS
#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-scene_parse_150 #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us
|
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository.
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.
!model image
## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the model hub to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
For more code examples, we refer to the documentation.
|
[
"# MaskFormer\n\nMaskFormer model trained on ADE20k semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
[
"TAGS\n#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-scene_parse_150 #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us \n",
"# MaskFormer\n\nMaskFormer model trained on ADE20k semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
image-segmentation
|
transformers
|
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
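As a brief illustrative addition (not from the original card), the per-segment metadata returned alongside the map can be summarized by class name:
```python
from collections import Counter

# count predicted segments per COCO class using the model's id2label mapping
class_counts = Counter(model.config.id2label[s["label_id"]] for s in result["segments_info"])
print(class_counts.most_common())
```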
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
|
{"license": "other", "tags": ["vision", "image-segmentation"], "datasets": ["coco"], "widget": [{"src": "http://images.cocodataset.org/val2017/000000039769.jpg", "example_title": "Cats"}, {"src": "http://images.cocodataset.org/val2017/000000039770.jpg", "example_title": "Castle"}]}
|
facebook/maskformer-swin-tiny-coco
| null |
[
"transformers",
"pytorch",
"safetensors",
"maskformer",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.06278"
] |
[] |
TAGS
#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-coco #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us
|
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository.
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.
!model image
## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the model hub to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
For more code examples, we refer to the documentation.
|
[
"# MaskFormer\n\nMaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
[
"TAGS\n#transformers #pytorch #safetensors #maskformer #vision #image-segmentation #dataset-coco #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us \n",
"# MaskFormer\n\nMaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image",
"## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation."
] |
translation
|
transformers
|
# mBART-50 many to many multilingual machine translation
This model is a fine-tuned checkpoint of [mBART-large-50](https://huggingface.co/facebook/mbart-large-50). `mbart-large-50-many-to-many-mmt` is fine-tuned for multilingual machine translation. It was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401).
The model can translate directly between any pair of the 50 languages. To translate into a target language, the target language id must be the first generated token; force this by passing the `forced_bos_token_id` parameter to the `generate` method.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
# translate Hindi to French
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(
**encoded_hi,
forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire dans la Syrie."
# translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(
**encoded_ar,
forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."
```
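As an illustrative extension (not part of the original card), the same Hindi input can be translated into several target languages by changing `forced_bos_token_id`:
```python
# reuse the already-encoded Hindi article and loop over target language codes
for target_lang in ["fr_XX", "de_DE", "ja_XX"]:
    generated_tokens = model.generate(
        **encoded_hi,
        forced_bos_token_id=tokenizer.lang_code_to_id[target_lang],
    )
    print(target_lang, tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0])
```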
See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for more fine-tuned versions.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
```
@article{tang2020multilingual,
title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
year={2020},
eprint={2008.00401},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["multilingual", "ar", "cs", "de", "en", "es", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "af", "az", "bn", "fa", "he", "hr", "id", "ka", "km", "mk", "ml", "mn", "mr", "pl", "ps", "pt", "sv", "sw", "ta", "te", "th", "tl", "uk", "ur", "xh", "gl", "sl"], "tags": ["mbart-50"], "pipeline_tag": "translation"}
|
facebook/mbart-large-50-many-to-many-mmt
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"mbart",
"text2text-generation",
"mbart-50",
"translation",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl",
"arxiv:2008.00401",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2008.00401"
] |
[
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl"
] |
TAGS
#transformers #pytorch #tf #jax #rust #safetensors #mbart #text2text-generation #mbart-50 #translation #multilingual #ar #cs #de #en #es #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #af #az #bn #fa #he #hr #id #ka #km #mk #ml #mn #mr #pl #ps #pt #sv #sw #ta #te #th #tl #uk #ur #xh #gl #sl #arxiv-2008.00401 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# mBART-50 many to many multilingual machine translation
This model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-many-to-many-mmt' is fine-tuned for multilingual machine translation. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.
The model can translate directly between any pair of 50 languages. To translate into a target language, the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.
See the model hub to look for more fine-tuned versions.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
|
[
"# mBART-50 many to many multilingual machine translation\n\n\nThis model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-many-to-many-mmt' is fine-tuned for multilingual machine translation. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.\n\n\nThe model can translate directly between any pair of 50 languages. To translate into a target language, the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.\n\n\n\n\n\nSee the model hub to look for more fine-tuned versions.",
"## Languages covered\nArabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #mbart #text2text-generation #mbart-50 #translation #multilingual #ar #cs #de #en #es #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #af #az #bn #fa #he #hr #id #ka #km #mk #ml #mn #mr #pl #ps #pt #sv #sw #ta #te #th #tl #uk #ur #xh #gl #sl #arxiv-2008.00401 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# mBART-50 many to many multilingual machine translation\n\n\nThis model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-many-to-many-mmt' is fine-tuned for multilingual machine translation. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.\n\n\nThe model can translate directly between any pair of 50 languages. To translate into a target language, the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.\n\n\n\n\n\nSee the model hub to look for more fine-tuned versions.",
"## Languages covered\nArabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)",
"## BibTeX entry and citation info"
] |
text2text-generation
|
transformers
|
# mBART-50 many to one multilingual machine translation
This model is a fine-tuned checkpoint of [mBART-large-50](https://huggingface.co/facebook/mbart-large-50). `mbart-large-50-many-to-one-mmt` is fine-tuned for multilingual machine translation. It was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401).
The model can translate directly from any of the other 49 languages to English.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
# translate Hindi to English
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The head of the UN says there is no military solution in Syria."
# translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(**encoded_ar)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."
```
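As a small illustrative addition (not from the original card), standard `generate()` arguments such as `num_beams` and `max_length` can be used to control decoding:
```python
# beam search with a cap on the total output length
generated_tokens = model.generate(**encoded_ar, num_beams=5, max_length=64)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0])
```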
See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for more fine-tuned versions.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
```
@article{tang2020multilingual,
title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
year={2020},
eprint={2008.00401},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["multilingual", "ar", "cs", "de", "en", "es", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "af", "az", "bn", "fa", "he", "hr", "id", "ka", "km", "mk", "ml", "mn", "mr", "pl", "ps", "pt", "sv", "sw", "ta", "te", "th", "tl", "uk", "ur", "xh", "gl", "sl"], "tags": ["mbart-50"]}
|
facebook/mbart-large-50-many-to-one-mmt
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"mbart",
"text2text-generation",
"mbart-50",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl",
"arxiv:2008.00401",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2008.00401"
] |
[
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl"
] |
TAGS
#transformers #pytorch #tf #jax #mbart #text2text-generation #mbart-50 #multilingual #ar #cs #de #en #es #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #af #az #bn #fa #he #hr #id #ka #km #mk #ml #mn #mr #pl #ps #pt #sv #sw #ta #te #th #tl #uk #ur #xh #gl #sl #arxiv-2008.00401 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# mBART-50 many to one multilingual machine translation
This model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-many-to-one-mmt' is fine-tuned for multilingual machine translation. It was introduced in the Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.
The model can translate directly from any of the other 49 languages to English.
See the model hub to look for more fine-tuned versions.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
|
[
"# mBART-50 many to one multilingual machine translation\n\n\nThis model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-many-to-many-mmt' is fine-tuned for multilingual machine translation. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.\nThe model can translate directly between any pair of 50 languages.\n\n\n\n\n\nSee the model hub to look for more fine-tuned versions.",
"## Languages covered\nArabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #mbart #text2text-generation #mbart-50 #multilingual #ar #cs #de #en #es #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #af #az #bn #fa #he #hr #id #ka #km #mk #ml #mn #mr #pl #ps #pt #sv #sw #ta #te #th #tl #uk #ur #xh #gl #sl #arxiv-2008.00401 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# mBART-50 many to one multilingual machine translation\n\n\nThis model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-many-to-many-mmt' is fine-tuned for multilingual machine translation. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.\nThe model can translate directly between any pair of 50 languages.\n\n\n\n\n\nSee the model hub to look for more fine-tuned versions.",
"## Languages covered\nArabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)",
"## BibTeX entry and citation info"
] |
text2text-generation
|
transformers
|
# mBART-50 one to many multilingual machine translation
This model is a fine-tuned checkpoint of [mBART-large-50](https://huggingface.co/facebook/mbart-large-50). `mbart-large-50-one-to-many-mmt` is fine-tuned for multilingual machine translation. It was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401).
The model translates English into the other 49 languages listed below.
To translate into a target language, the target language id must be the first generated token; force this by passing the `forced_bos_token_id` parameter to the `generate` method.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_en = "The head of the United Nations says there is no military solution in Syria"
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX")
model_inputs = tokenizer(article_en, return_tensors="pt")
# translate from English to Hindi
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => 'संयुक्त राष्ट्र के नेता कहते हैं कि सीरिया में कोई सैन्य समाधान नहीं है'
# translate from English to Chinese
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => '联合国首脑说,叙利亚没有军事解决办法'
```
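As an optional sketch (not part of the original card), several English sentences can be translated in one batch; the second sentence below is made up purely for illustration:
```python
# batch two English sentences and translate both to Hindi in a single generate call
batch = [article_en, "The weather in Paris is sunny today."]
batch_inputs = tokenizer(batch, return_tensors="pt", padding=True)
generated_tokens = model.generate(
    **batch_inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"],
)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```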
See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for more fine-tuned versions.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
```
@article{tang2020multilingual,
title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
year={2020},
eprint={2008.00401},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["multilingual", "ar", "cs", "de", "en", "es", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "af", "az", "bn", "fa", "he", "hr", "id", "ka", "km", "mk", "ml", "mn", "mr", "pl", "ps", "pt", "sv", "sw", "ta", "te", "th", "tl", "uk", "ur", "xh", "gl", "sl"], "tags": ["mbart-50"]}
|
facebook/mbart-large-50-one-to-many-mmt
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"mbart",
"text2text-generation",
"mbart-50",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl",
"arxiv:2008.00401",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2008.00401"
] |
[
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl"
] |
TAGS
#transformers #pytorch #tf #jax #mbart #text2text-generation #mbart-50 #multilingual #ar #cs #de #en #es #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #af #az #bn #fa #he #hr #id #ka #km #mk #ml #mn #mr #pl #ps #pt #sv #sw #ta #te #th #tl #uk #ur #xh #gl #sl #arxiv-2008.00401 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# mBART-50 one to many multilingual machine translation
This model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-one-to-many-mmt' is fine-tuned for multilingual machine translation. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.
The model can translate English to other 49 languages mentioned below.
To translate into a target language, the target language id is forced as the first generated token. To force the
target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.
See the model hub to look for more fine-tuned versions.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
|
[
"# mBART-50 one to many multilingual machine translation\n\n\nThis model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-one-to-many-mmt' is fine-tuned for multilingual machine translation. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.\n\n\nThe model can translate English to other 49 languages mentioned below. \nTo translate into a target language, the target language id is forced as the first generated token. To force the\ntarget language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.\n\n\n\nSee the model hub to look for more fine-tuned versions.",
"## Languages covered\nArabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #mbart #text2text-generation #mbart-50 #multilingual #ar #cs #de #en #es #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #af #az #bn #fa #he #hr #id #ka #km #mk #ml #mn #mr #pl #ps #pt #sv #sw #ta #te #th #tl #uk #ur #xh #gl #sl #arxiv-2008.00401 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# mBART-50 one to many multilingual machine translation\n\n\nThis model is a fine-tuned checkpoint of mBART-large-50. 'mbart-large-50-one-to-many-mmt' is fine-tuned for multilingual machine translation. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.\n\n\nThe model can translate English to other 49 languages mentioned below. \nTo translate into a target language, the target language id is forced as the first generated token. To force the\ntarget language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.\n\n\n\nSee the model hub to look for more fine-tuned versions.",
"## Languages covered\nArabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)",
"## BibTeX entry and citation info"
] |
text2text-generation
|
transformers
|
# mBART-50
mBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper.
## Model description
mBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning.
Instead of fine-tuning on one direction, a pre-trained model is fine-tuned on many directions simultaneously. mBART-50 was created by extending the original mBART model with an extra 25 languages, so that it supports multilingual machine translation across 50 languages. The pre-training objective is explained below.
**Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data:
`D = {D1, ..., DN }` where each Di is a collection of monolingual documents in language `i`. The source documents are noised using two schemes,
first randomly shuffling the original sentences' order, and second a novel in-filling scheme,
where spans of text are replaced with a single mask token. The model is then tasked to reconstruct the original text.
35% of each instance's words are masked by randomly sampling span lengths from a Poisson distribution `(λ = 3.5)`.
The decoder input is the original text offset by one position. A language id symbol `LID` is used as the initial token to predict the sentence.
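To make the in-filling scheme concrete, here is a minimal, illustrative sketch of the span-masking noise (this is *not* the original fairseq implementation; the function name and the toy sentence are assumptions made purely for the example):
```python
import random
import numpy as np

def infill_noise(tokens, mask_token="<mask>", mask_ratio=0.35, poisson_lambda=3.5):
    """Illustrative only: mask ~35% of the tokens by replacing whole spans
    (span lengths ~ Poisson(3.5)) with a single mask token each."""
    tokens = list(tokens)
    num_to_mask = int(round(len(tokens) * mask_ratio))
    masked = 0
    while masked < num_to_mask and len(tokens) > 1:
        span = max(1, np.random.poisson(poisson_lambda))
        span = min(span, num_to_mask - masked, len(tokens) - 1)
        start = random.randrange(0, len(tokens) - span + 1)
        tokens[start:start + span] = [mask_token]  # the whole span becomes one mask token
        masked += span
    return tokens

# The other noise scheme (shuffling the sentence order of a document) is omitted for brevity.
print(infill_noise("UN Chief says there is no military solution in Syria .".split()))
```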
## Intended uses & limitations
`mbart-large-50` is a pre-trained model, primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks.
See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for fine-tuned versions.
## Training
As the model is multilingual, it expects the sequences in a different format. A special language id token is used as a prefix in both the source and target text. The text format is `[lang_code] X [eos]`, where `X` is the source or target text and `lang_code` is `source_lang_code` for source text and `tgt_lang_code` for target text. `bos` is never used. Once the examples are prepared in this format, it can be trained as any other sequence-to-sequence model.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, return_tensors="pt")
with tokenizer.as_target_tokenizer():
labels = tokenizer(tgt_text, return_tensors="pt").input_ids
model(**model_inputs, labels=labels) # forward pass
```
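As a follow-up, here is a minimal sketch of wrapping the forward pass above in a single optimization step (plain PyTorch; this is not the original fine-tuning recipe, and the learning rate is only a placeholder):
```python
import torch

# one optimization step on the example above
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

loss = model(**model_inputs, labels=labels).loss  # same forward pass as above
loss.backward()
optimizer.step()
optimizer.zero_grad()
```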
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
```
@article{tang2020multilingual,
title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
year={2020},
eprint={2008.00401},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["multilingual", "ar", "cs", "de", "en", "es", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "af", "az", "bn", "fa", "he", "hr", "id", "ka", "km", "mk", "ml", "mn", "mr", "pl", "ps", "pt", "sv", "sw", "ta", "te", "th", "tl", "uk", "ur", "xh", "gl", "sl"], "license": "mit", "tags": ["mbart-50"]}
|
facebook/mbart-large-50
| null |
[
"transformers",
"pytorch",
"tf",
"mbart",
"text2text-generation",
"mbart-50",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl",
"arxiv:2008.00401",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2008.00401"
] |
[
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl"
] |
TAGS
#transformers #pytorch #tf #mbart #text2text-generation #mbart-50 #multilingual #ar #cs #de #en #es #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #af #az #bn #fa #he #hr #id #ka #km #mk #ml #mn #mr #pl #ps #pt #sv #sw #ta #te #th #tl #uk #ur #xh #gl #sl #arxiv-2008.00401 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# mBART-50
mBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in the Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.
## Model description
mBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning.
Instead of fine-tuning on one direction, a pre-trained model is fine-tuned on many directions simultaneously. mBART-50 was created by extending the original mBART model with an extra 25 languages, so that it supports multilingual machine translation across 50 languages. The pre-training objective is explained below.
Multilingual Denoising Pretraining: The model incorporates N languages by concatenating data:
'D = {D1, ..., DN }' where each Di is a collection of monolingual documents in language 'i'. The source documents are noised using two schemes,
first randomly shuffling the original sentences' order, and second a novel in-filling scheme,
where spans of text are replaced with a single mask token. The model is then tasked to reconstruct the original text.
35% of each instance's words are masked by randomly sampling span lengths from a Poisson distribution '(λ = 3.5)'.
The decoder input is the original text offset by one position. A language id symbol 'LID' is used as the initial token to predict the sentence.
## Intended uses & limitations
'mbart-large-50' is a pre-trained model, primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks.
See the model hub to look for fine-tuned versions.
## Training
As the model is multilingual, it expects the sequences in a different format. A special language id token is used as a prefix in both the source and target text. The text format is '[lang_code] X [eos]', where 'X' is the source or target text and 'lang_code' is 'source_lang_code' for source text and 'tgt_lang_code' for target text. 'bos' is never used. Once the examples are prepared in this format, it can be trained as any other sequence-to-sequence model.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
|
[
"# mBART-50\n\nmBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the \"Multilingual Denoising Pretraining\" objective. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.",
"## Model description\n\nmBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning. \nInstead of fine-tuning on one direction, a pre-trained model is fine-tuned on many directions simultaneously. mBART-50 is created using the original mBART model and extended to add extra 25 languages to support multilingual machine translation models of 50 languages. The pre-training objective is explained below.\n\nMultilingual Denoising Pretraining: The model incorporates N languages by concatenating data: \n'D = {D1, ..., DN }' where each Di is a collection of monolingual documents in language 'i'. The source documents are noised using two schemes, \nfirst randomly shuffling the original sentences' order, and second a novel in-filling scheme, \nwhere spans of text are replaced with a single mask token. The model is then tasked to reconstruct the original text. \n35% of each instance's words are masked by random sampling a span length according to a Poisson distribution '(λ = 3.5)'.\nThe decoder input is the original text with one position offset. A language id symbol 'LID' is used as the initial token to predict the sentence.",
"## Intended uses & limitations\n\n'mbart-large-50' is pre-trained model and primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks. \nSee the model hub to look for fine-tuned versions.",
"## Training\n\nAs the model is multilingual, it expects the sequences in a different format. A special language id token is used as a prefix in both the source and target text. The text format is '[lang_code] X [eos]' with 'X' being the source or target text respectively and 'lang_code' is 'source_lang_code' for source text and 'tgt_lang_code' for target text. 'bos' is never used. Once the examples are prepared in this format, it can be trained as any other sequence-to-sequence model.",
"## Languages covered\nArabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #mbart #text2text-generation #mbart-50 #multilingual #ar #cs #de #en #es #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #af #az #bn #fa #he #hr #id #ka #km #mk #ml #mn #mr #pl #ps #pt #sv #sw #ta #te #th #tl #uk #ur #xh #gl #sl #arxiv-2008.00401 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# mBART-50\n\nmBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the \"Multilingual Denoising Pretraining\" objective. It was introduced in Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper.",
"## Model description\n\nmBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning. \nInstead of fine-tuning on one direction, a pre-trained model is fine-tuned on many directions simultaneously. mBART-50 is created using the original mBART model and extended to add extra 25 languages to support multilingual machine translation models of 50 languages. The pre-training objective is explained below.\n\nMultilingual Denoising Pretraining: The model incorporates N languages by concatenating data: \n'D = {D1, ..., DN }' where each Di is a collection of monolingual documents in language 'i'. The source documents are noised using two schemes, \nfirst randomly shuffling the original sentences' order, and second a novel in-filling scheme, \nwhere spans of text are replaced with a single mask token. The model is then tasked to reconstruct the original text. \n35% of each instance's words are masked by random sampling a span length according to a Poisson distribution '(λ = 3.5)'.\nThe decoder input is the original text with one position offset. A language id symbol 'LID' is used as the initial token to predict the sentence.",
"## Intended uses & limitations\n\n'mbart-large-50' is pre-trained model and primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks. \nSee the model hub to look for fine-tuned versions.",
"## Training\n\nAs the model is multilingual, it expects the sequences in a different format. A special language id token is used as a prefix in both the source and target text. The text format is '[lang_code] X [eos]' with 'X' being the source or target text respectively and 'lang_code' is 'source_lang_code' for source text and 'tgt_lang_code' for target text. 'bos' is never used. Once the examples are prepared in this format, it can be trained as any other sequence-to-sequence model.",
"## Languages covered\nArabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)",
"## BibTeX entry and citation info"
] |
translation
|
transformers
|
#### mbart-large-cc25
Pretrained (not finetuned) multilingual mbart model.
Original Languages
```
export langs=ar_AR,cs_CZ,de_DE,en_XX,es_XX,et_EE,fi_FI,fr_XX,gu_IN,hi_IN,it_IT,ja_XX,kk_KZ,ko_KR,lt_LT,lv_LV,my_MM,ne_NP,nl_XX,ro_RO,ru_RU,si_LK,tr_TR,vi_VN,zh_CN
```
Original Code: https://github.com/pytorch/fairseq/tree/master/examples/mbart
Docs: https://huggingface.co/transformers/master/model_doc/mbart.html
Finetuning Code: examples/seq2seq/finetune.py (as of Aug 20, 2020)
Can also be finetuned for summarization.
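A minimal fine-tuning sketch using the standard `transformers` MBart classes (the sentence pair is only an example; data loading and hyperparameters are left out):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="ro_RO")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

src_text = "UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"

inputs = tokenizer(src_text, return_tensors="pt")
with tokenizer.as_target_tokenizer():
    labels = tokenizer(tgt_text, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # forward pass for fine-tuning
```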
|
{"language": ["en", "ar", "cs", "de", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "multilingual"], "tags": ["translation"]}
|
facebook/mbart-large-cc25
| null |
[
"transformers",
"pytorch",
"tf",
"mbart",
"text2text-generation",
"translation",
"en",
"ar",
"cs",
"de",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"multilingual",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"ar",
"cs",
"de",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"multilingual"
] |
TAGS
#transformers #pytorch #tf #mbart #text2text-generation #translation #en #ar #cs #de #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #multilingual #autotrain_compatible #endpoints_compatible #has_space #region-us
|
#### mbart-large-cc25
Pretrained (not finetuned) multilingual mbart model.
Original Languages
Original Code: URL
Docs: URL
Finetuning Code: examples/seq2seq/URL (as of Aug 20, 2020)
Can also be finetuned for summarization.
|
[
"#### mbart-large-cc25\n\nPretrained (not finetuned) multilingual mbart model.\nOriginal Languages\n\n\nOriginal Code: URL\nDocs: URL\nFinetuning Code: examples/seq2seq/URL (as of Aug 20, 2020)\n\nCan also be finetuned for summarization."
] |
[
"TAGS\n#transformers #pytorch #tf #mbart #text2text-generation #translation #en #ar #cs #de #et #fi #fr #gu #hi #it #ja #kk #ko #lt #lv #my #ne #nl #ro #ru #si #tr #vi #zh #multilingual #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### mbart-large-cc25\n\nPretrained (not finetuned) multilingual mbart model.\nOriginal Languages\n\n\nOriginal Code: URL\nDocs: URL\nFinetuning Code: examples/seq2seq/URL (as of Aug 20, 2020)\n\nCan also be finetuned for summarization."
] |
translation
|
transformers
|
### mbart-large-en-ro
This is mbart-large-cc25, finetuned on wmt_en_ro.
It scores BLEU 28.1 without post-processing and BLEU 38 with post-processing. Instructions are in `romanian_postprocessing.md`.
Original Code: https://github.com/pytorch/fairseq/tree/master/examples/mbart
Docs: https://huggingface.co/transformers/master/model_doc/mbart.html
Finetuning Code: examples/seq2seq/finetune.py (as of Aug 20, 2020)
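A minimal translation sketch (assuming the standard `transformers` MBart API; outputs may differ from the post-processed BLEU setup described above):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

article = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(article, return_tensors="pt")
translated = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```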
|
{"language": ["en", "ro"], "license": "mit", "tags": ["translation"]}
|
facebook/mbart-large-en-ro
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"mbart",
"translation",
"en",
"ro",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"ro"
] |
TAGS
#transformers #pytorch #tf #safetensors #mbart #translation #en #ro #license-mit #endpoints_compatible #has_space #region-us
|
### mbart-large-en-ro
This is mbart-large-cc25, finetuned on wmt_en_ro.
It scores BLEU 28.1 without post-processing and BLEU 38 with post-processing. Instructions are in 'romanian_postprocessing.md'.
Original Code: URL
Docs: URL
Finetuning Code: examples/seq2seq/URL (as of Aug 20, 2020)
|
[
"### mbart-large-en-ro\nThis is mbart-large-cc25, finetuned on wmt_en_ro.\n\nIt scores BLEU 28.1 without post processing and BLEU 38 with postprocessing. Instructions in 'romanian_postprocessing.md'\n\nOriginal Code: URL\n\nDocs: URL\n\nFinetuning Code: examples/seq2seq/URL (as of Aug 20, 2020)"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #mbart #translation #en #ro #license-mit #endpoints_compatible #has_space #region-us \n",
"### mbart-large-en-ro\nThis is mbart-large-cc25, finetuned on wmt_en_ro.\n\nIt scores BLEU 28.1 without post processing and BLEU 38 with postprocessing. Instructions in 'romanian_postprocessing.md'\n\nOriginal Code: URL\n\nDocs: URL\n\nFinetuning Code: examples/seq2seq/URL (as of Aug 20, 2020)"
] |
fill-mask
|
transformers
|
# Muppet: Massive Multi-task Representations with Pre-Finetuning
# RoBERTa base model
This is a Massive Multi-task Pre-finetuned version of Roberta base. It was introduced in
[this paper](https://arxiv.org/abs/2101.11038). The model improves over roberta-base in a wide range of GLUE and QA tasks (details can be found in the paper). The gains on
smaller datasets are significant.
Note: This checkpoint does not contain the classification/MRC heads used during pre-finetuning due to compatibility issues, so you might get slightly lower performance than reported in the paper on some datasets.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
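For illustration, a minimal masked-language-modeling call with the `pipeline` API might look like this (the example sentence is arbitrary):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="facebook/muppet-roberta-base")
print(unmasker("Hello I'm a <mask> model."))
```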
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Model | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | SQuAD|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:----:|
| Roberta-base | 87.6 | 91.9 | 92.8 | 94.8 | 63.6 | 91.2 | 90.2 | 78.7 | 82.6|
| MUPPET Roberta-base | 88.1 | 91.9 | 93.3 | 96.7 | - | - | 91.7 | 87.8 | 86.6|
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2101-11038,
author = {Armen Aghajanyan and
Anchit Gupta and
Akshat Shrivastava and
Xilun Chen and
Luke Zettlemoyer and
Sonal Gupta},
title = {Muppet: Massive Multi-task Representations with Pre-Finetuning},
journal = {CoRR},
volume = {abs/2101.11038},
year = {2021},
url = {https://arxiv.org/abs/2101.11038},
archivePrefix = {arXiv},
eprint = {2101.11038},
timestamp = {Sun, 31 Jan 2021 17:23:50 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-11038.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
|
facebook/muppet-roberta-base
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2101.11038",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.11038"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2101.11038 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Muppet: Massive Multi-task Representations with Pre-Finetuning
==============================================================
RoBERTa base model
==================
This is a Massive Multi-task Pre-finetuned version of Roberta base. It was introduced in
this paper. The model improves over roberta-base in a wide range of GLUE and QA tasks (details can be found in the paper). The gains on
smaller datasets are significant.
Note: This checkpoint does not contain the classification/MRC heads used during pre-finetuning due to compatibility issues, so you might get slightly lower performance than reported in the paper on some datasets.
Model description
-----------------
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
### BibTeX entry and citation info
|
[
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2101.11038 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### BibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# Muppet: Massive Multi-task Representations with Pre-Finetuning
# RoBERTa large model
This is a Massive Multi-task Pre-finetuned version of Roberta large. It was introduced in
[this paper](https://arxiv.org/abs/2101.11038). The model improves over roberta-large in a wide range of GLUE and QA tasks (details can be found in the paper). The gains on
smaller datasets are significant.
Note: This checkpoint does not contain the classification/MRC heads used during pre-finetuning due to compatibility issues, so you might get slightly lower performance than reported in the paper on some datasets.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
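As a hedged sketch (the two-label setup and the example sentence are assumptions for illustration only), the checkpoint can be loaded with a freshly initialized classification head for downstream fine-tuning:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/muppet-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained("facebook/muppet-roberta-large", num_labels=2)

inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2); the head is untrained and must be fine-tuned
```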
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Model | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | SQuAD|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:----:|
| Roberta-large | 90.2 | 92.2 | 94.7 | 96.4 | 63.6 | 91.2 | 90.9 | 88.1 | 88.7|
| MUPPET Roberta-large | 90.8 | 92.2 | 94.9 | 97.4 | - | - | 91.4 | 92.8 | 89.4|
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2101-11038,
author = {Armen Aghajanyan and
Anchit Gupta and
Akshat Shrivastava and
Xilun Chen and
Luke Zettlemoyer and
Sonal Gupta},
title = {Muppet: Massive Multi-task Representations with Pre-Finetuning},
journal = {CoRR},
volume = {abs/2101.11038},
year = {2021},
url = {https://arxiv.org/abs/2101.11038},
archivePrefix = {arXiv},
eprint = {2101.11038},
timestamp = {Sun, 31 Jan 2021 17:23:50 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-11038.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
|
facebook/muppet-roberta-large
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2101.11038",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.11038"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2101.11038 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Muppet: Massive Multi-task Representations with Pre-Finetuning
==============================================================
RoBERTa large model
===================
This is a Massive Multi-task Pre-finetuned version of Roberta large. It was introduced in
this paper. The model improves over roberta-large in a wide range of GLUE and QA tasks (details can be found in the paper). The gains on
smaller datasets are significant.
Note: This checkpoint does not contain the classification/MRC heads used during pre-finetuning due to compatibility issues, so you might get slightly lower performance than reported in the paper on some datasets.
Model description
-----------------
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
### BibTeX entry and citation info
|
[
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2101.11038 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### BibTeX entry and citation info"
] |
null |
transformers
|
## RAG
This is a non-finetuned version of the RAG-Sequence model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf)
by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.
RAG consists of a *question encoder*, a *retriever* and a *generator*. The retriever should be a `RagRetriever` instance. The *question encoder* can be any model that can be loaded with `AutoModel` and the *generator* can be any model that can be loaded with `AutoModelForSeq2SeqLM`.
This model is a non-finetuned RAG-Sequence model and was created as follows:
```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration, AutoTokenizer
model = RagSequenceForGeneration.from_pretrained_question_encoder_generator("facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large")
question_encoder_tokenizer = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
generator_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
tokenizer = RagTokenizer(question_encoder_tokenizer, generator_tokenizer)
model.config.use_dummy_dataset = True
model.config.index_name = "exact"
retriever = RagRetriever(model.config, question_encoder_tokenizer, generator_tokenizer)
model.save_pretrained("./")
tokenizer.save_pretrained("./")
retriever.save_pretrained("./")
```
Note that the model is *uncased* so that all capital input letters are converted to lower-case.
## Usage:
*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever,
by setting `config.index_name="legacy"` and `config.use_dummy_dataset=False`.
The model can be fine-tuned as follows:
```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-base")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-base")
model = RagTokenForGeneration.from_pretrained("facebook/rag-sequence-base", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", "michael phelps", return_tensors="pt")
outputs = model(input_dict["input_ids"], labels=input_dict["labels"])
loss = outputs.loss
# train on loss
```
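For completeness, a short sketch of instantiating the full (non-dummy) retriever mentioned in the note above; keep in mind the legacy index is large and needs over 75 GB of memory/disk:
```python
from transformers import RagRetriever

# Assumes the full wiki_dpr legacy index can be downloaded and loaded locally.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-base", index_name="legacy", use_dummy_dataset=False
)
```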
|
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png"}
|
facebook/rag-sequence-base
| null |
[
"transformers",
"pytorch",
"rag",
"arxiv:2005.11401",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2005.11401"
] |
[] |
TAGS
#transformers #pytorch #rag #arxiv-2005.11401 #license-apache-2.0 #endpoints_compatible #region-us
|
## RAG
This is a non-finetuned version of the RAG-Sequence model of the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.
RAG consists of a *question encoder*, a *retriever* and a *generator*. The retriever should be a 'RagRetriever' instance. The *question encoder* can be any model that can be loaded with 'AutoModel' and the *generator* can be any model that can be loaded with 'AutoModelForSeq2SeqLM'.
This model is a non-finetuned RAG-Sequence model and was created as follows:
Note that the model is *uncased* so that all capital input letters are converted to lower-case.
## Usage:
*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever,
by setting 'config.index_name="legacy"' and 'config.use_dummy_dataset=False'.
The model can be fine-tuned as follows:
|
[
"## RAG\n\nThis is a non-finetuned version of the RAG-Sequence model of the the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks \nby Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.\n\nRag consits of a *question encoder*, *retriever* and a *generator*. The retriever should be a 'RagRetriever' instance. The *question encoder* can be any model that can be loaded with 'AutoModel' and the *generator* can be any model that can be loaded with 'AutoModelForSeq2SeqLM'. \n\nThis model is a non-finetuned RAG-Sequence model and was created as follows:\n\n\n\nNote that the model is *uncased* so that all capital input letters are converted to lower-case.",
"## Usage:\n\n*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever, \nby setting 'config.index_name=\"legacy\"' and 'config.use_dummy_dataset=False'.\nThe model can be fine-tuned as follows:"
] |
[
"TAGS\n#transformers #pytorch #rag #arxiv-2005.11401 #license-apache-2.0 #endpoints_compatible #region-us \n",
"## RAG\n\nThis is a non-finetuned version of the RAG-Sequence model of the the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks \nby Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.\n\nRag consits of a *question encoder*, *retriever* and a *generator*. The retriever should be a 'RagRetriever' instance. The *question encoder* can be any model that can be loaded with 'AutoModel' and the *generator* can be any model that can be loaded with 'AutoModelForSeq2SeqLM'. \n\nThis model is a non-finetuned RAG-Sequence model and was created as follows:\n\n\n\nNote that the model is *uncased* so that all capital input letters are converted to lower-case.",
"## Usage:\n\n*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever, \nby setting 'config.index_name=\"legacy\"' and 'config.use_dummy_dataset=False'.\nThe model can be fine-tuned as follows:"
] |
null |
transformers
|
## RAG
This is the RAG-Sequence Model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf)
by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.
The model is an *uncased* model, which means that capital letters are simply converted to lower-case letters.
The model consists of a *question_encoder*, a *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* `train` dataset, which is linked above.
The question_encoder and generator are based on `facebook/dpr-question_encoder-single-nq-base` and `facebook/bart-large`, which were jointly finetuned
on the *wiki_dpr* QA dataset in an end-to-end fashion.
## Usage:
**Note**: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *legacy* index requires over 75 GB of RAM.
The model can generate answers to any factoid question as follows:
```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("how many countries are in europe", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
# should give 54 => google says either 44 or 51
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["wiki_dpr"], "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png"}
|
facebook/rag-sequence-nq
| null |
[
"transformers",
"pytorch",
"tf",
"rag",
"en",
"dataset:wiki_dpr",
"arxiv:2005.11401",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2005.11401"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #rag #en #dataset-wiki_dpr #arxiv-2005.11401 #license-apache-2.0 #endpoints_compatible #region-us
|
## RAG
This is the RAG-Sequence Model of the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.
The model is an *uncased* model, which means that capital letters are simply converted to lower-case letters.
The model consists of a *question_encoder*, a *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* 'train' dataset, which is linked above.
The question_encoder and generator are based on 'facebook/dpr-question_encoder-single-nq-base' and 'facebook/bart-large', which were jointly finetuned
on the *wiki_dpr* QA dataset in an end-to-end fashion.
## Usage:
Note: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *legacy* index requires over 75 GB of RAM.
The model can generate answers to any factoid question as follows:
|
[
"## RAG\n\nThis is the RAG-Sequence Model of the the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks \nby Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.\n\nThe model is a *uncased* model, which means that capital letters are simply converted to lower-case letters.\n\nThe model consits of a *question_encoder*, *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* 'train' datasets, which is linked above.\nThe question_encoder and retriever are based on 'facebook/dpr-question_encoder-single-nq-base' and 'facebook/bart-large', which were jointly finetuned on \non the *wiki_dpr* QA dataset in an end-to-end fashion.",
"## Usage:\n\nNote: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *lecagy* index requires over 75 GB of RAM.\nThe model can generate answers to any factoid question as follows:"
] |
[
"TAGS\n#transformers #pytorch #tf #rag #en #dataset-wiki_dpr #arxiv-2005.11401 #license-apache-2.0 #endpoints_compatible #region-us \n",
"## RAG\n\nThis is the RAG-Sequence Model of the the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks \nby Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.\n\nThe model is a *uncased* model, which means that capital letters are simply converted to lower-case letters.\n\nThe model consits of a *question_encoder*, *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* 'train' datasets, which is linked above.\nThe question_encoder and retriever are based on 'facebook/dpr-question_encoder-single-nq-base' and 'facebook/bart-large', which were jointly finetuned on \non the *wiki_dpr* QA dataset in an end-to-end fashion.",
"## Usage:\n\nNote: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *lecagy* index requires over 75 GB of RAM.\nThe model can generate answers to any factoid question as follows:"
] |
null |
transformers
|
## RAG
This is a non-finetuned version of the RAG-Token model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf)
by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.
RAG consists of a *question encoder*, a *retriever* and a *generator*. The retriever should be a `RagRetriever` instance. The *question encoder* can be any model that can be loaded with `AutoModel` and the *generator* can be any model that can be loaded with `AutoModelForSeq2SeqLM`.
This model is a non-finetuned RAG-Token model and was created as follows:
```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, AutoTokenizer
model = RagTokenForGeneration.from_pretrained_question_encoder_generator("facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large")
question_encoder_tokenizer = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
generator_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
tokenizer = RagTokenizer(question_encoder_tokenizer, generator_tokenizer)
model.config.use_dummy_dataset = True
model.config.index_name = "exact"
retriever = RagRetriever(model.config, question_encoder_tokenizer, generator_tokenizer)
model.save_pretrained("./")
tokenizer.save_pretrained("./")
retriever.save_pretrained("./")
```
Note that the model is *uncased* so that all capital input letters are converted to lower-case.
## Usage:
*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever,
by setting `config.index_name="legacy"` and `config.use_dummy_dataset=False`.
The model can be fine-tuned as follows:
```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained("facebook/rag-token-base")
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", "michael phelps", return_tensors="pt")
outputs = model(input_dict["input_ids"], labels=input_dict["labels"])
loss = outputs.loss
# train on loss
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["wiki_dpr"], "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png"}
|
facebook/rag-token-base
| null |
[
"transformers",
"pytorch",
"rag",
"en",
"dataset:wiki_dpr",
"arxiv:2005.11401",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2005.11401"
] |
[
"en"
] |
TAGS
#transformers #pytorch #rag #en #dataset-wiki_dpr #arxiv-2005.11401 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
## RAG
This is a non-finetuned version of the RAG-Token model of the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.
RAG consists of a *question encoder*, a *retriever* and a *generator*. The retriever should be a 'RagRetriever' instance. The *question encoder* can be any model that can be loaded with 'AutoModel' and the *generator* can be any model that can be loaded with 'AutoModelForSeq2SeqLM'.
This model is a non-finetuned RAG-Token model and was created as follows:
Note that the model is *uncased* so that all capital input letters are converted to lower-case.
## Usage:
*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever,
by setting 'config.index_name="legacy"' and 'config.use_dummy_dataset=False'.
The model can be fine-tuned as follows:
|
[
"## RAG\n\nThis is a non-finetuned version of the RAG-Token model of the the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks \nby Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.\n\nRag consits of a *question encoder*, *retriever* and a *generator*. The retriever should be a 'RagRetriever' instance. The *question encoder* can be any model that can be loaded with 'AutoModel' and the *generator* can be any model that can be loaded with 'AutoModelForSeq2SeqLM'. \n\nThis model is a non-finetuned RAG-Token model and was created as follows:\n\n\n\nNote that the model is *uncased* so that all capital input letters are converted to lower-case.",
"## Usage:\n\n*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever, \nby setting 'config.index_name=\"legacy\"' and 'config.use_dummy_dataset=False'.\nThe model can be fine-tuned as follows:"
] |
[
"TAGS\n#transformers #pytorch #rag #en #dataset-wiki_dpr #arxiv-2005.11401 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"## RAG\n\nThis is a non-finetuned version of the RAG-Token model of the the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks \nby Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.\n\nRag consits of a *question encoder*, *retriever* and a *generator*. The retriever should be a 'RagRetriever' instance. The *question encoder* can be any model that can be loaded with 'AutoModel' and the *generator* can be any model that can be loaded with 'AutoModelForSeq2SeqLM'. \n\nThis model is a non-finetuned RAG-Token model and was created as follows:\n\n\n\nNote that the model is *uncased* so that all capital input letters are converted to lower-case.",
"## Usage:\n\n*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever, \nby setting 'config.index_name=\"legacy\"' and 'config.use_dummy_dataset=False'.\nThe model can be fine-tuned as follows:"
] |
null |
transformers
|
## RAG
This is the RAG-Token Model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf)
by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.
The model is an *uncased* model, which means that capital letters are simply converted to lower-case letters.
The model consists of a *question_encoder*, a *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* `train` dataset, which is linked above.
The question_encoder and generator are based on `facebook/dpr-question_encoder-single-nq-base` and `facebook/bart-large`, which were jointly finetuned
on the *wiki_dpr* QA dataset in an end-to-end fashion.
## Usage:
**Note**: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *legacy* index requires over 75 GB of RAM.
The model can generate answers to any factoid question as follows:
```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
# should give michael phelps => sounds reasonable
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["wiki_dpr"], "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png"}
|
facebook/rag-token-nq
| null |
[
"transformers",
"pytorch",
"tf",
"rag",
"en",
"dataset:wiki_dpr",
"arxiv:2005.11401",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2005.11401"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #rag #en #dataset-wiki_dpr #arxiv-2005.11401 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
## RAG
This is the RAG-Token Model of the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
by Patrick Lewis, Ethan Perez, Aleksandra Piktus et al.
The model is an *uncased* model, which means that capital letters are simply converted to lower-case letters.
The model consists of a *question_encoder*, *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* 'train' dataset, which is linked above.
The question_encoder and retriever are based on 'facebook/dpr-question_encoder-single-nq-base' and 'facebook/bart-large', which were jointly finetuned
on the *wiki_dpr* QA dataset in an end-to-end fashion.
## Usage:
Note: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *legacy* index requires over 75 GB of RAM.
The model can generate answers to any factoid question as follows:
|
[
"## RAG\n\nThis is the RAG-Token Model of the the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks \nby Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.\n\nThe model is a *uncased* model, which means that capital letters are simply converted to lower-case letters.\n\nThe model consists of a *question_encoder*, *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* 'train' datasets, which is linked above.\nThe question_encoder and retriever are based on 'facebook/dpr-question_encoder-single-nq-base' and 'facebook/bart-large', which were jointly finetuned on \non the *wiki_dpr* QA dataset in an end-to-end fashion.",
"## Usage:\n\nNote: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *lecagy* index requires over 75 GB of RAM.\nThe model can generate answers to any factoid question as follows:"
] |
[
"TAGS\n#transformers #pytorch #tf #rag #en #dataset-wiki_dpr #arxiv-2005.11401 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"## RAG\n\nThis is the RAG-Token Model of the the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks \nby Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.\n\nThe model is a *uncased* model, which means that capital letters are simply converted to lower-case letters.\n\nThe model consists of a *question_encoder*, *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* 'train' datasets, which is linked above.\nThe question_encoder and retriever are based on 'facebook/dpr-question_encoder-single-nq-base' and 'facebook/bart-large', which were jointly finetuned on \non the *wiki_dpr* QA dataset in an end-to-end fashion.",
"## Usage:\n\nNote: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *lecagy* index requires over 75 GB of RAM.\nThe model can generate answers to any factoid question as follows:"
] |
automatic-speech-recognition
|
transformers
|
# S2T-LARGE-LIBRISPEECH-ASR
`s2t-large-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-large-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-large-librispeech-asr")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
input_features = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
import soundfile as sf
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-large-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-large-librispeech-asr", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 3.3 | 7.5 |
## Training data
The S2T-LARGE-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
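For illustration only, a feature pipeline along these lines can be sketched with torchaudio; this is a rough approximation of the preprocessing described above, not the exact fairseq recipe:

```python
import torch
import torchaudio

def extract_fbank_with_cmvn(wav_path: str) -> torch.Tensor:
    # Load audio (this model family expects 16 kHz mono input)
    waveform, sample_rate = torchaudio.load(wav_path)
    # Kaldi-compliant 80-channel log mel filter bank features
    feats = torchaudio.compliance.kaldi.fbank(
        waveform, num_mel_bins=80, sample_frequency=sample_rate
    )  # shape: (num_frames, 80)
    # Utterance-level CMVN: zero mean, unit variance per feature dimension
    return (feats - feats.mean(dim=0)) / (feats.std(dim=0) + 1e-5)
```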
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
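As a hedged illustration of the SpecAugment idea (the actual fairseq masking policy uses its own mask sizes and counts), time and frequency masking can be applied to the filter bank features with torchaudio's transforms:

```python
import torch
import torchaudio

# Illustrative mask parameters, not the exact values used for this checkpoint
freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=27)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=100)

def spec_augment(feats: torch.Tensor) -> torch.Tensor:
    # feats: (num_frames, num_mel_bins); the transforms expect (..., freq, time)
    spec = feats.transpose(0, 1)
    spec = time_mask(freq_mask(spec))
    return spec.transpose(0, 1)
```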
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": "en", "license": "mit", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "model-index": [{"name": "hubert-large-ls960-ft", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 3.3, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 7.5, "name": "Test WER"}]}]}]}
|
facebook/s2t-large-librispeech-asr
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #model-index #endpoints_compatible #has_space #region-us
|
S2T-LARGE-LIBRISPEECH-ASR
=========================
's2t-large-librispeech-asr' is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in this paper and released in
this repository
Model description
-----------------
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
Intended uses & limitations
---------------------------
This model can be used for end-to-end speech recognition (ASR).
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install "transformers[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the LibriSpeech
*"clean"* and *"other"* test dataset.
*Result (WER)*:
Training data
-------------
The S2T-LARGE-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
Training procedure
------------------
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
|
[
"### How to use\n\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly\nwith 'pip install torchaudio sentencepiece'.",
"#### Evaluation on LibriSpeech Test\n\n\nThe following script shows how to evaluate this model on the LibriSpeech\n*\"clean\"* and *\"other\"* test dataset.\n\n\n*Result (WER)*:\n\n\n\nTraining data\n-------------\n\n\nThe S2T-LARGE-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of\napproximately 1000 hours of 16kHz read English speech.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.",
"### Training\n\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #model-index #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly\nwith 'pip install torchaudio sentencepiece'.",
"#### Evaluation on LibriSpeech Test\n\n\nThe following script shows how to evaluate this model on the LibriSpeech\n*\"clean\"* and *\"other\"* test dataset.\n\n\n*Result (WER)*:\n\n\n\nTraining data\n-------------\n\n\nThe S2T-LARGE-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of\napproximately 1000 hours of 16kHz read English speech.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.",
"### Training\n\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively.",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-MEDIUM-LIBRISPEECH-ASR
`s2t-medium-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-librispeech-asr")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
input_features = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from evaluate import load
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
features = processor(batch["audio"]["array"], sampling_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 3.5 | 7.8 |
## Training data
The S2T-MEDIUM-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": "en", "license": "mit", "tags": ["audio", "automatic-speech-recognition"], "datasets": ["librispeech_asr"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-medium-librispeech-asr
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #speech_to_text #automatic-speech-recognition #audio #en #dataset-librispeech_asr #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #has_space #region-us
|
S2T-MEDIUM-LIBRISPEECH-ASR
==========================
's2t-medium-librispeech-asr' is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in this paper and released in
this repository
Model description
-----------------
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
Intended uses & limitations
---------------------------
This model can be used for end-to-end speech recognition (ASR).
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install "transformers[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the LibriSpeech
*"clean"* and *"other"* test dataset.
*Result (WER)*:
Training data
-------------
The S2T-MEDIUM-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
Training procedure
------------------
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
|
[
"### How to use\n\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly\nwith 'pip install torchaudio sentencepiece'.",
"#### Evaluation on LibriSpeech Test\n\n\nThe following script shows how to evaluate this model on the LibriSpeech\n*\"clean\"* and *\"other\"* test dataset.\n\n\n*Result (WER)*:\n\n\n\nTraining data\n-------------\n\n\nThe S2T-MEDIUM-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of\napproximately 1000 hours of 16kHz read English speech.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.",
"### Training\n\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #speech_to_text #automatic-speech-recognition #audio #en #dataset-librispeech_asr #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly\nwith 'pip install torchaudio sentencepiece'.",
"#### Evaluation on LibriSpeech Test\n\n\nThe following script shows how to evaluate this model on the LibriSpeech\n*\"clean\"* and *\"other\"* test dataset.\n\n\n*Result (WER)*:\n\n\n\nTraining data\n-------------\n\n\nThe S2T-MEDIUM-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of\napproximately 1000 hours of 16kHz read English speech.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.",
"### Training\n\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively.",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-MEDIUM-MUSTC-MULTILINGUAL-ST
`s2t-medium-mustc-multilingual-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Multilingual Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to text translation into any of the eight supported target languages (German, Dutch, Spanish, French, Italian, Portuguese, Romanian and Russian).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
For multilingual speech translation models, `eos_token_id` is used as the `decoder_start_token_id` and
the target language id is forced as the first generated token. To force the target language id as the first
generated token, pass the `forced_bos_token_id` parameter to the `generate()` method. The following
example shows how to translate English speech to French and German text using the `facebook/s2t-medium-mustc-multilingual-st`
checkpoint.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
# translate English Speech To French Text
generated_ids = model.generate(
input_ids=inputs["input_features"],
attention_mask=inputs["attention_mask"],
forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"]
)
translation_fr = processor.batch_decode(generated_ids)
# translate English Speech To German Text
generated_ids = model.generate(
input_ids=inputs["input_features"],
attention_mask=inputs["attention_mask"],
forced_bos_token_id=processor.tokenizer.lang_code_to_id["de"]
)
translation_de = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-medium-mustc-multilingual-st is trained on [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for multilingual ASR. For multilingual models, target language ID token
is used as target BOS.
## Evaluation results
MuST-C test results (BLEU score):
| En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| 24.5 | 28.6 | 28.2 | 34.9 | 24.6 | 31.1 | 23.8 | 16.0 |
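For reference, BLEU for your own decodes can be computed with sacrebleu; a minimal sketch, assuming you already have lists of hypothesis and reference translations (the two lists below are placeholders):

```python
import sacrebleu

hypotheses = ["ein beispiel für eine übersetzung"]  # model outputs
references = ["ein beispiel für eine übersetzung"]  # gold translations

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.1f}")
```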
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "de", "nl", "es", "fr", "it", "pt", "ro", "ru"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition"}
|
facebook/s2t-medium-mustc-multilingual-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"de",
"nl",
"es",
"fr",
"it",
"pt",
"ro",
"ru",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"de",
"nl",
"es",
"fr",
"it",
"pt",
"ro",
"ru"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #de #nl #es #fr #it #pt #ro #ru #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
S2T-MEDIUM-MUSTC-MULTILINGUAL-ST
================================
's2t-medium-mustc-multilingual-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Multilingual Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
Model description
-----------------
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
Intended uses & limitations
---------------------------
This model can be used for end-to-end English speech to text translation into any of the eight supported target languages (German, Dutch, Spanish, French, Italian, Portuguese, Romanian and Russian).
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
For multilingual speech translation models, 'eos\_token\_id' is used as the 'decoder\_start\_token\_id' and
the target language id is forced as the first generated token. To force the target language id as the first
generated token, pass the 'forced\_bos\_token\_id' parameter to the 'generate()' method. The following
example shows how to translate English speech to French and German text using the 'facebook/s2t-medium-mustc-multilingual-st'
checkpoint.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install "transformers[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
Training data
-------------
The s2t-medium-mustc-multilingual-st is trained on MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
Training procedure
------------------
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for multilingual ASR. For multilingual models, target language ID token
is used as target BOS.
Evaluation results
------------------
MuST-C test results (BLEU score):
### BibTeX entry and citation info
|
[
"### How to use\n\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n\nFor multilingual speech translation models, 'eos\\_token\\_id' is used as the 'decoder\\_start\\_token\\_id' and\nthe target language id is forced as the first generated token. To force the target language id as the first\ngenerated token, pass the 'forced\\_bos\\_token\\_id' parameter to the 'generate()' method. The following\nexample shows how to transate English speech to French and German text using the 'facebook/s2t-medium-mustc-multilingual-st'\ncheckpoint.\n\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly\nwith 'pip install torchaudio sentencepiece'.\n\n\nTraining data\n-------------\n\n\nThe s2t-medium-mustc-multilingual-st is trained on MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.",
"### Training\n\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for multilingual ASR. For multilingual models, target language ID token\nis used as target BOS.\n\n\nEvaluation results\n------------------\n\n\nMuST-C test results (BLEU score):",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #de #nl #es #fr #it #pt #ro #ru #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"### How to use\n\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n\nFor multilingual speech translation models, 'eos\\_token\\_id' is used as the 'decoder\\_start\\_token\\_id' and\nthe target language id is forced as the first generated token. To force the target language id as the first\ngenerated token, pass the 'forced\\_bos\\_token\\_id' parameter to the 'generate()' method. The following\nexample shows how to transate English speech to French and German text using the 'facebook/s2t-medium-mustc-multilingual-st'\ncheckpoint.\n\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly\nwith 'pip install torchaudio sentencepiece'.\n\n\nTraining data\n-------------\n\n\nThe s2t-medium-mustc-multilingual-st is trained on MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.",
"### Training\n\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for multilingual ASR. For multilingual models, target language ID token\nis used as target BOS.\n\n\nEvaluation results\n------------------\n\n\nMuST-C test results (BLEU score):",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-COVOST2-CA-EN-ST
`s2t-small-covost2-ca-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end Catalan speech to English text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-ca-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-ca-en-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(input_features=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-ca-en-st is trained on Catalan-English subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
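For illustration, a character-level SentencePiece model can be trained roughly as follows (the file name and vocabulary size are placeholders, not the exact values used for this checkpoint):

```python
import sentencepiece as spm

# Train a character-level SentencePiece model on the lowercased target text.
# "train_text.txt" and the vocab size are illustrative placeholders.
spm.SentencePieceTrainer.train(
    input="train_text.txt",
    model_prefix="spm_char",
    model_type="char",
    vocab_size=200,
)

sp = spm.SentencePieceProcessor(model_file="spm_char.model")
print(sp.encode("hello world", out_type=str))
```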
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for ca-en (BLEU score): 17.85
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["ca", "en"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["covost2"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-covost2-ca-en-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"ca",
"en",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1912.06670",
"1904.08779"
] |
[
"ca",
"en"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #ca #en #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-COVOST2-CA-EN-ST
's2t-small-covost2-ca-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end Catalan speech to English text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install "transformers[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-covost2-ca-en-st is trained on Catalan-English subset of CoVoST2.
CoVoST is a large-scale multilingual ST corpus based on Common Voice, created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for ca-en (BLEU score): 17.85
### BibTeX entry and citation info
|
[
"# S2T-SMALL-COVOST2-CA-EN-ST\n\n's2t-small-covost2-ca-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end Catalan speech to English text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-ca-en-st is trained on Catalan-English subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for ca-en (BLEU score): 17.85",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #ca #en #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-COVOST2-CA-EN-ST\n\n's2t-small-covost2-ca-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end Catalan speech to English text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-ca-en-st is trained on Catalan-English subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for ca-en (BLEU score): 17.85",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-COVOST2-DE-EN-ST
`s2t-small-covost2-de-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end German speech to English text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-de-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-de-en-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(input_features=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-de-en-st is trained on German-English subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
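As a rough illustration (not the exact fairseq preprocessing script), such features could be computed with torchaudio as follows; the file name is a placeholder:

```python
import torchaudio

# Illustrative sketch; "example.wav" is a placeholder path.
waveform, sample_rate = torchaudio.load("example.wav")

# Kaldi-compatible 80-channel log mel-filter bank features: (frames, 80)
fbank = torchaudio.compliance.kaldi.fbank(
    waveform, num_mel_bins=80, sample_frequency=sample_rate
)

# Utterance-level CMVN: normalise each feature dimension over the utterance.
fbank = (fbank - fbank.mean(dim=0)) / (fbank.std(dim=0) + 1e-8)
```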
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for de-en (BLEU score): 17.58
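BLEU on CoVoST2 is a corpus-level metric; a minimal, hypothetical example of computing it with sacreBLEU (assuming `pip install sacrebleu`) looks like this:

```python
# Hypothetical example; hypotheses and references are placeholders.
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one reference stream

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```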
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["de", "en"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["covost2"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-covost2-de-en-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"de",
"en",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1912.06670",
"1904.08779"
] |
[
"de",
"en"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #de #en #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-COVOST2-DE-EN-ST
's2t-small-covost2-de-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end German speech to English text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-covost2-de-en-st is trained on the German-English subset of CoVoST2.
CoVoST is a large-scale multilingual ST corpus based on Common Voice, created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for de-en (BLEU score): 17.58
### BibTeX entry and citation info
|
[
"# S2T-SMALL-COVOST2-DE-EN-ST\n\n's2t-small-covost2-de-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end German speech to English text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-de-en-st is trained on German-English subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for de-en (BLEU score): 17.58",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #de #en #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-COVOST2-DE-EN-ST\n\n's2t-small-covost2-de-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end German speech to English text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-de-en-st is trained on German-English subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for de-en (BLEU score): 17.58",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-COVOST2-EN-CA-ST
`s2t-small-covost2-en-ca-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Catalan text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-en-ca-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-en-ca-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
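Continuing the example above, `generate` also accepts standard decoding options if translation quality matters more than speed; the beam size and length limit below are illustrative values, not the settings used to produce the reported BLEU score:

```python
# Continues the example above; beam size and length limit are illustrative.
generated_ids = model.generate(
    inputs["input_features"],
    attention_mask=inputs["attention_mask"],
    num_beams=5,
    max_length=200,
)
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```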
## Training data
The s2t-small-covost2-en-ca-st is trained on the English-Catalan subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
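For illustration only, a character-level SentencePiece vocabulary of the kind described above could be built like this; the input file and vocabulary size are placeholders, not the values used for this checkpoint:

```python
import sentencepiece as spm

# Illustrative only: "train_text.txt" and the vocabulary size are placeholders.
spm.SentencePieceTrainer.train(
    input="train_text.txt",
    model_prefix="spm_char",
    model_type="char",
    vocab_size=1000,
    hard_vocab_limit=False,  # allow a smaller vocab if the text has few characters
)

sp = spm.SentencePieceProcessor(model_file="spm_char.model")
print(sp.encode("hola món", out_type=str))
```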
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for en-ca (BLEU score): 21.68
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "ca"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["covost2"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-covost2-en-ca-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"ca",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1912.06670",
"1904.08779"
] |
[
"en",
"ca"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #ca #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-COVOST2-EN-CA-ST
's2t-small-covost2-en-ca-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Catalan text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-covost2-en-ca-st is trained on the English-Catalan subset of CoVoST2.
CoVoST is a large-scale multilingual ST corpus based on Common Voice, created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for en-ca (BLEU score): 21.68
### BibTeX entry and citation info
|
[
"# S2T-SMALL-COVOST2-EN-CA-ST\n\n's2t-small-covost2-en-ca-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Catlan text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-en-ca-st is trained on English-Catlan subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for en-ca (BLEU score): 21.68",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #ca #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-COVOST2-EN-CA-ST\n\n's2t-small-covost2-en-ca-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Catlan text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-en-ca-st is trained on English-Catlan subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for en-ca (BLEU score): 21.68",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-COVOST2-EN-DE-ST
`s2t-small-covost2-en-de-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to German text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-en-de-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-en-de-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-en-de-st is trained on the English-German subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
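As a rough sketch of the SpecAugment idea (masking random frequency bands and time spans of the input features during training), a hypothetical torchaudio-based version might look like this; the mask widths are assumptions and the actual training used fairseq's implementation:

```python
import torch
import torchaudio.transforms as T

# Illustrative SpecAugment-style masking on log mel features; the mask widths
# are assumptions, not the hyper-parameters used for this checkpoint.
freq_mask = T.FrequencyMasking(freq_mask_param=27)
time_mask = T.TimeMasking(time_mask_param=100)

features = torch.randn(1, 80, 1000)         # (batch, mel bins, frames)
augmented = time_mask(freq_mask(features))
```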
## Evaluation results
CoVOST2 test results for en-de (BLEU score): 16.29
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "de"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["covost2"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-covost2-en-de-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"de",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1912.06670",
"1904.08779"
] |
[
"en",
"de"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #de #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-COVOST2-EN-DE-ST
's2t-small-covost2-en-de-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to German text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-covost2-en-de-st is trained on the English-German subset of CoVoST2.
CoVoST is a large-scale multilingual ST corpus based on Common Voice, created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for en-de (BLEU score): 16.29
### BibTeX entry and citation info
|
[
"# S2T-SMALL-COVOST2-EN-DE-ST\n\n's2t-small-covost2-en-de-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to German text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-en-de-st is trained on English-German subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for en-de (BLEU score): 16.29",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #de #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-COVOST2-EN-DE-ST\n\n's2t-small-covost2-en-de-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to German text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-en-de-st is trained on English-German subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for en-de (BLEU score): 16.29",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-COVOST2-EN-ET-ST
`s2t-small-covost2-en-et-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Estonian text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-en-et-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-en-et-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
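Alternatively, the high-level `pipeline` API should also work with this checkpoint, shown here only as a hedged sketch:

```python
from transformers import pipeline
from datasets import load_dataset

# Hedged sketch: the ASR pipeline wraps seq2seq speech models such as S2T,
# so the returned text should be the Estonian translation.
translator = pipeline(
    "automatic-speech-recognition",
    model="facebook/s2t-small-covost2-en-et-st",
)

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
print(translator(ds[0]["audio"]["array"]))
```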
## Training data
The s2t-small-covost2-en-et-st is trained on the English-Estonian subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
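"Pre-trained for English ASR" means the encoder weights were initialised from an English ASR model before ST fine-tuning. Purely as a hypothetical sketch (not the original fairseq recipe, and assuming compatible encoder configurations), such a transfer could be expressed as:

```python
from transformers import Speech2TextConfig, Speech2TextForConditionalGeneration

# Hypothetical illustration: initialise a fresh ST model, then copy encoder
# weights from an English ASR checkpoint before fine-tuning on CoVoST2.
asr = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
st_config = Speech2TextConfig.from_pretrained("facebook/s2t-small-covost2-en-et-st")
st = Speech2TextForConditionalGeneration(st_config)

st.model.encoder.load_state_dict(asr.model.encoder.state_dict())
```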
## Evaluation results
CoVOST2 test results for en-et (BLEU score): 13.01
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "et"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["covost2"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-covost2-en-et-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"et",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1912.06670",
"1904.08779"
] |
[
"en",
"et"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #et #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-COVOST2-EN-ET-ST
's2t-small-covost2-en-et-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Estonian text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-covost2-en-et-st is trained on the English-Estonian subset of CoVoST2.
CoVoST is a large-scale multilingual ST corpus based on Common Voice, created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for en-et (BLEU score): 13.01
### BibTeX entry and citation info
|
[
"# S2T-SMALL-COVOST2-EN-ET-ST\n\n's2t-small-covost2-en-et-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Estonian text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-en-et-st is trained on English-Estonian subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for en-et (BLEU score): 13.01",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #et #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-COVOST2-EN-ET-ST\n\n's2t-small-covost2-en-et-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Estonian text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-en-et-st is trained on English-Estonian subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for en-et (BLEU score): 13.01",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-COVOST2-EN-FA-ST
`s2t-small-covost2-en-fa-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Farsi text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-en-fa-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-en-fa-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-en-fa-st is trained on the English-Farsi subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
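To make "standard autoregressive cross-entropy loss" concrete, here is a schematic, purely illustrative computation over teacher-forced targets; the tensor shapes, vocabulary size and padding id are assumptions:

```python
import torch
import torch.nn.functional as F

# Schematic teacher-forced loss: the decoder predicts token t+1 from tokens <= t.
# Shapes, vocabulary size and padding id are placeholders.
vocab_size, pad_id = 1000, 1
logits = torch.randn(2, 9, vocab_size)            # (batch, target_len - 1, vocab)
targets = torch.randint(2, vocab_size, (2, 10))   # (batch, target_len)

loss = F.cross_entropy(
    logits.reshape(-1, vocab_size),
    targets[:, 1:].reshape(-1),                   # next-token labels
    ignore_index=pad_id,
)
```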
## Evaluation results
CoVOST2 test results for en-fa (BLEU score): 11.43
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "fa"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["covost2"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-covost2-en-fa-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"fa",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1912.06670",
"1904.08779"
] |
[
"en",
"fa"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #fa #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-COVOST2-EN-FA-ST
's2t-small-covost2-en-fa-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Farsi text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-covost2-en-fa-st is trained on the English-Farsi subset of CoVoST2.
CoVoST is a large-scale multilingual ST corpus based on Common Voice, created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for en-fa (BLEU score): 11.43
### BibTeX entry and citation info
|
[
"# S2T-SMALL-COVOST2-EN-FA-ST\n\n's2t-small-covost2-en-fa-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Farsi text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-en-fa-st is trained on English-Farsi subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for en-fa (BLEU score): 11.43",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #fa #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-COVOST2-EN-FA-ST\n\n's2t-small-covost2-en-fa-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Farsi text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-en-fa-st is trained on English-Farsi subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for en-fa (BLEU score): 11.43",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-COVOST2-ES-EN-ST
`s2t-small-covost2-es-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end Spanish speech to English text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-es-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-es-en-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(input_features=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-es-en-st is trained on the Spanish-English subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest-ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using a character-based SentencePiece vocabulary.
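For illustration only, a minimal sketch of how 80-dimensional Kaldi-style filter bank features and utterance-level CMVN could be computed with torchaudio (the file path and defaults below are assumptions, not the exact fairseq/PyKaldi recipe used for this checkpoint):
```python
import torchaudio

# load one utterance (path is illustrative) and compute 80-dim log mel filter bank features
waveform, sample_rate = torchaudio.load("utterance.wav")
fbank = torchaudio.compliance.kaldi.fbank(
    waveform,
    num_mel_bins=80,
    sample_frequency=sample_rate,
)  # shape: (num_frames, 80)

# utterance-level CMVN: zero mean, unit variance per feature dimension
fbank = (fbank - fbank.mean(dim=0)) / (fbank.std(dim=0) + 1e-8)
```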
### Training
The model is trained with standard autoregressive cross-entropy loss and [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the translations autoregressively. To accelerate
model training and improve performance, the encoder is pre-trained on English ASR.
## Evaluation results
CoVoST2 test results for es-en (BLEU score): 22.31
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["es", "en"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["covost2"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-covost2-es-en-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"es",
"en",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1912.06670",
"1904.08779"
] |
[
"es",
"en"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #es #en #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-COVOST2-ES-EN-ST
's2t-small-covost2-es-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end Spanish speech to English text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
translations by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-covost2-es-en-st is trained on the Spanish-English subset of CoVoST2.
CoVoST is a large-scale multilingual ST corpus based on Common Voice, created to foster
ST research with the largest-ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using a character-based SentencePiece vocabulary.
### Training
The model is trained with standard autoregressive cross-entropy loss and SpecAugment.
The encoder receives speech features, and the decoder generates the translations autoregressively. To accelerate
model training and improve performance, the encoder is pre-trained on English ASR.
## Evaluation results
CoVoST2 test results for es-en (BLEU score): 22.31
### BibTeX entry and citation info
|
[
"# S2T-SMALL-COVOST2-ES-EN-ST\n\n's2t-small-covost2-es-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end Spanish speech to English text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-es-en-st is trained on Spanish-English subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for es-en (BLEU score): 22.31",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #es #en #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-COVOST2-ES-EN-ST\n\n's2t-small-covost2-es-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end Spanish speech to English text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-es-en-st is trained on Spanish-English subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for es-en (BLEU score): 22.31",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-COVOST2-FR-EN-ST
`s2t-small-covost2-fr-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end French speech to English text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-fr-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-fr-en-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(input_features=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-fr-en-st is trained on the French-English subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest-ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using a character-based SentencePiece vocabulary.
### Training
The model is trained with standard autoregressive cross-entropy loss and [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the translations autoregressively. To accelerate
model training and improve performance, the encoder is pre-trained on English ASR.
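To give a rough idea of what SpecAugment does (masking random frequency bands and time spans in the feature matrix), here is a hedged sketch using torchaudio's masking transforms; the mask sizes are arbitrary placeholders, not the values used to train this checkpoint:
```python
import torch
import torchaudio

# dummy batch of log mel features with shape (batch, num_mel_bins, time)
features = torch.randn(8, 80, 500)

freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=27)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=100)

# apply frequency masking followed by time masking
augmented = time_mask(freq_mask(features))
```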
## Evaluation results
CoVoST2 test results for fr-en (BLEU score): 26.25
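BLEU scores like this are commonly computed with sacrebleu; a small, hypothetical example (the sentences below are made up and not from CoVoST2) could look like:
```python
import sacrebleu

# made-up system outputs and references, one entry per test utterance
hypotheses = ["the cat sits on the mat"]
references = [["the cat is sitting on the mat"]]  # a single reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```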
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["fr", "en"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["covost2"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-covost2-fr-en-st
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"fr",
"en",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1912.06670",
"1904.08779"
] |
[
"fr",
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #speech_to_text #automatic-speech-recognition #audio #speech-translation #fr #en #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-COVOST2-FR-EN-ST
's2t-small-covost2-fr-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end French speech to English text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
translations by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-covost2-fr-en-st is trained on the French-English subset of CoVoST2.
CoVoST is a large-scale multilingual ST corpus based on Common Voice, created to foster
ST research with the largest-ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using a character-based SentencePiece vocabulary.
### Training
The model is trained with standard autoregressive cross-entropy loss and SpecAugment.
The encoder receives speech features, and the decoder generates the translations autoregressively. To accelerate
model training and improve performance, the encoder is pre-trained on English ASR.
## Evaluation results
CoVoST2 test results for fr-en (BLEU score): 26.25
### BibTeX entry and citation info
|
[
"# S2T-SMALL-COVOST2-FR-EN-ST\n\n's2t-small-covost2-fr-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end French speech to English text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-fr-en-st is trained on French-English subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for fr-en (BLEU score): 26.25",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #speech_to_text #automatic-speech-recognition #audio #speech-translation #fr #en #dataset-covost2 #arxiv-2010.05171 #arxiv-1912.06670 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-COVOST2-FR-EN-ST\n\n's2t-small-covost2-fr-en-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end French speech to English text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-covost2-fr-en-st is trained on French-English subset of CoVoST2.\nCoVoST is a large-scale multilingual ST corpus based on Common Voice, created to to foster \nST research with the largest ever open dataset",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using character based SentencePiece vocab.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nCoVOST2 test results for fr-en (BLEU score): 26.25",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-LIBRISPEECH-ASR
`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
*Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece)
so be sure to install those packages before running the examples.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
input_features = processor(
ds[0]["audio"]["array"],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from evaluate import load
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
features = processor(batch["audio"]["array"], sampling_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 4.3 | 9.0 |
## Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 10,000.
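As a hypothetical illustration of how such a 10,000-piece subword vocabulary could be built with the sentencepiece library (file names, model type, and options are assumptions rather than the exact recipe used for this checkpoint):
```python
import sentencepiece as spm

# train a 10k-piece model on lowercased transcripts, one sentence per line (illustrative file name)
spm.SentencePieceTrainer.train(
    input="transcripts.txt",
    model_prefix="spm_10000",
    vocab_size=10000,
    model_type="unigram",
)

# tokenize a sample sentence with the trained model
sp = spm.SentencePieceProcessor(model_file="spm_10000.model")
print(sp.encode("she had your dark suit in greasy wash water all year", out_type=str))
```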
### Training
The model is trained with standard autoregressive cross-entropy loss and [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": "en", "license": "mit", "tags": ["speech", "audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "s2t-small-librispeech-asr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 4.3, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 9.0, "name": "Test WER"}]}]}]}
|
facebook/s2t-small-librispeech-asr
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"speech_to_text",
"automatic-speech-recognition",
"speech",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #speech_to_text #automatic-speech-recognition #speech #audio #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #model-index #endpoints_compatible #has_space #region-us
|
S2T-SMALL-LIBRISPEECH-ASR
=========================
's2t-small-librispeech-asr' is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in this paper and released in
this repository
Model description
-----------------
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
Intended uses & limitations
---------------------------
This model can be used for end-to-end speech recognition (ASR).
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
*Note: The feature extractor depends on torchaudio and the tokenizer depends on sentencepiece
so be sure to install those packages before running the examples.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the LibriSpeech
*"clean"* and *"other"* test dataset.
*Result (WER)*:
Training data
-------------
The S2T-SMALL-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
Training procedure
------------------
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
|
[
"### How to use\n\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\n\n*Note: The feature extractor depends on torchaudio and the tokenizer depends on sentencepiece\nso be sure to install those packages before running the examples.*\n\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly\nwith 'pip install torchaudio sentencepiece'.",
"#### Evaluation on LibriSpeech Test\n\n\nThe following script shows how to evaluate this model on the LibriSpeech\n*\"clean\"* and *\"other\"* test dataset.\n\n\n*Result (WER)*:\n\n\n\nTraining data\n-------------\n\n\nThe S2T-SMALL-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of\napproximately 1000 hours of 16kHz read English speech.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.",
"### Training\n\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #speech_to_text #automatic-speech-recognition #speech #audio #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #model-index #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\n\n*Note: The feature extractor depends on torchaudio and the tokenizer depends on sentencepiece\nso be sure to install those packages before running the examples.*\n\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly\nwith 'pip install torchaudio sentencepiece'.",
"#### Evaluation on LibriSpeech Test\n\n\nThe following script shows how to evaluate this model on the LibriSpeech\n*\"clean\"* and *\"other\"* test dataset.\n\n\n*Result (WER)*:\n\n\n\nTraining data\n-------------\n\n\nThe S2T-SMALL-LIBRISPEECH-ASR is trained on LibriSpeech ASR Corpus, a dataset consisting of\napproximately 1000 hours of 16kHz read English speech.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.",
"### Training\n\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively.",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-MUSTC-EN-DE-ST
`s2t-small-mustc-en-de-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to German text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-de-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-de-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(input_features=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
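The example above passes `sampling_rate=16_000` to the processor. If your own recordings use a different sampling rate, a simple (illustrative, not from the original card) way to resample them first is:
```python
import torchaudio

# load a recording (path is illustrative) and resample it to 16 kHz before calling the processor
waveform, orig_sr = torchaudio.load("my_recording.wav")
if orig_sr != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=orig_sr, new_freq=16_000)
```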
## Training data
The s2t-small-mustc-en-de-st is trained on the English-German subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitate the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the translations autoregressively. To accelerate
model training and improve performance, the encoder is pre-trained on English ASR.
## Evaluation results
MuST-C test results for en-de (BLEU score): 22.7
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "de"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-mustc-en-de-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"de",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"de"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #de #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-MUSTC-EN-DE-ST
's2t-small-mustc-en-de-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to German text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
translations by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-mustc-en-de-st is trained on the English-German subset of MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitate the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and SpecAugment.
The encoder receives speech features, and the decoder generates the translations autoregressively. To accelerate
model training and improve performance, the encoder is pre-trained on English ASR.
## Evaluation results
MuST-C test results for en-de (BLEU score): 22.7
### BibTeX entry and citation info
|
[
"# S2T-SMALL-MUSTC-EN-DE-ST\n\n's2t-small-mustc-en-de-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to German text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-de-st is trained on English-German subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-de (BLEU score): 22.7",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #de #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-MUSTC-EN-DE-ST\n\n's2t-small-mustc-en-de-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to German text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-de-st is trained on English-German subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-de (BLEU score): 22.7",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-MUSTC-EN-ES-ST
`s2t-small-mustc-en-es-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Spanish text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-es-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-es-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-es-st is trained on the English-Spanish subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
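For illustration, here is a minimal sketch of this feature-extraction step using torchaudio; the path `audio.wav` is a placeholder, and the exact extraction settings used for training may differ.
```python
import torchaudio

# Load a 16 kHz mono recording (placeholder path)
waveform, sample_rate = torchaudio.load("audio.wav")  # shape: (channels, time)

# Kaldi-compliant 80-channel log mel-filter bank features
features = torchaudio.compliance.kaldi.fbank(
    waveform,
    num_mel_bins=80,
    sample_frequency=sample_rate,
)  # shape: (num_frames, 80)

# Utterance-level CMVN: normalize each filter bank channel over time
features = (features - features.mean(dim=0)) / (features.std(dim=0) + 1e-10)
```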
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-es (BLEU score): 27.2
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "es"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-mustc-en-es-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"es",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"es"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #es #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-MUSTC-EN-ES-ST
's2t-small-mustc-en-es-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Spanish text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-mustc-en-es-st is trained on the English-Spanish subset of MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-es (BLEU score): 27.2
### BibTeX entry and citation info
|
[
"# S2T-SMALL-MUSTC-EN-ES-ST\n\n's2t-small-mustc-en-es-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Spanish text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-es-st is trained on English-Spanish subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-es (BLEU score): 27.2",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #es #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-MUSTC-EN-ES-ST\n\n's2t-small-mustc-en-es-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Spanish text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-es-st is trained on English-Spanish subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-es (BLEU score): 27.2",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-MUSTC-EN-FR-ST
`s2t-small-mustc-en-fr-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to French text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-fr-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-fr-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-fr-st is trained on the English-French subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
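As a rough illustration of SpecAugment-style augmentation, the sketch below applies torchaudio's frequency and time masking transforms to a batch of filter bank features; the mask sizes are arbitrary examples, not the values used to train this checkpoint.
```python
import torch
import torchaudio

features = torch.randn(500, 80)         # (num_frames, num_mel_bins), e.g. fbank output
spec = features.t().unsqueeze(0)        # -> (batch, freq, time), the layout the transforms expect

freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=27)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=100)

augmented = time_mask(freq_mask(spec))  # zeroes out a random frequency band and a random time span
```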
## Evaluation results
MuST-C test results for en-fr (BLEU score): 32.9
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "fr"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-mustc-en-fr-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"fr",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"fr"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #fr #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-MUSTC-EN-FR-ST
's2t-small-mustc-en-fr-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to French text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-mustc-en-fr-st is trained on the English-French subset of MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-fr (BLEU score): 32.9
### BibTeX entry and citation info
|
[
"# S2T-SMALL-MUSTC-EN-FR-ST\n\n's2t-small-mustc-en-fr-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to French text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-fr-st is trained on English-French subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-fr (BLEU score): 32.9",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #fr #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-MUSTC-EN-FR-ST\n\n's2t-small-mustc-en-fr-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to French text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-fr-st is trained on English-French subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-fr (BLEU score): 32.9",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-MUSTC-EN-IT-ST
`s2t-small-mustc-en-it-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Italian text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-it-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-it-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-it-st is trained on the English-Italian subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
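As an illustration of this tokenization step, the sketch below trains a SentencePiece model with an 8,000-token vocabulary on a hypothetical text file `train_text.txt` (one lowercased sentence per line); the remaining options are illustrative defaults rather than the exact training configuration.
```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train_text.txt",           # placeholder corpus, one sentence per line
    model_prefix="spm_unigram_8000",
    vocab_size=8000,
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="spm_unigram_8000.model")
print(sp.encode("this is a test sentence .", out_type=str))
```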
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-it (BLEU score): 22.7
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "it"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-mustc-en-it-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"it",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"it"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #it #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-MUSTC-EN-IT-ST
's2t-small-mustc-en-it-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Italian text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-mustc-en-it-st is trained on the English-Italian subset of MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-it (BLEU score): 22.7
### BibTeX entry and citation info
|
[
"# S2T-SMALL-MUSTC-EN-IT-ST\n\n's2t-small-mustc-en-it-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Italian text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-it-st is trained on English-Italian subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-it (BLEU score): 22.7",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #it #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-MUSTC-EN-IT-ST\n\n's2t-small-mustc-en-it-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Italian text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-it-st is trained on English-Italian subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-it (BLEU score): 22.7",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-MUSTC-EN-NL-ST
`s2t-small-mustc-en-nl-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
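To make the length reduction concrete, here is an illustrative (not the actual) downsampler: two strided 1-D convolutions, each halving the time dimension, so roughly 1/4 of the input frames reach the encoder.
```python
import torch
import torch.nn as nn

class ConvDownsampler(nn.Module):
    def __init__(self, in_dim=80, hidden_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, hidden_dim, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
        )

    def forward(self, features):          # features: (batch, frames, mel_bins)
        x = features.transpose(1, 2)      # -> (batch, mel_bins, frames)
        x = self.conv(x)                  # time dimension shrinks by roughly 4x
        return x.transpose(1, 2)          # -> (batch, frames // 4, hidden_dim)

downsampler = ConvDownsampler()
out = downsampler(torch.randn(1, 584, 80))
print(out.shape)  # torch.Size([1, 146, 256])
```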
## Intended uses & limitations
This model can be used for end-to-end English speech to Dutch text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-nl-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-nl-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-nl-st is trained on the English-Dutch subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-nl (BLEU score): 27.3
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "nl"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-mustc-en-nl-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"nl",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"nl"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #nl #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-MUSTC-EN-NL-ST
's2t-small-mustc-en-nl-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Dutch text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You could either install those as extra speech dependencies with
'pip install transformers"[speech, sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-mustc-en-nl-st is trained on the English-Dutch subset of MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using SpecAugment.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-nl (BLEU score): 27.3
### BibTeX entry and citation info
|
[
"# S2T-SMALL-MUSTC-EN-NL-ST\n\n's2t-small-mustc-en-nl-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Dutch text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-nl-st is trained on English-Dutch subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-nl (BLEU score): 27.3",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #nl #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-MUSTC-EN-NL-ST\n\n's2t-small-mustc-en-nl-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Dutch text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-nl-st is trained on English-Dutch subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-nl (BLEU score): 27.3",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-MUSTC-EN-PT-ST
`s2t-small-mustc-en-pt-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Portuguese text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-pt-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-pt-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-pt-st is trained on the English-Portuguese subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-pt (BLEU score): 28.1
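For reference, a BLEU score like this one can be computed on generated translations with sacrebleu; the hypothesis and reference below are toy placeholders rather than MuST-C data.
```python
import sacrebleu

hypotheses = ["o gato está no tapete"]
references = [["o gato está sobre o tapete"]]  # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```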
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "pt"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-mustc-en-pt-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"pt",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"pt"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #pt #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-MUSTC-EN-PT-ST
's2t-small-mustc-en-pt-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Portuguese text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
translations by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You can either install these as extra speech dependencies with
'pip install "transformers[speech,sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-mustc-en-pt-st model is trained on the English-Portuguese subset of MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with the standard autoregressive cross-entropy loss, with SpecAugment applied as data augmentation.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
training and improve performance, the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-pt (BLEU score): 28.1
### BibTeX entry and citation info
|
[
"# S2T-SMALL-MUSTC-EN-PT-ST\n\n's2t-small-mustc-en-pt-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Portuguese text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-pt-st is trained on English-Portuguese subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-pt (BLEU score): 28.1",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #pt #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-MUSTC-EN-PT-ST\n\n's2t-small-mustc-en-pt-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Portuguese text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-pt-st is trained on English-Portuguese subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-pt (BLEU score): 28.1",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-MUSTC-EN-RO-ST
`s2t-small-mustc-en-ro-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Romanian text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You can either install these as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-ro-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-ro-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
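Note that the processor expects 16 kHz mono audio. If your recordings use a different sampling rate, resampling them first keeps the inputs compatible; the snippet below is a small sketch assuming a hypothetical local file and that torchaudio is installed.
```python
import torchaudio

waveform, orig_sr = torchaudio.load("my_recording.wav")  # hypothetical file at an arbitrary rate
if orig_sr != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=orig_sr, new_freq=16_000)

# the resampled waveform (as a 1-D array) can now be passed to the processor with sampling_rate=16_000
```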
## Training data
The s2t-small-mustc-en-ro-st model is trained on the English-Romanian subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with the standard autoregressive cross-entropy loss, with [SpecAugment](https://arxiv.org/abs/1904.08779) applied as data augmentation.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
training and improve performance, the encoder is pre-trained for English ASR.
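As an illustration of the SpecAugment idea, time and frequency masking can be applied to the filter bank features with torchaudio transforms. This is a sketch only; the mask widths below are values commonly quoted for SpecAugment and are not necessarily the policy used to train this checkpoint.
```python
import torch
import torchaudio.transforms as T

# dummy batch of log mel features with shape (batch, num_mel_bins, num_frames)
features = torch.randn(1, 80, 300)

freq_mask = T.FrequencyMasking(freq_mask_param=27)   # mask up to 27 mel channels
time_mask = T.TimeMasking(time_mask_param=100)       # mask up to 100 frames

augmented = time_mask(freq_mask(features))
```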
## Evaluation results
MuST-C test results for en-ro (BLEU score): 21.9
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "ro"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-mustc-en-ro-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"ro",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"ro"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #ro #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-MUSTC-EN-RO-ST
's2t-small-mustc-en-ro-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Romanian text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
translations by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You can either install these as extra speech dependencies with
'pip install "transformers[speech,sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-mustc-en-ro-st model is trained on the English-Romanian subset of MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with the standard autoregressive cross-entropy loss, with SpecAugment applied as data augmentation.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
training and improve performance, the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-ro (BLEU score): 21.9
### BibTeX entry and citation info
|
[
"# S2T-SMALL-MUSTC-EN-RO-ST\n\n's2t-small-mustc-en-ro-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Romanian text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-ro-st is trained on English-Romanian subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-ro (BLEU score): 21.9",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #ro #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-MUSTC-EN-RO-ST\n\n's2t-small-mustc-en-ro-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Romanian text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-ro-st is trained on English-Romanian subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-ro (BLEU score): 21.9",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T-SMALL-MUSTC-EN-RU-ST
`s2t-small-mustc-en-ru-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Russian text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You can either install these as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-ru-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-ru-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-ru-st model is trained on the English-Russian subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
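As a rough illustration of that tokenization step, a unigram SentencePiece model with an 8,000-token vocabulary could be trained as follows. The file path and options are assumptions for the sketch, not the released preprocessing script.
```python
import sentencepiece as spm

# train on a plain-text file with one lowercased target sentence per line (hypothetical path)
spm.SentencePieceTrainer.train(
    input="train_texts_lowercased.txt",
    model_prefix="spm_unigram_8000",
    vocab_size=8000,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="spm_unigram_8000.model")
print(sp.encode("this is a test sentence", out_type=str))
```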
### Training
The model is trained with the standard autoregressive cross-entropy loss, with [SpecAugment](https://arxiv.org/abs/1904.08779) applied as data augmentation.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
training and improve performance, the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-ru (BLEU score): 15.3
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
{"language": ["en", "ru"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition"], "datasets": ["mustc"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
|
facebook/s2t-small-mustc-en-ru-st
| null |
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"ru",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.05171",
"1904.08779"
] |
[
"en",
"ru"
] |
TAGS
#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #ru #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us
|
# S2T-SMALL-MUSTC-EN-RU-ST
's2t-small-mustc-en-ru-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in this paper and released in
this repository
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Russian text translation.
See the model hub to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
translations by passing the speech features to the model.
*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the
filter bank features. Make sure to install the 'torchaudio' package before running this example.*
You can either install these as extra speech dependencies with
'pip install "transformers[speech,sentencepiece]"' or install the packages separately
with 'pip install torchaudio sentencepiece'.
## Training data
The s2t-small-mustc-en-ru-st model is trained on the English-Russian subset of MuST-C.
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with the standard autoregressive cross-entropy loss, with SpecAugment applied as data augmentation.
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
training and improve performance, the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-ru (BLEU score): 15.3
### BibTeX entry and citation info
|
[
"# S2T-SMALL-MUSTC-EN-RU-ST\n\n's2t-small-mustc-en-ru-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Russian text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-ru-st is trained on English-Russian subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-ru (BLEU score): 15.3",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #speech_to_text #automatic-speech-recognition #audio #speech-translation #en #ru #dataset-mustc #arxiv-2010.05171 #arxiv-1904.08779 #license-mit #endpoints_compatible #region-us \n",
"# S2T-SMALL-MUSTC-EN-RU-ST\n\n's2t-small-mustc-en-ru-st' is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).\nThe S2T model was proposed in this paper and released in\nthis repository",
"## Model description\n\nS2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are\nfed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the\ntranscripts/translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Russian text translation.\nSee the model hub to look for other S2T checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\n*Note: The 'Speech2TextProcessor' object uses torchaudio to extract the\nfilter bank features. Make sure to install the 'torchaudio' package before running this example.*\n\nYou could either install those as extra speech dependancies with\n'pip install transformers\"[speech, sentencepiece]\"' or install the packages seperatly \nwith 'pip install torchaudio sentencepiece'.",
"## Training data\n\nThe s2t-small-mustc-en-ru-st is trained on English-Russian subset of MuST-C.\nMuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems\nfor speech translation from English into several languages. For each target language, MuST-C comprises several hundred\nhours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual\ntranscriptions and translations.",
"## Training procedure",
"### Preprocessing\n\nThe speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from\nWAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)\nis applied to each example.\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.",
"### Training\n\nThe model is trained with standard autoregressive cross-entropy loss and using SpecAugment.\nThe encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate\nmodel training and for better performance the encoder is pre-trained for English ASR.",
"## Evaluation results\n\nMuST-C test results for en-ru (BLEU score): 15.3",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T2-Wav2Vec2-CoVoST2-EN-AR-ST
`s2t-wav2vec2-large-en-ar` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in
[Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
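To make the encoder/decoder split concrete, a model with this structure can in principle be assembled from separately pretrained checkpoints via `SpeechEncoderDecoderModel`. The checkpoint names below are illustrative only; this is not the recipe used to build `s2t-wav2vec2-large-en-ar`.
```python
from transformers import SpeechEncoderDecoderModel

# combine a pretrained Wav2Vec2 speech encoder with a pretrained text decoder (names are examples)
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/wav2vec2-large-lv60", "bert-base-uncased"
)
# before fine-tuning, decoder_start_token_id and pad_token_id still need to be set on model.config
```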
## Intended uses & limitations
This model can be used for end-to-end English speech to Arabic text translation.
See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-ar", feature_extractor="facebook/s2t-wav2vec2-large-en-ar")
translation = asr(librispeech_en[0]["file"])
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-ar")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-ar")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Evaluation results
CoVoST-V2 test results for en-ar (BLEU score): **20.2**
For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-06678,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino and
Alexei Baevski and
Michael Auli and
Alexis Conneau},
title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation},
journal = {CoRR},
volume = {abs/2104.06678},
year = {2021},
url = {https://arxiv.org/abs/2104.06678},
archivePrefix = {arXiv},
eprint = {2104.06678},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": ["en", "ar"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition", "speech2text2"], "datasets": ["covost2", "librispeech_asr"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Common Voice 1", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3"}, {"example_title": "Common Voice 2", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_99987.mp3"}, {"example_title": "Common Voice 3", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_99988.mp3"}]}
|
facebook/s2t-wav2vec2-large-en-ar
| null |
[
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"audio",
"speech-translation",
"speech2text2",
"en",
"ar",
"dataset:covost2",
"dataset:librispeech_asr",
"arxiv:2104.06678",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06678"
] |
[
"en",
"ar"
] |
TAGS
#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #audio #speech-translation #speech2text2 #en #ar #dataset-covost2 #dataset-librispeech_asr #arxiv-2104.06678 #license-mit #endpoints_compatible #has_space #region-us
|
# S2T2-Wav2Vec2-CoVoST2-EN-AR-ST
's2t-wav2vec2-large-en-ar' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in
Fairseq.
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Arabic text translation.
See the model hub to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
translations by passing the speech features to the model.
You can use the model directly via the ASR pipeline
or step-by-step as follows:
## Evaluation results
CoVoST-V2 test results for en-ar (BLEU score): 20.2
For more information, please have a look at the official paper - especially row 10 of Table 2.
### BibTeX entry and citation info
|
[
"# S2T2-Wav2Vec2-CoVoST2-EN-AR-ST\n\n's2t-wav2vec2-large-en-ar' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).\nThe S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in\nFairseq.",
"## Model description\n\nS2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Arabic text translation.\nSee the model hub to look for other S2T2 checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\nYou can use the model directly via the ASR pipeline\n\n\n\nor step-by-step as follows:",
"## Evaluation results\n\nCoVoST-V2 test results for en-ar (BLEU score): 20.2\n\nFor more information, please have a look at the official paper - especially row 10 of Table 2.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #audio #speech-translation #speech2text2 #en #ar #dataset-covost2 #dataset-librispeech_asr #arxiv-2104.06678 #license-mit #endpoints_compatible #has_space #region-us \n",
"# S2T2-Wav2Vec2-CoVoST2-EN-AR-ST\n\n's2t-wav2vec2-large-en-ar' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).\nThe S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in\nFairseq.",
"## Model description\n\nS2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Arabic text translation.\nSee the model hub to look for other S2T2 checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\nYou can use the model directly via the ASR pipeline\n\n\n\nor step-by-step as follows:",
"## Evaluation results\n\nCoVoST-V2 test results for en-ar (BLEU score): 20.2\n\nFor more information, please have a look at the official paper - especially row 10 of Table 2.",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T2-Wav2Vec2-CoVoST2-EN-CA-ST
`s2t-wav2vec2-large-en-ca` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in
[Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Catalan text translation.
See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-ca", feature_extractor="facebook/s2t-wav2vec2-large-en-ca")
translation = asr(librispeech_en[0]["file"])
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-ca")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-ca")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Evaluation results
CoVoST-V2 test results for en-ca (BLEU score): **34.1**
For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-06678,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino and
Alexei Baevski and
Michael Auli and
Alexis Conneau},
title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation},
journal = {CoRR},
volume = {abs/2104.06678},
year = {2021},
url = {https://arxiv.org/abs/2104.06678},
archivePrefix = {arXiv},
eprint = {2104.06678},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": ["en", "ca"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition", "speech2text2"], "datasets": ["covost2", "librispeech_asr"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Common Voice 1", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3"}, {"example_title": "Common Voice 2", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_99989.mp3"}, {"example_title": "Common Voice 3", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_9999.mp3"}]}
|
facebook/s2t-wav2vec2-large-en-ca
| null |
[
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"audio",
"speech-translation",
"speech2text2",
"en",
"ca",
"dataset:covost2",
"dataset:librispeech_asr",
"arxiv:2104.06678",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06678"
] |
[
"en",
"ca"
] |
TAGS
#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #audio #speech-translation #speech2text2 #en #ca #dataset-covost2 #dataset-librispeech_asr #arxiv-2104.06678 #license-mit #endpoints_compatible #region-us
|
# S2T2-Wav2Vec2-CoVoST2-EN-CA-ST
's2t-wav2vec2-large-en-ca' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in
Fairseq.
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Catalan text translation.
See the model hub to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
translations by passing the speech features to the model.
You can use the model directly via the ASR pipeline
or step-by-step as follows:
## Evaluation results
CoVoST-V2 test results for en-ca (BLEU score): 34.1
For more information, please have a look at the official paper - especially row 10 of Table 2.
### BibTeX entry and citation info
|
[
"# S2T2-Wav2Vec2-CoVoST2-EN-CA-ST\n\n's2t-wav2vec2-large-en-ca' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).\nThe S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in\nFairseq.",
"## Model description\n\nS2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Catalan text translation.\nSee the model hub to look for other S2T2 checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\nYou can use the model directly via the ASR pipeline\n\n\n\nor step-by-step as follows:",
"## Evaluation results\n\nCoVoST-V2 test results for en-ca (BLEU score): 34.1\n\nFor more information, please have a look at the official paper - especially row 10 of Table 2.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #audio #speech-translation #speech2text2 #en #ca #dataset-covost2 #dataset-librispeech_asr #arxiv-2104.06678 #license-mit #endpoints_compatible #region-us \n",
"# S2T2-Wav2Vec2-CoVoST2-EN-CA-ST\n\n's2t-wav2vec2-large-en-ca' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).\nThe S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in\nFairseq.",
"## Model description\n\nS2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Catalan text translation.\nSee the model hub to look for other S2T2 checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\nYou can use the model directly via the ASR pipeline\n\n\n\nor step-by-step as follows:",
"## Evaluation results\n\nCoVoST-V2 test results for en-ca (BLEU score): 34.1\n\nFor more information, please have a look at the official paper - especially row 10 of Table 2.",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T2-Wav2Vec2-CoVoST2-EN-DE-ST
`s2t-wav2vec2-large-en-de` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in
[Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to German text translation.
See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-de", feature_extractor="facebook/s2t-wav2vec2-large-en-de")
translation_de = asr(librispeech_en[0]["file"])
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Evaluation results
CoVoST-V2 test results for en-de (BLEU score): **26.5**
For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-06678,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino and
Alexei Baevski and
Michael Auli and
Alexis Conneau},
title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation},
journal = {CoRR},
volume = {abs/2104.06678},
year = {2021},
url = {https://arxiv.org/abs/2104.06678},
archivePrefix = {arXiv},
eprint = {2104.06678},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": ["en", "de"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition", "speech2text2"], "datasets": ["covost2", "librispeech_asr"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Common Voice 1", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3"}, {"example_title": "Common Voice 2", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_99985.mp3"}, {"example_title": "Common Voice 3", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_99986.mp3"}]}
|
facebook/s2t-wav2vec2-large-en-de
| null |
[
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"audio",
"speech-translation",
"speech2text2",
"en",
"de",
"dataset:covost2",
"dataset:librispeech_asr",
"arxiv:2104.06678",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06678"
] |
[
"en",
"de"
] |
TAGS
#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #audio #speech-translation #speech2text2 #en #de #dataset-covost2 #dataset-librispeech_asr #arxiv-2104.06678 #license-mit #endpoints_compatible #has_space #region-us
|
# S2T2-Wav2Vec2-CoVoST2-EN-DE-ST
's2t-wav2vec2-large-en-de' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in
Fairseq.
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to German text translation.
See the model hub to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
or step-by-step as follows:
## Evaluation results
CoVoST-V2 test results for en-de (BLEU score): 26.5
For more information, please have a look at the official paper - especially row 10 of Table 2.
### BibTeX entry and citation info
|
[
"# S2T2-Wav2Vec2-CoVoST2-EN-DE-ST\n\n's2t-wav2vec2-large-en-de' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).\nThe S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in\nFairseq.",
"## Model description\n\nS2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to German text translation.\nSee the model hub to look for other S2T2 checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\nYou can use the model directly via the ASR pipeline\n\n\n\nor step-by-step as follows:",
"## Evaluation results\n\nCoVoST-V2 test results for en-de (BLEU score): 26.5\n\nFor more information, please have a look at the official paper - especially row 10 of Table 2.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #audio #speech-translation #speech2text2 #en #de #dataset-covost2 #dataset-librispeech_asr #arxiv-2104.06678 #license-mit #endpoints_compatible #has_space #region-us \n",
"# S2T2-Wav2Vec2-CoVoST2-EN-DE-ST\n\n's2t-wav2vec2-large-en-de' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).\nThe S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in\nFairseq.",
"## Model description\n\nS2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to German text translation.\nSee the model hub to look for other S2T2 checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\nYou can use the model directly via the ASR pipeline\n\n\n\nor step-by-step as follows:",
"## Evaluation results\n\nCoVoST-V2 test results for en-de (BLEU score): 26.5\n\nFor more information, please have a look at the official paper - especially row 10 of Table 2.",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# S2T2-Wav2Vec2-CoVoST2-EN-TR-ST
`s2t-wav2vec2-large-en-tr` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in
[Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Turkish text translation.
See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-tr", feature_extractor="facebook/s2t-wav2vec2-large-en-tr")
translation = asr(librispeech_en[0]["file"])
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-tr")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-tr")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs=inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Evaluation results
CoVoST-V2 test results for en-tr (BLEU score): **17.5**
For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-06678,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino and
Alexei Baevski and
Michael Auli and
Alexis Conneau},
title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation},
journal = {CoRR},
volume = {abs/2104.06678},
year = {2021},
url = {https://arxiv.org/abs/2104.06678},
archivePrefix = {arXiv},
eprint = {2104.06678},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": ["en", "tr"], "license": "mit", "tags": ["audio", "speech-translation", "automatic-speech-recognition", "speech2text2"], "datasets": ["covost2", "librispeech_asr"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Common Voice 1", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_99989.mp3"}, {"example_title": "Common Voice 2", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_99986.mp3"}, {"example_title": "Common Voice 3", "src": "https://cdn-media.huggingface.co/speech_samples/common_voice_en_99987.mp3"}]}
|
facebook/s2t-wav2vec2-large-en-tr
| null |
[
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"audio",
"speech-translation",
"speech2text2",
"en",
"tr",
"dataset:covost2",
"dataset:librispeech_asr",
"arxiv:2104.06678",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06678"
] |
[
"en",
"tr"
] |
TAGS
#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #audio #speech-translation #speech2text2 #en #tr #dataset-covost2 #dataset-librispeech_asr #arxiv-2104.06678 #license-mit #endpoints_compatible #has_space #region-us
|
# S2T2-Wav2Vec2-CoVoST2-EN-TR-ST
's2t-wav2vec2-large-en-tr' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in
Fairseq.
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Turkish text translation.
See the model hub to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the 'generate' method to generate the
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
or step-by-step as follows:
## Evaluation results
CoVoST-V2 test results for en-tr (BLEU score): 17.5
For more information, please have a look at the official paper - especially row 10 of Table 2.
### BibTeX entry and citation info
|
[
"# S2T2-Wav2Vec2-CoVoST2-EN-TR-ST\n\n's2t-wav2vec2-large-en-tr' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).\nThe S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in\nFairseq.",
"## Model description\n\nS2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Turkish text translation.\nSee the model hub to look for other S2T2 checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\nYou can use the model directly via the ASR pipeline\n\n\n\nor step-by-step as follows:",
"## Evaluation results\n\nCoVoST-V2 test results for en-tr (BLEU score): 17.5\n\nFor more information, please have a look at the official paper - especially row 10 of Table 2.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #audio #speech-translation #speech2text2 #en #tr #dataset-covost2 #dataset-librispeech_asr #arxiv-2104.06678 #license-mit #endpoints_compatible #has_space #region-us \n",
"# S2T2-Wav2Vec2-CoVoST2-EN-TR-ST\n\n's2t-wav2vec2-large-en-tr' is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).\nThe S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in\nFairseq.",
"## Model description\n\nS2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech\nTranslation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.",
"## Intended uses & limitations\n\nThis model can be used for end-to-end English speech to Turkish text translation.\nSee the model hub to look for other S2T2 checkpoints.",
"### How to use\n\nAs this a standard sequence to sequence transformer model, you can use the 'generate' method to generate the\ntranscripts by passing the speech features to the model.\n\nYou can use the model directly via the ASR pipeline\n\n\n\nor step-by-step as follows:",
"## Evaluation results\n\nCoVoST-V2 test results for en-tr (BLEU score): 17.5\n\nFor more information, please have a look at the official paper - especially row 10 of Table 2.",
"### BibTeX entry and citation info"
] |
text-to-speech
|
fairseq
|
# tts_transformer-ar-cv7
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Arabic
- Single-speaker male voice
- Trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-ar-cv7",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "مرحبًا ، هذا اختبار تشغيل."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
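To keep the generated audio beyond the notebook session, the waveform can also be written to disk. A small sketch, assuming `wav` is a 1-D torch tensor as returned by `TTSHubInterface` and that `soundfile` is installed:

```python
import soundfile as sf

# write the generated waveform to a WAV file at the model's sample rate
sf.write("tts_output.wav", wav.cpu().numpy(), rate)
```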
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "ar", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["common_voice"], "task": "text-to-speech", "widget": [{"text": "\u0645\u0631\u062d\u0628\u064b\u0627 \u060c \u0647\u0630\u0627 \u0627\u062e\u062a\u0628\u0627\u0631 \u062a\u0634\u063a\u064a\u0644.", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-ar-cv7
| null |
[
"fairseq",
"audio",
"text-to-speech",
"ar",
"dataset:common_voice",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"ar"
] |
TAGS
#fairseq #audio #text-to-speech #ar #dataset-common_voice #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-ar-cv7
Transformer text-to-speech model from fairseq S^2 (paper/code):
- Arabic
- Single-speaker male voice
- Trained on Common Voice v7
## Usage
See also fairseq S^2 example.
|
[
"# tts_transformer-ar-cv7\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Arabic\n- Single-speaker male voice\n- Trained on Common Voice v7",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #ar #dataset-common_voice #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-ar-cv7\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Arabic\n- Single-speaker male voice\n- Trained on Common Voice v7",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
text-to-speech
|
fairseq
|
# tts_transformer-en-200_speaker-cv4
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- English
- 200 male/female voices (random speaker when using the widget)
- Trained on [Common Voice v4](https://commonvoice.mozilla.org/en/datasets)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-en-200_speaker-cv4",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "en", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech", "multi-speaker"], "datasets": ["common_voice"], "task": "text-to-speech", "widget": [{"text": "Hello, this is a test run.", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-en-200_speaker-cv4
| null |
[
"fairseq",
"audio",
"text-to-speech",
"multi-speaker",
"en",
"dataset:common_voice",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"en"
] |
TAGS
#fairseq #audio #text-to-speech #multi-speaker #en #dataset-common_voice #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-en-200_speaker-cv4
Transformer text-to-speech model from fairseq S^2 (paper/code):
- English
- 200 male/female voices (random speaker when using the widget)
- Trained on Common Voice v4
## Usage
See also fairseq S^2 example.
|
[
"# tts_transformer-en-200_speaker-cv4\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- English\n- 200 male/female voices (random speaker when using the widget)\n- Trained on Common Voice v4",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #multi-speaker #en #dataset-common_voice #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-en-200_speaker-cv4\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- English\n- 200 male/female voices (random speaker when using the widget)\n- Trained on Common Voice v4",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
text-to-speech
|
fairseq
|
# tts_transformer-en-ljspeech
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- English
- Single-speaker female voice
- Trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-en-ljspeech",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "en", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["ljspeech"], "task": "text-to-speech", "widget": [{"text": "Hello, this is a test run.", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-en-ljspeech
| null |
[
"fairseq",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"en"
] |
TAGS
#fairseq #audio #text-to-speech #en #dataset-ljspeech #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-en-ljspeech
Transformer text-to-speech model from fairseq S^2 (paper/code):
- English
- Single-speaker female voice
- Trained on LJSpeech
## Usage
See also fairseq S^2 example.
|
[
"# tts_transformer-en-ljspeech\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- English\n- Single-speaker female voice\n- Trained on LJSpeech",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #en #dataset-ljspeech #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-en-ljspeech\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- English\n- Single-speaker female voice\n- Trained on LJSpeech",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
text-to-speech
|
fairseq
|
# tts_transformer-es-css10
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Spanish
- Single-speaker male voice
- Trained on [CSS10](https://github.com/Kyubyong/css10)
## Usage
Dependencies
```sh
pip install fairseq sentencepiece
```
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-es-css10",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "Hola, esta es una prueba."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "es", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["css10"], "task": "text-to-speech", "widget": [{"text": "Hola, esta es una prueba.", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-es-css10
| null |
[
"fairseq",
"audio",
"text-to-speech",
"es",
"dataset:css10",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"es"
] |
TAGS
#fairseq #audio #text-to-speech #es #dataset-css10 #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-es-css10
Transformer text-to-speech model from fairseq S^2 (paper/code):
- Spanish
- Single-speaker male voice
- Trained on CSS10
## Usage
Dependencies
See also fairseq S^2 example.
|
[
"# tts_transformer-es-css10\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Spanish\n- Single-speaker male voice\n- Trained on CSS10",
"## Usage\nDependencies\n\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #es #dataset-css10 #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-es-css10\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Spanish\n- Single-speaker male voice\n- Trained on CSS10",
"## Usage\nDependencies\n\n\n\n\nSee also fairseq S^2 example."
] |
text-to-speech
|
fairseq
|
# tts_transformer-fr-cv7_css10
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- French
- Single-speaker male voice
- Pre-trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets), fine-tuned on [CSS10](https://github.com/Kyubyong/css10)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-fr-cv7_css10",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "Bonjour, ceci est un test."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "fr", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["common_voice", "css10"], "task": "text-to-speech", "widget": [{"text": "Bonjour, ceci est un test.", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-fr-cv7_css10
| null |
[
"fairseq",
"audio",
"text-to-speech",
"fr",
"dataset:common_voice",
"dataset:css10",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"fr"
] |
TAGS
#fairseq #audio #text-to-speech #fr #dataset-common_voice #dataset-css10 #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-fr-cv7_css10
Transformer text-to-speech model from fairseq S^2 (paper/code):
- French
- Single-speaker male voice
- Pre-trained on Common Voice v7, fine-tuned on CSS10
## Usage
See also fairseq S^2 example.
|
[
"# tts_transformer-fr-cv7_css10\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- French\n- Single-speaker male voice\n- Pre-trained on Common Voice v7, fine-tuned on CSS10",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #fr #dataset-common_voice #dataset-css10 #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-fr-cv7_css10\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- French\n- Single-speaker male voice\n- Pre-trained on Common Voice v7, fine-tuned on CSS10",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
text-to-speech
|
fairseq
|
# tts_transformer-ru-cv7_css10
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Russian
- Single-speaker male voice
- Pre-trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets), fine-tuned on [CSS10](https://github.com/Kyubyong/css10)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-ru-cv7_css10",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "Здравствуйте, это пробный запуск."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "ru", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["common_voice", "css10"], "task": "text-to-speech", "widget": [{"text": "\u0417\u0434\u0440\u0430\u0432\u0441\u0442\u0432\u0443\u0439\u0442\u0435, \u044d\u0442\u043e \u043f\u0440\u043e\u0431\u043d\u044b\u0439 \u0437\u0430\u043f\u0443\u0441\u043a.", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-ru-cv7_css10
| null |
[
"fairseq",
"audio",
"text-to-speech",
"ru",
"dataset:common_voice",
"dataset:css10",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"ru"
] |
TAGS
#fairseq #audio #text-to-speech #ru #dataset-common_voice #dataset-css10 #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-ru-cv7_css10
Transformer text-to-speech model from fairseq S^2 (paper/code):
- Russian
- Single-speaker male voice
- Pre-trained on Common Voice v7, fine-tuned on CSS10
## Usage
See also fairseq S^2 example.
|
[
"# tts_transformer-ru-cv7_css10\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Russian\n- Single-speaker male voice\n- Pre-trained on Common Voice v7, fine-tuned on CSS10",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #ru #dataset-common_voice #dataset-css10 #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-ru-cv7_css10\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Russian\n- Single-speaker male voice\n- Pre-trained on Common Voice v7, fine-tuned on CSS10",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
text-to-speech
|
fairseq
|
# tts_transformer-tr-cv7
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Turkish
- Single-speaker male voice
- Trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-tr-cv7",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "Merhaba, bu bir deneme çalışmasıdır."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "tr", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["common_voice"], "task": "text-to-speech", "widget": [{"text": "Merhaba, bu bir deneme \u00e7al\u0131\u015fmas\u0131d\u0131r.", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-tr-cv7
| null |
[
"fairseq",
"audio",
"text-to-speech",
"tr",
"dataset:common_voice",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"tr"
] |
TAGS
#fairseq #audio #text-to-speech #tr #dataset-common_voice #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-tr-cv7
Transformer text-to-speech model from fairseq S^2 (paper/code):
- Turkish
- Single-speaker male voice
- Trained on Common Voice v7
## Usage
See also fairseq S^2 example.
|
[
"# tts_transformer-tr-cv7\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Turkish\n- Single-speaker male voice\n- Trained on Common Voice v7",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #tr #dataset-common_voice #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-tr-cv7\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Turkish\n- Single-speaker male voice\n- Trained on Common Voice v7",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
text-to-speech
|
fairseq
|
# tts_transformer-vi-cv7
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Vietnamese
- Single-speaker male voice
- Trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-vi-cv7",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "Xin chào, đây là một cuộc chạy thử nghiệm."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "vi", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["common_voice"], "task": "text-to-speech", "widget": [{"text": "Xin ch\u00e0o, \u0111\u00e2y l\u00e0 m\u1ed9t cu\u1ed9c ch\u1ea1y th\u1eed nghi\u1ec7m.", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-vi-cv7
| null |
[
"fairseq",
"audio",
"text-to-speech",
"vi",
"dataset:common_voice",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"vi"
] |
TAGS
#fairseq #audio #text-to-speech #vi #dataset-common_voice #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-vi-cv7
Transformer text-to-speech model from fairseq S^2 (paper/code):
- Vietnamese
- Single-speaker male voice
- Trained on Common Voice v7
## Usage
See also fairseq S^2 example.
|
[
"# tts_transformer-vi-cv7\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Vietnamese\n- Single-speaker male voice\n- Trained on Common Voice v7",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #vi #dataset-common_voice #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-vi-cv7\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Vietnamese\n- Single-speaker male voice\n- Trained on Common Voice v7",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
text-to-speech
|
fairseq
|
# tts_transformer-zh-cv7_css10
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Simplified Chinese
- Single-speaker female voice
- Pre-trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets), fine-tuned on [CSS10](https://github.com/Kyubyong/css10)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-zh-cv7_css10",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)
text = "您好,这是试运行。"
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
{"language": "zh", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["common_voice", "css10"], "task": "text-to-speech", "widget": [{"text": "\u60a8\u597d\uff0c\u8fd9\u662f\u8bd5\u8fd0\u884c\u3002", "example_title": "Hello, this is a test run."}]}
|
facebook/tts_transformer-zh-cv7_css10
| null |
[
"fairseq",
"audio",
"text-to-speech",
"zh",
"dataset:common_voice",
"dataset:css10",
"arxiv:1809.08895",
"arxiv:2109.06912",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"zh"
] |
TAGS
#fairseq #audio #text-to-speech #zh #dataset-common_voice #dataset-css10 #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us
|
# tts_transformer-zh-cv7_css10
Transformer text-to-speech model from fairseq S^2 (paper/code):
- Simplified Chinese
- Single-speaker female voice
- Pre-trained on Common Voice v7, fine-tuned on CSS10
## Usage
See also fairseq S^2 example.
|
[
"# tts_transformer-zh-cv7_css10\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Simplified Chinese\n- Single-speaker female voice\n- Pre-trained on Common Voice v7, fine-tuned on CSS10",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
[
"TAGS\n#fairseq #audio #text-to-speech #zh #dataset-common_voice #dataset-css10 #arxiv-1809.08895 #arxiv-2109.06912 #has_space #region-us \n",
"# tts_transformer-zh-cv7_css10\n\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Simplified Chinese\n- Single-speaker female voice\n- Pre-trained on Common Voice v7, fine-tuned on CSS10",
"## Usage\n\n\n\nSee also fairseq S^2 example."
] |
null |
transformers
|
# Vision Transformer (base-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
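A rough sketch of that last point uses the encoder as a frozen feature extractor; it assumes that passing `mask_ratio=0.0` to `from_pretrained` is forwarded to the config so that no patches are dropped at inference time:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
encoder = ViTMAEModel.from_pretrained("facebook/vit-mae-base", mask_ratio=0.0)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, 1 + num_patches, 768)

# mean-pool the patch tokens (index 0 is the [CLS] token) to feed a linear classifier
features = hidden[:, 1:, :].mean(dim=1)
```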
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, ViTMAEForPreTraining
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/vit-mae-base')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-base')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
```
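Continuing from the snippet above, the per-patch predictions in `outputs.logits` can be folded back into an image-shaped tensor with the model's `unpatchify` helper; note that when `norm_pix_loss` is enabled the values live in a per-patch normalized space rather than raw pixel space:

```python
# fold the (batch, num_patches, patch_size**2 * 3) predictions back into pixel layout
reconstruction = model.unpatchify(outputs.logits)
print(reconstruction.shape)  # e.g. torch.Size([1, 3, 224, 224])
```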
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
author = {Kaiming He and
Xinlei Chen and
Saining Xie and
Yanghao Li and
Piotr Doll{\'{a}}r and
Ross B. Girshick},
title = {Masked Autoencoders Are Scalable Vision Learners},
journal = {CoRR},
volume = {abs/2111.06377},
year = {2021},
url = {https://arxiv.org/abs/2111.06377},
eprinttype = {arXiv},
eprint = {2111.06377},
timestamp = {Tue, 16 Nov 2021 12:12:31 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"license": "apache-2.0", "tags": ["vision"], "datasets": ["imagenet-1k"]}
|
facebook/vit-mae-base
| null |
[
"transformers",
"pytorch",
"tf",
"vit_mae",
"pretraining",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.06377",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2111.06377"
] |
[] |
TAGS
#transformers #pytorch #tf #vit_mae #pretraining #vision #dataset-imagenet-1k #arxiv-2111.06377 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Vision Transformer (base-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository.
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
### BibTeX entry and citation info
|
[
"# Vision Transformer (base-sized model) pre-trained with MAE\n\nVision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository. \n\nDisclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.\n\nDuring pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #vit_mae #pretraining #vision #dataset-imagenet-1k #arxiv-2111.06377 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Vision Transformer (base-sized model) pre-trained with MAE\n\nVision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository. \n\nDisclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.\n\nDuring pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:",
"### BibTeX entry and citation info"
] |
null |
transformers
|
# Vision Transformer (huge-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, ViTMAEForPreTraining
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/vit-mae-huge')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-huge')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
author = {Kaiming He and
Xinlei Chen and
Saining Xie and
Yanghao Li and
Piotr Doll{\'{a}}r and
Ross B. Girshick},
title = {Masked Autoencoders Are Scalable Vision Learners},
journal = {CoRR},
volume = {abs/2111.06377},
year = {2021},
url = {https://arxiv.org/abs/2111.06377},
eprinttype = {arXiv},
eprint = {2111.06377},
timestamp = {Tue, 16 Nov 2021 12:12:31 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"license": "apache-2.0", "tags": ["vision"], "datasets": ["imagenet-1k"]}
|
facebook/vit-mae-huge
| null |
[
"transformers",
"pytorch",
"tf",
"vit_mae",
"pretraining",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.06377",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2111.06377"
] |
[] |
TAGS
#transformers #pytorch #tf #vit_mae #pretraining #vision #dataset-imagenet-1k #arxiv-2111.06377 #license-apache-2.0 #endpoints_compatible #region-us
|
# Vision Transformer (huge-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository.
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
### BibTeX entry and citation info
|
[
"# Vision Transformer (huge-sized model) pre-trained with MAE\n\nVision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository. \n\nDisclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.\n\nDuring pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #vit_mae #pretraining #vision #dataset-imagenet-1k #arxiv-2111.06377 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Vision Transformer (huge-sized model) pre-trained with MAE\n\nVision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository. \n\nDisclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.\n\nDuring pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:",
"### BibTeX entry and citation info"
] |
null |
transformers
|
# Vision Transformer (large-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, ViTMAEForPreTraining
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/vit-mae-large')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-large')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
```
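The following self-contained sketch (not taken from the original release) mirrors the setup above and folds the per-patch `logits` back into an image tensor to inspect the reconstruction. The patch layout is read from the config, and note that the checkpoint was trained with per-patch normalized pixel targets, so the result is only a rough visualization:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/vit-mae-large')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-large')

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits hold one flattened reconstructed patch per position:
# (batch, num_patches, patch_size * patch_size * 3)
logits = outputs.logits
p = model.config.patch_size              # 16 for this checkpoint
h = w = int(logits.shape[1] ** 0.5)      # 14 x 14 patches for 224 x 224 inputs

# manual "unpatchify": fold the flattened patches back into an image tensor
pixels = logits.reshape(1, h, w, p, p, 3)
pixels = torch.einsum("nhwpqc->nchpwq", pixels)
reconstruction = pixels.reshape(1, 3, h * p, w * p)

# mask is 1 for removed patches and 0 for kept ones, so roughly 75% should be masked
print("masked fraction:", outputs.mask.float().mean().item())
print("reconstruction shape:", tuple(reconstruction.shape))  # (1, 3, 224, 224)
```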
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
author = {Kaiming He and
Xinlei Chen and
Saining Xie and
Yanghao Li and
Piotr Doll{\'{a}}r and
Ross B. Girshick},
title = {Masked Autoencoders Are Scalable Vision Learners},
journal = {CoRR},
volume = {abs/2111.06377},
year = {2021},
url = {https://arxiv.org/abs/2111.06377},
eprinttype = {arXiv},
eprint = {2111.06377},
timestamp = {Tue, 16 Nov 2021 12:12:31 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"license": "apache-2.0", "tags": ["vision"], "datasets": ["imagenet-1k"]}
|
facebook/vit-mae-large
| null |
[
"transformers",
"pytorch",
"tf",
"vit_mae",
"pretraining",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.06377",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2111.06377"
] |
[] |
TAGS
#transformers #pytorch #tf #vit_mae #pretraining #vision #dataset-imagenet-1k #arxiv-2111.06377 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Vision Transformer (large-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository.
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
### BibTeX entry and citation info
|
[
"# Vision Transformer (large-sized model) pre-trained with MAE\n\nVision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository. \n\nDisclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.\n\nDuring pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #vit_mae #pretraining #vision #dataset-imagenet-1k #arxiv-2111.06377 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Vision Transformer (large-sized model) pre-trained with MAE\n\nVision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in this repository. \n\nDisclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.\n\nDuring pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model:",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-100h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model pretrained and fine-tuned on 100 hours of Librispeech, using 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
 from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
 from datasets import load_dataset
 import torch
 
 # load model and processor
 processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
 model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")
 
 # load dummy dataset (audio is already decoded and sampled at 16kHz)
 ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
 
 # extract input values (feature extraction, not tokenization)
 input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1
 
 # retrieve logits
 logits = model(input_values).logits
 
 # take argmax and decode
 predicted_ids = torch.argmax(logits, dim=-1)
 transcription = processor.batch_decode(predicted_ids)
```
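The dummy LibriSpeech clips above are already stored at 16kHz. Your own recordings usually have to be resampled first; the following is a small sketch (not from the original card) that assumes `torchaudio` is installed, a mono input, and a placeholder file name:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")

# "my_recording.wav" is a placeholder for your own (mono) audio file
speech, sample_rate = torchaudio.load("my_recording.wav")

# resample to the 16kHz expected by the model, if necessary
if sample_rate != 16_000:
    speech = torchaudio.functional.resample(speech, orig_freq=sample_rate, new_freq=16_000)

input_values = processor(speech.squeeze(0), sampling_rate=16_000, return_tensors="pt").input_values
logits = model(input_values).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))
print(transcription)
```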
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data.
```python
 from datasets import load_dataset
 from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
 import torch
 from jiwer import wer
 
 # change "clean" to "other" to evaluate on the "other" test set
 librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
 
 model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").to("cuda")
 processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
 
 def map_to_pred(batch):
     input_values = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
     with torch.no_grad():
         logits = model(input_values.to("cuda")).logits
 
     predicted_ids = torch.argmax(logits, dim=-1)
     batch["transcription"] = processor.batch_decode(predicted_ids)[0]
     return batch
 
 result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
 
 print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 6.1 | 13.5 |
|
{"language": "en", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition"], "datasets": ["librispeech_asr"]}
|
facebook/wav2vec2-base-100h
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.11477"
] |
[
"en"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #en #dataset-librispeech_asr #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Wav2Vec2-Base-100h
==================
Facebook's Wav2Vec2
The base model pretrained and fine-tuned on 100 hours of Librispeech, using 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
Paper
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
Abstract
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under URL
Usage
=====
To transcribe audio files the model can be used as a standalone acoustic model as follows:
Evaluation
----------
This code snippet shows how to evaluate facebook/wav2vec2-base-100h on LibriSpeech's "clean" and "other" test data.
*Result (WER)*:
|
[] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #en #dataset-librispeech_asr #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 100k unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
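Until such a fine-tuned head exists, the pretrained checkpoint can still be used as a speech feature extractor. The sketch below is an illustration rather than part of the official card; it assumes a preprocessor config is available for this checkpoint (otherwise the standard `facebook/wav2vec2-base` feature extractor settings can be used) and borrows a 16kHz dummy dataset purely as example input:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# no tokenizer exists for this checkpoint, so only the feature extractor is loaded
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-100k-voxpopuli")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-100k-voxpopuli")

# any 16kHz speech works here; the LibriSpeech dummy split is just convenient
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```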
|
{"language": "multilingual", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-100k-voxpopuli
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"multilingual",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #multilingual #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli
Facebook's Wav2Vec2 base model pretrained on the 100k unlabeled subset of VoxPopuli corpus.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Fine-Tuning
Please refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '"facebook/wav2vec2-large-xlsr-53"' with this checkpoint for fine-tuning.
|
[
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the 100k unlabeled subset of VoxPopuli corpus.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #multilingual #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the 100k unlabeled subset of VoxPopuli corpus.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in cs (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-cs")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-cs")
# load dataset
ds = load_dataset("common_voice", "cs", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
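As an alternative to the manual pre- and post-processing above, recent `transformers` versions should also let you wrap this checkpoint in the high-level ASR pipeline, which takes care of resampling and CTC decoding. A minimal sketch with a placeholder file path:

```python
from transformers import pipeline

# the pipeline handles resampling and CTC decoding internally
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-10k-voxpopuli-ft-cs")

# accepts a path/URL to an audio file or a raw 16kHz numpy array
print(asr("path/to/some_czech_recording.wav"))  # placeholder path
```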
|
{"language": "cs", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-cs
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"cs",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"cs"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #cs #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in cs (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in cs (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #cs #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in cs (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in de (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-de")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-de")
# load dataset
ds = load_dataset("common_voice", "de", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
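Instead of resampling manually with `torchaudio`, the `datasets` library can also cast the audio column to 16kHz on the fly. The following sketch is an equivalent variant of the snippet above, not part of the original card:

```python
import torch
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-de")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-de")

ds = load_dataset("common_voice", "de", split="validation[:1%]")
# decode and resample the audio column to 16kHz whenever it is accessed
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

inputs = processor([x["array"] for x in ds[:5]["audio"]], sampling_rate=16_000,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```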
|
{"language": "de", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-de
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"de",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"de"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #de #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in de (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in de (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #de #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in de (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in en (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-en")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-en")
# load dataset
ds = load_dataset("common_voice", "en", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-en
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"en",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"en"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #en #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in en (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in en (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #en #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in en (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in es (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-es")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-es")
# load dataset
ds = load_dataset("common_voice", "es", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "es", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-es
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"es",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"es"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #es #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in es (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in es (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #es #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in es (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
| null |
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in et (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-et")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-et")
# load dataset
ds = load_dataset("common_voice", "et", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "et", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-et
| null |
[
"audio",
"automatic-speech-recognition",
"voxpopuli",
"et",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"et"
] |
TAGS
#audio #automatic-speech-recognition #voxpopuli #et #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in et (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in et (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#audio #automatic-speech-recognition #voxpopuli #et #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in et (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in fi (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-fi")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-fi")
# load dataset
ds = load_dataset("common_voice", "fi", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "fi", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-fi
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"fi",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"fi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #fi #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in fi (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in fi (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #fi #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in fi (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in fr (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-fr")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-fr")
# load dataset
ds = load_dataset("common_voice", "fr", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "fr", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-fr
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"fr",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #fr #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in fr (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in fr (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #fr #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in fr (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in hr (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-hr")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-hr")
# load dataset
ds = load_dataset("common_voice", "hr", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "hr", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-hr
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"hr",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"hr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #hr #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in hr (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in hr (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #hr #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in hr (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in hu (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-hu")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-hu")
# load dataset
ds = load_dataset("common_voice", "hu", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "hu", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-hu
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"hu",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"hu"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #hu #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in hu (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in hu (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #hu #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in hu (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in it (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-it")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-it")
# load dataset
ds = load_dataset("common_voice", "it", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "it", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-it
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"it",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"it"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #it #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in it (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in it (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #it #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in it (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
| null |
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in lt (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-lt")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-lt")
# load dataset
ds = load_dataset("common_voice", "lt", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "lt", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-lt
| null |
[
"audio",
"automatic-speech-recognition",
"voxpopuli",
"lt",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"lt"
] |
TAGS
#audio #automatic-speech-recognition #voxpopuli #lt #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in lt (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in lt (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#audio #automatic-speech-recognition #voxpopuli #lt #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in lt (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in nl (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-nl")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-nl")
# load dataset
ds = load_dataset("common_voice", "nl", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "nl", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-nl
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"nl",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"nl"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #nl #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in nl (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in nl (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #nl #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in nl (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in pl (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-pl")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-pl")
# load dataset
ds = load_dataset("common_voice", "pl", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "pl", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-pl
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"pl",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"pl"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #pl #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in pl (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in pl (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #pl #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in pl (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in ro (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-ro")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-ro")
# load dataset
ds = load_dataset("common_voice", "ro", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "ro", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-ro
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"ro",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"ro"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #ro #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in ro (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in ro (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #ro #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in ro (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in sk (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-sk")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-sk")
# load dataset
ds = load_dataset("common_voice", "sk", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "sk", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-sk
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"sk",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"sk"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #sk #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in sk (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in sk (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #sk #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in sk (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in sl (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
The following shows how the model can be used for inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets):
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-sl")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-sl")
# load dataset
ds = load_dataset("common_voice", "sl", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
{"language": "sl", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli-ft-sl
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"voxpopuli",
"sl",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"sl"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #sl #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli-Finetuned
Facebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in sl (refer to Table 1 of paper for more information).
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the Common Voice dataset
|
[
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in sl (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #voxpopuli #sl #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli-Finetuned\n\nFacebook's Wav2Vec2 base model pretrained on the 10K unlabeled subset of VoxPopuli corpus and fine-tuned on the transcribed data in sl (refer to Table 1 of paper for more information).\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Usage for inference\n\nIn the following it is shown how the model can be used in inference on a sample of the Common Voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10k unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
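For orientation only, a minimal sketch of that substitution is shown below; the `vocab.json` vocabulary file and the configuration values are assumptions carried over from the blog (built from your own labeled data), not part of this checkpoint:
```python
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

# tokenizer built from your own labeled data ("vocab.json" is an assumed file, see the blog)
tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")

# return_attention_mask=False follows the base-model setting used in the blog
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=False
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# load this checkpoint instead of "facebook/wav2vec2-large-xlsr-53"
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base-10k-voxpopuli",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()  # commonly frozen during fine-tuning, as in the blog
```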
|
{"language": "multilingual", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-10k-voxpopuli
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"multilingual",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #multilingual #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli
Facebook's Wav2Vec2 base model pretrained on the 10k unlabeled subset of VoxPopuli corpus.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Fine-Tuning
Please refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '"facebook/wav2vec2-large-xlsr-53"' with this checkpoint for fine-tuning.
|
[
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the 10k unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #multilingual #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the 10k unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-960h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained and fine-tuned on 960 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
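The dummy dataset above is already sampled at 16kHz. If your own recordings use a different sampling rate, a minimal sketch of resampling them first (the file name `audio.wav` is only a placeholder) could look like this:
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# "audio.wav" is a placeholder for your own file, possibly at a different sampling rate
speech, sample_rate = torchaudio.load("audio.wav")
if sample_rate != 16000:
    speech = torchaudio.transforms.Resample(sample_rate, 16000)(speech)

# use the first channel and transcribe
input_values = processor(speech[0].numpy(), sampling_rate=16000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```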
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
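The snippet above evaluates the "clean" split; the "other" number should be reproducible by swapping only the dataset line:
```python
librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
```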
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |
|
{"language": "en", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "wav2vec2-base-960h", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 3.4, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 8.6, "name": "Test WER"}]}]}]}
|
facebook/wav2vec2-base-960h
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.11477"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2006.11477 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
Wav2Vec2-Base-960h
==================
Facebook's Wav2Vec2
The base model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model
make sure that your speech input is also sampled at 16Khz.
Paper
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
Abstract
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under URL
Usage
=====
To transcribe audio files the model can be used as a standalone acoustic model as follows:
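A minimal transcription sketch following the standard `transformers` CTC usage pattern (the dummy LibriSpeech split used for illustration is an assumption, not part of the original snippet):

```python
# Sketch: transcribe audio with facebook/wav2vec2-base-960h as a standalone acoustic model.
# The dummy LibriSpeech dataset below is only an illustration of where the audio comes from.
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Load a small audio sample (already 16kHz).
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values

# Forward pass, then greedy CTC decoding.
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```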
Evaluation
----------
This code snippet shows how to evaluate facebook/wav2vec2-base-960h on LibriSpeech's "clean" and "other" test data.
*Result (WER)*:
|
[] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2006.11477 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **bg** on **17.6k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **bg**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
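Since the checkpoint ships no tokenizer, here is a minimal sketch (not from the original card) of using it as a feature extractor; the manually constructed `Wav2Vec2FeatureExtractor` and the silent placeholder audio are assumptions.

```python
# Sketch: extract hidden states from this tokenizer-free pretrained checkpoint.
# The hand-built feature extractor and the placeholder audio are assumptions.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-bg-voxpopuli-v2")

speech = torch.zeros(16000).numpy()  # placeholder: one second of 16kHz silence
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```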
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "bg", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-bg-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"bg",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"bg"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #bg #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in bg on 17.6k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in bg. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in bg on 17.6k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in bg. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #bg #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in bg on 17.6k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in bg. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **cs** on **18.7k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **cs**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
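Because the card requires 16kHz input, here is a minimal resampling sketch (not part of the original card; `torchaudio` and the local file name are assumptions):

```python
# Sketch: resample arbitrary-rate audio to the 16kHz this model expects.
# torchaudio and "example.wav" are assumptions, not part of the original card.
import torchaudio

waveform, source_rate = torchaudio.load("example.wav")
if source_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=source_rate, new_freq=16_000)
# `waveform` can now be fed to a Wav2Vec2 feature extractor at sampling_rate=16000.
```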
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "cs", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-cs-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"cs",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"cs"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #cs #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in cs on 18.7k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in cs. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in cs on 18.7k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in cs. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #cs #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in cs on 18.7k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in cs. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **da** on **13.6k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **da**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "da", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-da-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"da",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"da"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #da #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in da on 13.6k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in da. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in da on 13.6k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in da. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #da #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in da on 13.6k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in da. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **de** on **23.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **de**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "de", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-de-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"de",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"de"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #de #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in de on 23.2k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in de. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in de on 23.2k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in de. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #de #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in de on 23.2k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in de. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **el** on **17.7k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **el**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "el", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-el-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"el",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"el"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #el #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in el on 17.7k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in el. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in el on 17.7k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in el. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #el #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in el on 17.7k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in el. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **en** on **24.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **en**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-en-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"en",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"en"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #en #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in en on 24.1k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in en. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in en on 24.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in en. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #en #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in en on 24.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in en. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **es** on **21.4k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **es**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "es", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-es-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"es",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"es"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #es #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in es on 21.4k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in es. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in es on 21.4k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in es. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #es #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in es on 21.4k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in es. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the es unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
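As a rough illustration of the checkpoint swap described above, a minimal loading sketch is shown below (the vocabulary size and CTC settings are assumptions; in practice they come from the tokenizer built as described in the blog post):

```python
# Sketch: load this checkpoint for CTC fine-tuning in place of
# "facebook/wav2vec2-large-xlsr-53". VOCAB_SIZE is a placeholder; it should
# match the character vocabulary built as described in the blog post.
from transformers import Wav2Vec2ForCTC

VOCAB_SIZE = 40  # hypothetical Spanish character-level vocab size

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base-es-voxpopuli",
    vocab_size=VOCAB_SIZE,
    ctc_loss_reduction="mean",
)
model.freeze_feature_extractor()  # common first step when fine-tuning wav2vec2
```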
|
{"language": "es", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-es-voxpopuli
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"es",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"es"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #es #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli
Facebook's Wav2Vec2 base model pretrained on the es unlabeled subset of the VoxPopuli corpus.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Fine-Tuning
Please refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '"facebook/wav2vec2-large-xlsr-53"' with this checkpoint for fine-tuning.
|
[
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the es unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #es #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the es unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **et** on **10.6k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **et**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "et", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-et-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"et",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"et"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #et #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in et on 10.6k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in et. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in et on 10.6k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in et. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #et #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in et on 10.6k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in et. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **fi** on **14.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **fi**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "fi", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-fi-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"fi",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"fi"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #fi #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in fi on 14.2k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in fi. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in fi on 14.2k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in fi. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #fi #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in fi on 14.2k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in fi. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **fr** on **22.8k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **fr**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "fr", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-fr-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"fr",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #fr #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in fr on 22.8k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in fr. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in fr on 22.8k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in fr. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #fr #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in fr on 22.8k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in fr. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the fr unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
{"language": "fr", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-fr-voxpopuli
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"fr",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #fr #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli
Facebook's Wav2Vec2 base model pretrained on the fr unlabeled subset of the VoxPopuli corpus.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Fine-Tuning
Please refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '"facebook/wav2vec2-large-xlsr-53"' with this checkpoint for fine-tuning.
|
[
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the fr unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #fr #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the fr unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **8.1k** hours of unlabeled **hr** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **hr**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
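Because the checkpoint ships without a tokenizer or CTC head, out of the box it can only turn 16 kHz audio into frame-level representations. A minimal, hedged sketch (the random tensor stands in for real Croatian speech):

```python
# Hedged sketch, not from the original card: extract frame-level hidden states.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

checkpoint = "facebook/wav2vec2-base-hr-voxpopuli-v2"
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
model = Wav2Vec2Model.from_pretrained(checkpoint)

speech = torch.randn(16_000).numpy()  # stand-in for one second of real 16 kHz speech
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # roughly (1, 49, 768) for one second of audio
```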
|
{"language": "hr", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-hr-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"hr",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"hr"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #hr #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 8.1k hours of unlabeled hr data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in hr. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in hr on 8.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in hr. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #hr #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in hr on 8.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in hr. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **17.7k** hours of unlabeled **hu** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **hu**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
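The 16 kHz requirement matters in practice: recordings captured at 44.1 kHz or 48 kHz should be resampled before being fed to the model. A small, hedged sketch using torchaudio (the file path is a placeholder):

```python
# Hedged sketch: resample an arbitrary recording to the 16 kHz this model expects.
import torchaudio
import torchaudio.functional as F

waveform, sample_rate = torchaudio.load("recording.wav")  # placeholder path, shape (channels, samples)
waveform = waveform.mean(dim=0)                           # mix down to mono
if sample_rate != 16_000:
    waveform = F.resample(waveform, orig_freq=sample_rate, new_freq=16_000)
```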
|
{"language": "hu", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-hu-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"hu",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"hu"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #hu #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 17.7k hours of unlabeled hu data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in hu. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in hu on 17.7k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in hu. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #hu #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in hu on 17.7k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in hu. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **21.9k** hours of unlabeled **it** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **it**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
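The "create a tokenizer" step amounts to writing a character-level vocabulary for your labeled Italian transcripts and wrapping it in a processor. A hedged sketch, where the alphabet is illustrative only (the blog post derives it from the actual training text):

```python
# Hedged sketch of the tokenizer-creation step; the character set is a placeholder.
import json
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor

chars = sorted(set("abcdefghijklmnopqrstuvwxyzàèéìòù "))
vocab = {ch: idx for idx, ch in enumerate(chars)}
vocab["|"] = vocab.pop(" ")   # CTC convention: "|" marks word boundaries
vocab["[UNK]"] = len(vocab)
vocab["[PAD]"] = len(vocab)
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```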
|
{"language": "it", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-it-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"it",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"it"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #it #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 21.9k hours of unlabeled it data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in it. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in it on 21.9k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in it. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #it #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in it on 21.9k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in it. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the it unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
{"language": "it", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-it-voxpopuli
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"it",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"it"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #it #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli
Facebook's Wav2Vec2 base model pretrained on the it unlabeled subset of VoxPopuli corpus.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Fine-Tuning
Please refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '"facebook/wav2vec2-large-xlsr-53"' with this checkpoint for fine-tuning.
|
[
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the it unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #it #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the it unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **14.4k** hours of unlabeled **lt** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **lt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
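For completeness, a hedged sketch of how variable-length 16 kHz clips are batched for a model like this one: the feature extractor zero-pads the shorter clip (random tensors stand in for real Lithuanian speech).

```python
# Hedged sketch: padding a small batch of 16 kHz clips of different lengths.
import torch
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
clips = [torch.randn(16_000).numpy(), torch.randn(24_000).numpy()]  # 1.0 s and 1.5 s of dummy audio
batch = feature_extractor(clips, sampling_rate=16000, padding=True, return_tensors="pt")
print(batch["input_values"].shape)  # (2, 24000): the shorter clip is zero-padded
```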
|
{"language": "lt", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-lt-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"lt",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"lt"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #lt #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 14.4k hours of unlabeled lt data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in lt. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in lt on 14.4k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in lt. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #lt #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in lt on 14.4k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in lt. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **13.1k** hours of unlabeled **lv** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **lv**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
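Fine-tuning also needs a collator that pads audio and transcripts separately. A simplified, hedged sketch in the spirit of the collator used in the fine-tuning blog post; `processor` is assumed to be a `Wav2Vec2Processor` built for your Latvian data, and each feature dict holds `"input_values"` and `"labels"`.

```python
# Hedged sketch of a CTC data collator (simplified; not the exact blog implementation).
def collate_for_ctc(features, processor):
    audio = [{"input_values": f["input_values"]} for f in features]
    labels = [{"input_ids": f["labels"]} for f in features]
    batch = processor.feature_extractor.pad(audio, padding=True, return_tensors="pt")
    labels_batch = processor.tokenizer.pad(labels, padding=True, return_tensors="pt")
    # replace padded label positions with -100 so the CTC loss ignores them
    batch["labels"] = labels_batch["input_ids"].masked_fill(
        labels_batch["attention_mask"].ne(1), -100
    )
    return batch
```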
|
{"language": "lv", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-lv-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"lv",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"lv"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #lv #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 13.1k hours of unlabeled lv data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in lv. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in lv on 13.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in lv. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #lv #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in lv on 13.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in lv. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **9.1k** hours of unlabeled **mt** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **mt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
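Progress on the labeled Maltese data is usually tracked with word error rate. A hedged sketch using the `evaluate` package (which needs `jiwer` installed; the strings below are placeholders, not real model output):

```python
# Hedged sketch: word error rate between reference transcripts and model predictions.
import evaluate

wer_metric = evaluate.load("wer")
references = ["il-karozza l-ħamra"]    # placeholder reference transcript
predictions = ["il-karozza l-hamra"]   # placeholder model prediction
print(wer_metric.compute(references=references, predictions=predictions))
```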
|
{"language": "mt", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-mt-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"mt",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"mt"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #mt #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 9.1k hours of unlabeled mt data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in mt. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in mt on 9.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in mt. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #mt #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in mt on 9.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in mt. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **19.0k** hours of unlabeled **nl** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **nl**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
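After fine-tuning (this pretrained-only checkpoint cannot transcribe by itself), transcription is a forward pass followed by a greedy argmax decode. A hedged sketch with a hypothetical local checkpoint saved together with its processor:

```python
# Hedged sketch: "./wav2vec2-base-nl-finetuned" is a hypothetical checkpoint produced
# by your own fine-tuning run; the random tensor stands in for real 16 kHz Dutch speech.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("./wav2vec2-base-nl-finetuned")
model = Wav2Vec2ForCTC.from_pretrained("./wav2vec2-base-nl-finetuned")

speech = torch.randn(16_000).numpy()
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```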
|
{"language": "nl", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-nl-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"nl",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"nl"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #nl #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 19.0k hours of unlabeled nl data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in nl. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in nl on 19.0k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in nl. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #nl #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in nl on 19.0k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in nl. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the nl unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
{"language": "nl", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli"]}
|
facebook/wav2vec2-base-nl-voxpopuli
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"nl",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"nl"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #nl #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-VoxPopuli
Facebook's Wav2Vec2 base model pretrained on the nl unlabeled subset of VoxPopuli corpus.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, here
# Fine-Tuning
Please refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '"facebook/wav2vec2-large-xlsr-53"' with this checkpoint for fine-tuning.
|
[
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the nl unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli #nl #arxiv-2101.00390 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-VoxPopuli\n\nFacebook's Wav2Vec2 base model pretrained on the nl unlabeled subset of VoxPopuli corpus.\n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*\n\nSee the official website for more information, here",
"# Fine-Tuning\n\nPlease refer to this blog on how to fine-tune this model on a specific language. Note that you should replace '\"facebook/wav2vec2-large-xlsr-53\"' with this checkpoint for fine-tuning."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **21.2k** hours of unlabeled **pl** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **pl**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
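The fine-tuning run itself is typically driven by the Trainer API. The configuration below is a hedged sketch with illustrative hyperparameters and a hypothetical output directory, not values validated for Polish; see the blog post for tuned settings.

```python
# Hedged sketch: illustrative training configuration only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-pl-voxpopuli-v2-finetuned",  # hypothetical output directory
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    learning_rate=3e-4,
    warmup_steps=500,
    num_train_epochs=30,
    save_steps=400,
    logging_steps=100,
)
```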
|
{"language": "pl", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-pl-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"pl",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"pl"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #pl #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 21.2k hours of unlabeled pl data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in pl. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in pl on 21.2k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in pl. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #pl #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in pl on 21.2k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in pl. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **17.5k** hours of unlabeled **pt** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **pt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
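As a rough rule of thumb when sizing batches and label sequences: the standard wav2vec2 base feature encoder strides over 16 kHz audio in steps of 320 samples, roughly one output frame every 20 ms. A back-of-the-envelope sketch under that assumption:

```python
# Hedged approximation (assumes the standard wav2vec2 base feature encoder:
# total stride 320 samples, receptive field 400 samples).
def approx_output_frames(num_samples: int, stride: int = 320, receptive_field: int = 400) -> int:
    return max(0, (num_samples - receptive_field) // stride + 1)

print(approx_output_frames(16_000))  # ~49 frames for one second of 16 kHz audio
```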
|
{"language": "pt", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-pt-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"pt",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"pt"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #pt #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 17.5k hours of unlabeled pt data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in pt. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in pt on 17.5k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in pt. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #pt #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in pt on 17.5k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in pt. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **17.9k** hours of unlabeled **ro** data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **ro**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
{"language": "ro", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-ro-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"ro",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"ro"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #ro #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only on 17.9k hours of unlabeled ro data from the VoxPopuli corpus.
The model is pretrained on 16kHz-sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in ro. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in ro on 17.9k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in ro. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #ro #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in ro on 17.9k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in ro. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **sk** on **12.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sk**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
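**Usage sketch**: Since this checkpoint ships without a tokenizer, it cannot transcribe speech out of the box, but it can already be used as a speech feature extractor. The snippet below is a minimal sketch using the Transformers library; it assumes the repository provides a preprocessor config and uses a placeholder waveform where your own 16kHz mono audio would go:

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "facebook/wav2vec2-base-sk-voxpopuli-v2"
# assumes a preprocessor config is available in the repository
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# placeholder waveform: 1 second of silence sampled at 16kHz (replace with real audio)
speech = np.zeros(16_000, dtype=np.float32)

inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```

For actual speech recognition, a CTC head (e.g. `Wav2Vec2ForCTC`) and a tokenizer still need to be added and fine-tuned on labeled Slovak data, as noted above.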
|
{"language": "sk", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-sk-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"sk",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"sk"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #sk #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in sk on 12.1k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sk. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in sk on 12.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sk. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #sk #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in sk on 12.1k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sk. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **sl** on **11.3k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sl**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
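**Usage sketch**: Since this checkpoint ships without a tokenizer, it cannot transcribe speech out of the box, but it can already be used as a speech feature extractor. The snippet below is a minimal sketch using the Transformers library; it assumes the repository provides a preprocessor config and uses a placeholder waveform where your own 16kHz mono audio would go:

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "facebook/wav2vec2-base-sl-voxpopuli-v2"
# assumes a preprocessor config is available in the repository
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# placeholder waveform: 1 second of silence sampled at 16kHz (replace with real audio)
speech = np.zeros(16_000, dtype=np.float32)

inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```

For actual speech recognition, a CTC head (e.g. `Wav2Vec2ForCTC`) and a tokenizer still need to be added and fine-tuned on labeled Slovenian data, as noted above.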
|
{"language": "sl", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-sl-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"sl",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"sl"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #sl #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in sl on 11.3k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sl. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in sl on 11.3k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sl. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #sl #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in sl on 11.3k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sl. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **sv** on **16.3k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sv**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
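**Usage sketch**: Since this checkpoint ships without a tokenizer, it cannot transcribe speech out of the box, but it can already be used as a speech feature extractor. The snippet below is a minimal sketch using the Transformers library; it assumes the repository provides a preprocessor config and uses a placeholder waveform where your own 16kHz mono audio would go:

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "facebook/wav2vec2-base-sv-voxpopuli-v2"
# assumes a preprocessor config is available in the repository
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# placeholder waveform: 1 second of silence sampled at 16kHz (replace with real audio)
speech = np.zeros(16_000, dtype=np.float32)

inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```

For actual speech recognition, a CTC head (e.g. `Wav2Vec2ForCTC`) and a tokenizer still need to be added and fine-tuned on labeled Swedish data, as noted above.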
|
{"language": "sv", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "voxpopuli-v2"], "datasets": ["voxpopuli"], "inference": false}
|
facebook/wav2vec2-base-sv-voxpopuli-v2
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"sv",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.00390"
] |
[
"sv"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #sv #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us
|
# Wav2Vec2-base-VoxPopuli-V2
Facebook's Wav2Vec2 base model pretrained only in sv on 16.3k hours of unlabeled data from the VoxPopuli corpus.
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sv. Check out this blog for a more detailed explanation of how to fine-tune the model.
Paper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation*
Authors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, here.
|
[
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in sv on 16.3k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sv. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxpopuli-v2 #sv #dataset-voxpopuli #arxiv-2101.00390 #license-cc-by-nc-4.0 #region-us \n",
"# Wav2Vec2-base-VoxPopuli-V2\n\nFacebook's Wav2Vec2 base model pretrained only in sv on 16.3k unlabeled datat of the VoxPopuli corpus.\n\nThe model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data in sv. Check out this blog for a more in-detail explanation of how to fine-tune the model. \n\nPaper: *VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation\nLearning, Semi-Supervised Learning and Interpretation*\n\nAuthors: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.\n\nSee the official website for more information, here."
] |